From Uncyclopedia, the content-free encyclopedia
Please note that as Proof has been shot dead, all information below has been rendered obsolete.
Methods of proof
There are several methods of proof which are commonly used:
Proof by Revenge
"2+2=5" "no it doesn't" "REVENGE!"
Proof by Adding a Constant
2 = 1 if we add a constant C such that 2 = 1 + C.
Multiplicative Identity Additive Identity
Multiply both expressions by zero, e.g.,
- 1 = 2
- 1 × 0 = 2 × 0
- 0 = 0
Since the final statement is true, so is the first.
See also Proof by Pedantry.
Proof by Altering (or Destroying) the Original Premise (or Evidence)
- A = 1 and B = 1 and A + B = 3
- [Long list of confusing statements]
- [Somewhere down the line, stating B = 2 and covering up the previous definition]
- [Long list of other statements]
- A + B = 3
Works best over long period of time.
Proof by Analogy
Draw a poor analogy. Say you have two cows. But one is a bull. After the proper gestation period, 1 + 1 = 3.
Proof by Anti-proof
If there is proof that has yet to be accounted for in your opponent's argument, then it is wholly discreditable and thus proof of your own concept. It also works if you claim to be unable to comprehend their proof. Example:
- I can't see how a flagellum can evolve by itself, therefore the theory of evolution is incorrect, therefore someone must have put them together, therefore convert now!
Note: This generally works equally well in both directions:
- I can't see how someone could have put a flagellum together, therefore the theory of Creation is incorrect, therefore it must have evolved by itself, therefore Let's Party!
Proof by August
Since August is such a good time of year, no one will disagree with a proof published then, and therefore it is true. Of course, the converse is also true, i.e., January is crap, and all the logic in the world will not prove your statement then.
Proof by Assumption
An offshoot of Proof by Induction, one may assume the result is true. Therefore, it is true.
Proof by Axiom
Assert an axiom A such that the proposition P you are trying to prove is true. Thus, any statement S contradicting P is false, so P is true. Q.E.D.
Proof by Belief
"I believe assertion A to hold, therefore it does. Q.E.D."
Proof by Bijection
This is a method of proof made famous by P. T. Johnstone. Start with a completely irrelevant fact. Construct a bijection from the irrelevant fact to the thing you are trying to prove. Talk about rings for a few minutes, but make sure you keep their meaning a secret. When the audience are all confused, write Q.E.D. and call it trivial. Example:
- To prove the Chinese Remainder Theorem, observe that if p divides q, we have a well-defined function. Z/qZ → Z/qZ is a bijection. Since f is a homomorphism of rings, φ(mn) = φ(m) × φ(n) whenever (n, m) = 1. Using IEP on the hyperfield, there is a unique integer x, modulo mn, satisfying x = a (mod m) and x = b (mod n). Thus, Q.E.D., and we can see it is trivial.
Proof by B.O.
This method is a fruitful attack on a wide range of problems: don't have a shower for several weeks and play lots of sports.
Proof by Calling the Other Guy an Idiot
"I used to respect his views, but by stating this opinion, he has now proven himself to be an idiot." Q.E.D.
Proof by Arbitration
Oftentimes in mathematics, it is useful to create arbitrary "Where in the hell did that come from?" type theorems which are designed to make the reader become so confused that the proof passes as sound reasoning.
Proof by Faulty Logic
Math professors and logicians sometimes rely on their own intuition to prove important mathematical theorems. The following is an especially important theorem which opened up the multi-disciplinary field of YouTube.
- Let k and l be the two infinities: namely, the negative infinity and the positive infinity. Then, there exists a real number c, such that k and l cease to exist. Such a c is zero. We conclude that the zero infinity exists and is in between the positive and negative infinities. This theorem opens up many important ideas. For example, primitive logic would dictate that the square root of infinity, r, is a number less than r.
"I proved, therefore I am proof." – Isaac Newton, 1678, American Idol.
Proof by Canada
Like other proofs, but replace Q.E.D. with Z.E.D. Best when submitted with a bowl of Kraft Dinner.
Proof by Cantona
Conduct the proof in a confident manner, convinced that what you are saying is correct, even though it is absolute bollocks – and try to involve seagulls in some way. Example:
- If sin x < x … for all x > 0 … and when … [pause to have a sip of water] … the fisherman … throws sardines off the back of the trawler … and x > 0 … then … you can expect the seagulls to follow … and so sin x = 0 for all x.
Proof by Cases
AN ARGUMENT MADE IN CAPITAL LETTERS IS CORRECT. THEREFORE, SIMPLY RESTATE THE PROPOSITION YOU ARE TRYING TO PROVE IN CAPITAL LETTERS, AND IT WILL BE CORRECT!!!!!1 (USE TYPOS AND EXCLAMATION MARKS FOR ESPECIALLY DIFFICULT PROOFS)
Proof by Chocolate
By writing what seems to be an extensive proof and then smearing chocolate to stain the most crucial parts, the reader will assume that the proof is correct so as not to appear to be a fool.
Proof by Complexity
Remember, something is not true because its proof has been verified; it is true as long as it has not been disproved. For this reason, the best strategy is to limit as much as possible the number of people with the needed competence to understand your proof.
Be sure to include very complex elements in your proof. Infinite numbers of dimensions, hypercomplex numbers, indeterminate forms, graphs, references to very old books/movies/bands that almost nobody knows, quantum physics, modal logic, and chess opening theory are to be included in the thesis. Make sentences in Latin, Ancient Greek, Sanskrit, Ithkuil, and invent languages.
Refer to the Cumbersome Notation to make it more complex.
Again, the goal: nobody must understand, and this way, nobody can disprove you.
Proof by (a Broad) Consensus
If enough people believe something to be true, then it must be so. For even more emphatic proof, one can use the similar Proof by a Broad Consensus.
Proof by Contradiction (reductio ad absurdum)
- Assume the opposite: "not p".
- Bla, bla, bla …
- … which leads to "not p" being false, which contradicts assumption (1). Whatever you say in (2) and (3), (4) will make p true.
Useful to support other proofs.
Proof by Coolness (ad coolidum)
Let C be the coolness function
- C(2 + 2 = 4) < C(2 + 2 = 5)
- Therefore, 2 + 2 = 5
Let ACB be A claims B.
- C(Y) > C(X)
- Therefore Q unless there is Z, C(Z) > C(Y) and (ZC¬Q)
Let H be the previous demonstration, N nothingness, and M me.
- C(M) < C(N)
- Therefore ¬H
and all this is false since nothingness is cooler.
Let J be the previous counter-argument and K be H ∨ J.
- Substitute K for H in J
- Therefore ¬K
- ¬K implies ¬J and ¬H
- ¬J implies H
- Therefore H and ¬H
- Therefore non-contradiction is false ad coolidum
- Therefore C(Aristotle) < C(M)
Proof by Cumbersome Notation
Best done with access to at least four alphabets and special symbols. Matrices, Tensors, Lie algebra and the Kronecker-Weyl Theorem are also well-suited.
Proof by Default
Proof by Definition
Define something as such that the problem falls into grade one math, e.g., "I am, therefore I am".
Proof by Delegation
"The general result is left as an exercise to the reader."
Proof by Dessert
The proof is in the pudding.
Philosophers consider this to be the tastiest possible proof.
Proof by Diagram
Reducing problems to diagrams with lots of arrows. Particularly common in category theory.
Proof by Disability
Proof conducted by the principle of not drawing attention to somebody's disability – like a speech impediment for oral proofs, or a severed tendon in the arm for written proofs.
Proof by Disgust
State two alternatives and explain how one is disgusting. The other is therefore obviously right and true.
- Do we come from God or from monkeys? Monkeys are disgusting. Ergo, God made Adam.
- Is euthanasia right or wrong? Dogs get euthanasia. Dogs smell and lick their butts. Ergo, euthanasia is wrong.
- Is cannibalism right or wrong? Eeew, blood! Ergo, cannibalism is wrong.
Proof by Dissent
If there is a consensus on a topic, and you disagree, then you are right because people are stupid. See global warming sceptics, creationists, tobacco companies, etc., for applications of this proof.
Proof by Distraction
Be sure to provide some distraction while you go on with your proof, e.g., some third-party announces, a fire alarm (a fake one would do, too) or the end of the universe. You could also exclaim, "Look! A distraction!", meanwhile pointing towards the nearest brick wall. Be sure to wipe the blackboard before the distraction is presumably over so you have the whole board for your final conclusion.
Don't be intimidated if the distraction takes longer than planned – simply head over to the next proof.
An example is given below.
- Look behind you!
- … and proves the existence of an answer for 2 + 2.
- Look! A three-headed monkey over there!
- … leaves 5 as the only result of 2 + 2.
- Therefore 2 + 2 = 5. Q.E.D.
This is related to the classic Proof by "Look, a naked woman!"
Proof by Elephant
Q: For all rectangles, prove diagonals are bisectors. A: None: there is an elephant in the way!
Proof by Engineer's Induction
- See also: Proof by Induction.
Suppose P(n) is a statement.
- Prove true for P(1).
- Prove true for P(2).
- Prove true for P(3).
- Therefore P(n) is true for all n. (A small counterexample sketch follows.)
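Why this is not a proof: a property can hold for every case you bother to check and still be false. A minimal sketch, using Euler's polynomial n² + n + 41 as the stock counterexample (nothing here is specific to the text above):

```python
def is_prime(m: int) -> bool:
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

# P(n): "n*n + n + 41 is prime". The first few cases check out...
print([is_prime(n * n + n + 41) for n in range(1, 4)])   # [True, True, True]

# ...but the statement is false: the first counterexample is n = 40,
# where 40*40 + 40 + 41 = 1681 = 41*41.
first_failure = next(n for n in range(1, 1000) if not is_prime(n * n + n + 41))
print(first_failure, 40 * 40 + 40 + 41, 41 * 41)          # 40 1681 1681
```

The first few dozen cases all pass, and the statement is still false at n = 40, which is exactly the gap engineer's induction papers over.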
Proof by Exhaustion
This method of proof requires all possible values of the expression to be evaluated and, due to the infinite length of the proof, can be used to prove almost anything, since the reader will either get bored whilst reading and skip to the conclusion or get hopelessly lost and thus convinced that the proof is concrete.
Proof by Eyeballing
Quantities that look similar are indeed the same. Often drawing random pictures will aid with this process.
Corollary: If it looks like a duck and acts like a duck, then it must be a duck.
Proof by Flutterby Effect
Proofs to the contrary that you can (and do) vigorously and emphatically ignore, and therefore don't know about, don't exist. Ergo, they can't and don't apply.
Corollary: If it looks like a duck, acts like a duck and quacks like a duck, but I didn't see it (and hey, did you know my Mom ruptured my eardrums), then it's maybe … an aardvark?
Proof by Gun
A special case of Proof by Intimidation: "I have a gun and you don't. I'm right, you're wrong. Repeat after me: Q.E.D."
Proof by Global Warming
If it doesn't contribute to Global Warming, it is null and void.
Proof by God
Also a special case of proof. "Don't question my religion, or you are supremely insensitive and God will smite you." Similar to Proof by Religion, but sanctioned by George W. Bush.
Proof by Hand Waving
- See main article: Hand waving.
Commonly used in calculus, hand waving dispenses with the pointless notion that a proof need be rigorous.
Proof by Hitler Analogy
The opposite of Proof by Wikipedia. If Hitler said,
- "I like cute kittens."
then – automatically – cute kittens are evil, and liking them proves that you caused everything that's wrong in the world for the last 50 years.
Simple Proof by Hubris
I exist, therefore I am correct.
Proof by Hypnosis
Try to relate your proof to simple harmonic motion in some way and then convince people to look at a swinging pendulum.
Proof by Imitation
Make a ridiculous imitation of your opponent in a debate. Arguments cannot be seriously considered when the one who proposes them was laughed at a moment before.
Make sure to use puppets and high-pitched voices, and also have the puppet repeat "I am a X", replacing X with any minority that the audience might disregard: gay, lawyer, atheist, creationist, zoophile, paedophile … the choice is yours!
Proof by Immediate Danger
Having a fluorescent green gas gently seep into the room through the air vents will probably be beneficial to your proof.
Proof by Impartiality
If you, Y, disagree with X on issue I, you can invariably prove yourself right by the following procedure:
- Get on TV with X.
- Open with an ad hominem attack on X and then follow up by saying that God hates X for X's position on I.
- When X attempts to talk, interrupt him very loudly, and turn down his microphone.
- Remind your audience that you are impartial where I is concerned, while X is an unwitting servant of Conspiracy Z, e.g., the Liberal Media, and that therefore X is wrong. Then also remind your audience that I is binary, and since your position on I is different from X's, it must be right.
- That sometimes fails to prove the result on the first attempt, but by repeatedly attacking figures X1, X2, …, Xn – and by proving furthermore (possibly using Proof by Engineer's Induction) that Xn is wrong implies Xn+1 is wrong, and by demonstrating that you cannot be an Xi because your stance on I differs due to a change in position i, demonstrating that while the set of Xi's is countable, the set containing you is uncountable by the diagonal argument, and from there one can apply Proof by Consensus, as your set is infinitely bigger – you can prove yourself right.
A noted master of the technique is Bill O'Reilly.
Proof by Induction
Proof by Induction claims that
where is the number of pages used to contain the proof and is the time required to prove something, relative to the trivial case.
For the common, but special case of generalising the proof,
where is the number of pages used to contain the proof, is the number of things which are being proved and is the time required to prove something, relative to the trivial case.
The actual method of constructing the proof is irrelevant.
Proof by Intimidation
One of the principal methods used to prove mathematical statements. Remember, even if your achievements have nothing to do with the topic, you're still right. Also, if you spell even slightly better, make less typos, or use better grammar, you've got even more proof. The exact statement of proof by intimidation is given below.
Suppose a mathematician F is at a position n in the following hierarchy:
- Fields Medal winner
- Tenured Professor
- Non-tenured professor
- Graduate Student
- Undergraduate Student
If a second mathematician G is at any position p such that p < n, then any statement S given to F by G is true.
Alternatively: Theorem 3.6. All zeros of the Riemann Zeta function lie on the critical line (have a real component of 1/2).
Proof: "… trivial …"
Proof by Irrelevant References
A proof that is backed up by citations that may or may not contain a proof of the assertion. This includes references to documents that don't exist. (Cf. Schott, Wiggenmeyer & Pratt, Annals of Veterinary Medicine and Modern Domestic Plumbing, vol. 164, Jul 1983.)
Proof by Jack Bauer
If Jack Bauer says something is true, then it is. No ifs, ands, or buts about it. End of discussion.
This is why, for example, torture is good.
Proof by Lecturer
It's true because my lecturer said it was true. QED.
Proof by Liar
If a liar says that he is a liar, he lies, because liars always lie, so he is not a liar.
Simple, ain't it?
Proof by Kim G. S. Øyhus' Inference
- and .
- Ergo, .
- Therefore I'm right and you're wrong.
Proof by LSD
Wow! That is sooo real, man!
Proof by Margin Too Small
"I have discovered a truly marvelous proof of this, which this margin is too narrow to contain."
Proof by Mathematical Interpretive Dance
Proof by Misunderstanding
"2 is equal to 3 for sufficiently large values of 2."
Proof by Mockery
Let the other party state his claim in detail, wait while he lists and explains all his arguments and then, at any moment, explode in laughter and ask, "No, are you serious? That must be a joke. You can't really think that, do you?" Then leave the debate in laughter and shout, "If you all want to listen to this parody of argument, I shan't prevent you!"
Proof by Narcotics Abuse
Spike the drinks/food of all people attending with mind-altering or hallucinogenic chemicals.
Proof by Obama
Yes, we can.
Proof by Obfuscation
A long, plotless sequence of true and/or meaningless syntactically related statements.
Proof by Omission
Make it easier on yourself by leaving it up to the reader. After all, if you can figure it out, surely they can. Examples:
- The reader may easily supply the details.
- The other 253 cases are analogous.
- The proof is left as an exercise for the reader.
- The proof is left as an exercise for the marker (guaranteed to work in an exam).
Proof by Ostention
- 2 + 2 = 5
Proof by Outside the Scope
All the non-trivial parts of the proof are left out, stating that proving them is outside the scope of the book.
Proof by Overwhelming Errors
A proof in which there are so many errors that the reader can't tell whether the conclusion is proved or not, and so is forced to accept the claims of the writer. Most elegant when the number of errors is even, thus leaving open the possibility that all the errors exactly cancel each other out.
Proof by Ødemarksism
- See also: Proof by Consensus.
- The majority thinks P.
- Therefore P is true (and dissenters should be silenced in order to reduce conflict from diversity).
The silencing of dissenters can be made easier with convincing arguments.
Proof by Penis Size
My dick's much bigger than yours, so I'm right.
Corollary: You don't have a penis, so I'm right.
Proof by Pornography
Include pornographic pictures or videos in the proof – preferably playing a porno flick exactly to the side of where you are conducting the proof. Works best if you pretend to be oblivious to the porn yourself and act as if nothing is unusual.
Proof by Process of Elimination
so 2 + 2 = 5
Proof by Promiscuity
I get laid much more than you, so I'm right.
Proof by Proving
Well proven is the proof that all proofs need not be unproven in order to be proven to be proofs. But where is the real proof of this? A proof, after all, cannot be a good proof until it has been proven. Right?
Proof by Question
If you are asking me to prove something, it must be true. So why bother asking?
Proof by Realization
A form of proof where something is proved by realizing that it is true. Therefore, the proof holds.
Proof by Reduction
Show that the theorem you are attempting to prove is equivalent to the trivial problem of not getting laid. Particularly useful in axiomatic set theory.
Proof by Reduction to the Wrong Problem
Why prove this theorem when you can show it's identical to some other, already proven problem? Plus a few additional steps, of course …
- Example: "To prove the four colour theorem, we reduce it to the halting problem."
Proof by Religion
Related to Proof by Belief, this method of attacking a problem involves the principle of mathematical freedom of expression by asserting that the proof is part of your religion, and then accusing all dissenters of religiously persecuting you, due to their stupidity of not accepting your obviously correct and logical proof. See also Proof by God.
Proof by Repetition
AKA the Socratic method.
If you say something is true enough times, then it is true. Repeatedly asserting something to be true makes it so. To repeat many times and at length the veracity of a given proposition adds to the general conviction that such a proposition might come to be truthful. Also, if you say something is true enough times, then it is true. Let n be the times any given proposition p was stated, preferably in different forms and ways, but not necessarily so. Then it comes to pass that the higher n comes to be, the more truth-content t it possesses. Recency bias and fear of ostracism will make people believe almost anything that is said enough times. If something has been said to be true again and again, it must definitely be true, beyond any shadow of doubt. The very fact that something is stated endlessly is enough for any reasonable person to believe it. And, finally, if you say something is true enough times, then it is true. Q.E.D.
Exactly how many times one needs to repeat the statement for it to be true is debated widely in academic circles. Generally, the point is reached when those around die of boredom.
- E.g., let A = B. Since A = B, and B = A, and A = B, and A = B, and A = B, and B = A, and A = B, and A = B, then A = B.
Proof by Restriction
If you prove your claim for one case, and make sure to restrict yourself to this one, you thus avoid any case that could compromise you. You can hope that people won't notice the omission.
Example: Prove the four-color theorem.
- Take a map of only one region. Only 1 color is needed to color it, and 1 ≤ 4. End of the proof.
If someone questions the completeness of the proof, other methods of proof can be used.
Proof by the Rovdistic Principle
- See also: Proof by Belief.
- I like to think that 2 + 2 = 5.
- Therefore, 2 + 2 = 5. Q.E.D.
Proof by Russian Reversal
In Soviet Russia, proof gives YOU!
Proof by Self-evidence
Claim something and tell how self-evident it is: you are right!
Proof by Semantics
Proof by semantics is simple to perform and best demonstrated by example. Using this method, I will prove the famous Riemann Hypothesis as follows:
We seek to prove that the Riemann function defined off of the critical line has no non-trivial zeroes. It is known that all non-trivial zeroes lie in the region with 0 < Re(z) < 1, so we need not concern ourselves with numbers with negative real parts. The Riemann zeta function is defined for Re(z) > 1 by the sum over k of 1/k^z, which can be written 1 + the sum over k from 2 of 1/k^z.
Consider the group (C, +). There is a trivial action theta from this group to itself by addition. Hence, by applying theta and using the fact that it is trivial, we can conclude that the sum of 1/k^z over k from 2 is the identity element 0. Hence, the Riemann zeta function for Re(z) > 1 is simply the constant function 1. This has an obvious analytic continuation to Re(z) > 0 minus the critical line, namely that zeta(z) = 1 for all z in the domain.
Hence, zeta(z) is not equal to zero anywhere with Re(z) > 0 and Re(z) not equal to 1/2. Q.E.D.
Observe how we used the power of the homonyms "trivial" meaning ease of proof and "trivial" as in "the trivial action" to produce a brief and elegant proof of a classical mathematical problem.
Proof by Semitics
If it happened to the Jews and has been confirmed by the state of Israel, then it must be true.
Proof by Staring
- x² − 1 = (x + 1)(x − 1)
This becomes obvious after you stare at it for a while and the symbols all blur together.
Proof by Substitution
One may substitute any arbitrary value for any variable to prove something. Example:
- Assume that 2 = P.
- Substitute 3 for P.
- Therefore, 2 = 3. Q.E.D.
Proof by Superior IQ
- See also: Proof by Intimidation.
If your IQ is greater than that of the other person in the argument, you are right and what you say is proven.
Proof by Surprise
The proof is accomplished by stating completely random and arbitrary facts that have nothing to do with the topic at hand, and then using these facts to mysteriously conclude the proof by appealing to the Axiom of Surprise. The best-known user of this style of proof is Walter Rudin in Principles of Mathematical Analysis. To quote an example:
Theorem: If p > 0 and α is real, then n^α/(1 + p)^n → 0 as n → ∞.
Proof: Let k be an integer such that k > α, k > 0. For n > 2k, (1 + p)^n > C(n, k) p^k > n^k p^k / (2^k k!). Hence, 0 < n^α/(1 + p)^n < (2^k k!/p^k) n^(α − k). Since α − k < 0, n^(α − k) → 0. Q.E.D.
Walter Rudin, Principles of Mathematical Analysis, 3rd Edition, p. 58, middle.
Proof by Tarantino
Proof by Tension
Try to up the tension in the room by throwing in phrases like "I found my wife cheating on me … with another woman", or "I wonder if anybody would care if I slit my wrists tomorrow". The more awkward the situation you can make, the better.
Proof by TeX
Proof by … Then a Miracle Happens
Similar to Proof by Hand Waving, but without the need to wave your hand.
Example: Prove that .
- … then a miracle happens.
- . Q.E.D.
Proof by Triviality
The Proof of this theorem/result is obvious, and hence left as an exercise for the reader.
Proof by Uncyclopedia
Uncyclopedia is the greatest storehouse of human knowledge that has ever existed. Therefore, citing any fact, quote or reference from Uncyclopedia will let your readers know that you are no intellectual lightweight. Because of Uncyclopedia's steadfast adherence to accuracy, any proof with an Uncyclopedia reference will defeat any and all detractors.
(Hint: In any proof, limit your use of Oscar Wilde quotes to a maximum of five.)
Proof by Volume
If you shout something really, really loud often enough, it will be accepted as true.
Also, if the proof takes up several volumes, then any reader will get bored and go do something more fun, like math.
Proof by War
My guns are much bigger than yours, therefore I'm right.
See also Proof by Penis Size.
Proof by Wolfram Alpha
If Wolfram Alpha says it is true, then it is true.
Proof by Wikipedia
If the Wikipedia website states that something is true, it must be true. Therefore, to use this proof method, simply edit Wikipedia so that it says whatever you are trying to prove is true, then cite Wikipedia for your proof.
Proof by Yoda
If stated the proof by Yoda is, then true it must be.
Proof by Your Mom
You don't believe me? Well, your mom believed me last night!
Proof by Actually Trying and Doing It the Honest W– *gunshot*
Let this be a lesson to you do-gooders.
Proof by Reading the Symbols Carefully
Proving the contrapositive theorem: Let (p→q), (¬q→¬p) be true statements. (p→q) if (and only if) (¬q→¬p).
The symbols → may also mean shooting and ¬ may also represent a gun. The symbols would then be read as this:
If statement p shoots statement q, then statement q possibly did not shoot statement p at all, because statement q is a n00b player for pointing the gun in the opposite direction of statement p.
Also, if statement q didn't shoot statement p in the right direction in time (due to n00biness), p would then shoot q.
Oh! I get it now. The power of symbol reading made the theorem make sense. Therefore, the theorem is true.
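For the record, the equivalence being parodied is real: (p → q) and (¬q → ¬p) have identical truth tables. A minimal brute-force check:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    return (not a) or b

for p, q in product([False, True], repeat=2):
    assert implies(p, q) == implies(not q, not p)   # identical in every row
    print(p, q, implies(p, q), implies(not q, not p))

print("p -> q and (not q) -> (not p) agree on all four truth assignments.")
```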
The Ultimate Proof
However, despite all of these methods of proof, there is only one way of ensuring not only that you are 100% correct, but 1000 million per cent correct, and that everyone, no matter how strong or how argumentative they may be, will invariably agree with you. That, my friends, is being a girl. "I'm a girl, so there" is the line that all men dread, and no reply has been discovered which doesn't result in a slap/dumping/strop being thrown/brick being thrown/death being caused. Guys, when approached by this form of proof, must destroy all evidence of it and hide all elements of its existence.
Some other terms one may come across when working with proofs:
A method of proof attempted at 3:00 A.M. the day a problem set is due, which generally seems to produce far better results at that time than when looked at in the light of day.
Q.E.D. stands for "Quebec's Electrical Distributor", commonly known as Hydro Quebec. It is commonly used to indicate where the author has given up on the proof and moved onto the next problem.
Can be substituted for the phrase "So there, you bastard!" when you need the extra bit of proof.
When handling or working with proofs, one should always wear protective gloves (preferably made of LaTeX).
The Burden of Proof
In recent years, proofs have gotten extremely heavy (see Proof by Volume, second entry). As a result, in some circles, the process of providing actual proof has been replaced by a practice known as the Burden of Proof. A piece of luggage of some kind is placed in a clear area, weighted down with lead weights approximating the hypothetical weight of the proof in question. The person who was asked to provide proof is then asked to lift this so-called "burden of proof". If he cannot, then he loses his balance and the burden of proof falls on him, which means that he has made the fatal mistake of daring to mention God on an Internet message board.
A filibuster is a type of parliamentary procedure where debate is extended, allowing one or more members to delay or entirely prevent a vote on a given proposal. It is sometimes referred to as talking out a bill, and characterized as a form of obstruction in a legislature or other decision-making body. The English term "filibuster" derives from the Spanish filibustero, itself deriving originally from the Dutch vrijbuiter, "privateer, pirate, robber" (also the root of English "freebooter"). The Spanish form entered the English language in the 1850s, as applied to military adventurers from the United States then operating in Central America and the Spanish West Indies such as William Walker.
The term in its legislative sense was first used by Democratic congressman Albert G. Brown of Mississippi in 1853, referring to Abraham Watkins Venable's speech against "filibustering" intervention in Cuba.
Ancient Rome
One of the first known practitioners of the filibuster was the Roman senator Cato the Younger. In debates over legislation he especially opposed, Cato would often obstruct the measure by speaking continuously until nightfall. As the Roman Senate had a rule requiring all business to conclude by dusk, Cato's purposefully long-winded speeches were an effective device to forestall a vote.
Cato attempted to use the filibuster at least twice to frustrate the political objectives of Julius Caesar. The first incident occurred during the summer of 60 BC, when Caesar was returning home from his propraetorship in Hispania Ulterior. Caesar, by virtue of his military victories over the raiders and bandits in Hispania, had been awarded a triumph by the Senate. Having recently turned 40, Caesar had also become eligible to stand for consul. This posed a dilemma. Roman generals honored with a triumph were not allowed to enter the city prior to the ceremony, but candidates for the consulship were required, by law, to appear in person at the Forum. The date of the election, which had already been set, made it impossible for Caesar to stand unless he crossed the pomerium and gave up the right to his triumph. Caesar petitioned the Senate to stand in absentia, but Cato employed a filibuster to block the proposal. Faced with a choice between a triumph and the consulship, Caesar chose the consulship and entered the city.
Cato made use of the filibuster again in 59 BC in response to a land reform bill sponsored by Caesar, who was then consul. When it was Cato's time to speak during the debate, he began one of his characteristically long-winded speeches. Caesar, who needed to pass the bill before his co-consul, Marcus Calpurnius Bibulus, took possession of the fasces at the end of the month, immediately recognized Cato's intent and ordered the lictors to jail him for the rest of the day. The move was unpopular with many senators and Caesar, realizing his mistake, soon ordered Cato's release. The day was wasted without the Senate ever getting to vote on a motion supporting the bill, but Caesar eventually circumvented Cato's opposition by taking the measure to the Tribal Assembly, where it passed.
Westminster-style parliaments
In the Parliament of the United Kingdom, a bill defeated by a filibustering manoeuvre may be said to have been "talked out". The procedures of the House of Commons require that members cover only points germane to the topic under consideration or the debate underway whilst speaking. Example filibusters in the Commons and Lords include:
- In 1874, Joseph Gillis Biggar started making long speeches in the House of Commons to delay the passage of Irish coercion acts. Charles Stewart Parnell, a young Irish nationalist Member of Parliament (MP), who in 1880 became leader of the Irish Parliamentary Party, joined him in this tactic to obstruct the business of the House and force the Liberals and Conservatives to negotiate with him and his party. The tactic was enormously successful, and Parnell and his MPs succeeded, for a time, in forcing Parliament to take the Irish question of return to self-government seriously.
- In 1983, Labour MP John Golding talked for over 11 hours during an all-night sitting at the committee stage of the British Telecommunications Bill. However, as this was at a standing committee and not in the Commons chamber, he was also able to take breaks to eat.
- On July 3, 1998, Labour MP Michael Foster's Wild Mammals (Hunting with Dogs) Bill was blocked in parliament by opposition filibustering.
- In January 2000, filibustering orchestrated by Conservative MPs to oppose the Disqualifications Bill led to cancellation of the day's parliamentary business on Prime Minister Tony Blair's 1000th day in office. However, since this business included Prime Minister's Question Time, the Conservative Leader at the time, William Hague, was deprived of the opportunity of a high-profile confrontation with the Prime Minister.
- On Friday 20 April 2007, a Private Member's Bill aimed at exempting Members of Parliament from the Freedom of Information Act was 'talked out' by a collection of MPs, led by Liberal Democrats Simon Hughes and Norman Baker who debated for 5 hours, therefore running out of time for the parliamentary day and 'sending the bill to the bottom of the stack.' However, since there were no other Private Member's Bills to debate, it was resurrected the following Monday.
- In January 2011, Labour peers were attempting to delay the passage of the Parliamentary Voting System and Constituencies Bill 2010 until after 16 February, the deadline given by the Electoral Commission to allow the referendum on the Alternative Vote to take place on 5 May. On the eighth day of debate, staff in the House of Lords set up camp beds and refreshments to allow peers to rest, for the first time in eight years.
- In January 2012, Conservative and Scottish National Party MPs used filibustering to successfully block the Daylight Savings Bill 2010-12, a Private Member's Bill that would put the UK on Central European Time. The filibustering included an attempt by Jacob Rees-Mogg to amend the bill to give the county of Somerset its own time zone, 15 minutes behind London.
The all-time Commons record for non-stop speaking, six hours, was set by Henry Brougham in 1828, though this was not a filibuster. The 21st century record was set on December 2, 2005 by Andrew Dismore, Labour MP for Hendon. Dismore spoke for three hours and 17 minutes to block a Conservative Private Member's Bill, the Criminal Law (Amendment) (Protection of Property) Bill, which he claimed amounted to "vigilante law." Although Dismore is credited with speaking for 197 minutes, he regularly accepted interventions from other MPs who wished to comment on points made in his speech. Taking multiple interventions artificially inflates the duration of a speech, and is seen by many as a tactic to prolong a speech.
New Zealand
In 2009, several parties in New Zealand staged a filibuster of the Local Government (Auckland Reorganisation) Bill in opposition to the government setting up a new Auckland Council under urgency and without debate or review by select committee, by proposing thousands of wrecking amendments and voting in Māori as each amendment had to be voted on and votes in Māori translated into English. Amendments included renaming the council to "Auckland Katchafire Council" or "Rodney Hide Memorial Council" and replacing the phrase powers of a regional council with power and muscle. These tactics were borrowed from the filibuster undertaken by National and ACT in August 2000 for the Employment Relations Bill.
India
In 2011, the People's Ombudsman Bill ("Lokpal Bill") was blocked (voting stalled) in the Rajya Sabha ("Council of States") by Rajniti Prasad (RJD member from Bihar). Prasad colluded with the ruling United Progressive Alliance (UPA), which didn't have a majority in the Council.
Canada - Federal
A dramatic example of filibustering in the House of Commons of Canada took place between Thursday June 23, 2011 and Saturday June 25, 2011. In an attempt to prevent the passing of Bill C-6, which would have legislated the imposing of a four year contract and pay conditions on the locked out Canada Post workers, the New Democratic Party (NDP) led a filibustering session which lasted for fifty-eight hours. The NDP argued that the legislation in its then form undermined collective bargaining. Specifically, the NDP opposed the salary provisions and the form of binding arbitration outlined in the bill.
The House was supposed to break for the summer Thursday June 23, but remained open in an extended session due to the filibuster. The 103 NDP MPs had been taking it in turn to deliver 20 minute speeches - plus 10 minutes of questions and comments - in order to delay the passing of the bill. MPs are allowed to give such speeches each time a vote takes place, and many votes were needed before the bill could be passed. As the Conservative Party of Canada holds a majority in the House, the bill passed. This was the longest filibuster since the 1999 Reform Party of Canada filibuster, on native treaty issues in British Columbia.
Conservative Member of Parliament Tom Lukiwski is known for his ability to stall Parliamentary Committee business by filibustering. One such example occurred October 26, 2006, when he spoke for almost 120 minutes to prevent the Canadian House of Commons Standing Committee on Environment and Sustainable Development from studying a private member's bill to implement the Kyoto Accord. He also spoke for about 6 hours during the February 5, 2008 and February 7, 2008 at the Canadian House of Commons Standing Committee on Procedure and House Affairs meetings to block inquiry into allegations that the Conservative Party spent over the maximum allowable campaign limits during the 2006 election.
Canada - Provincial
The Legislature of the Province of Ontario has witnessed several significant filibusters, although two are notable for the unusual manner by which they were undertaken. The first was an effort on May 6, 1991, by Mike Harris, later premier but then leader of the opposition Progressive Conservatives, to derail the implementation of the budget tabled by the NDP government under premier Bob Rae. The tactic involved the introduction of Bill 95, the title of which contained the names of every lake, river and stream in the province. Between the reading of the title by the proposing MPP, and the subsequent obligatory reading of the title by the clerk of the chamber, this filibuster occupied the entirety of the day's session until adjournment. To prevent this particular tactic from being used again, changes were eventually made to the Standing Orders to limit the time allocated each day to the introduction of bills to 30 minutes.
A second high-profile and uniquely implemented filibuster in the Ontario Legislature occurred in April, 1997, where the New Democratic Party, then in opposition, tried to prevent the governing Progressive Conservatives' Bill 103 from taking effect. These efforts set in motion one of the longest filibustering sessions Canada had ever seen. To protest the Progressive Conservative government's legislation that would amalgamate the municipalities of Metro Toronto into the city of Toronto, the small New Democratic caucus introduced 11,500 amendments to the megacity bill, created on computers with mail merge functionality. Each amendment would name a street in the proposed city, and provide that public hearings be held into the megacity with residents of the street invited to participate. The Ontario Liberal Party also joined the filibuster with a smaller series of amendments; a typical Liberal amendment would give a historical designation to a named street. The NDP then added another series of over 700 amendments, each proposing a different date for the bill to come into force.
The filibuster began on April 2 with the Abbeywood Trail amendment and occupied the legislature day and night, the members alternating in shifts. On April 4, exhausted and often sleepy government members inadvertently let one of the NDP amendments pass, and the handful of residents of Cafon Court in Etobicoke were granted the right to a public consultation on the bill, although the government subsequently nullified this with an amendment of its own. On April 6, with the alphabetical list of streets barely into the Es, Speaker Chris Stockwell ruled that there was no need for the 220 words identical in each amendment to be read aloud each time, only the street name. With a vote still needed on each amendment, Zorra Street was not reached until April 8. The Liberal amendments were then voted down one by one, eventually using a similar abbreviated process, and the filibuster finally ended on April 11.
On 28 October 1897, Dr. Otto Lecher, Delegate for Brünn, spoke continuously for twelve hours before the Abgeordnetenhaus ("House of Delegates") of the Reichsrat ("Imperial Council") of Austria, to block action on the "Ausgleich" with Hungary, which was due for renewal. Mark Twain was present, and described the speech and the political context in his essay "Stirring Times in Austria".
A notable filibuster took place in the Northern Ireland House of Commons in 1936 when Tommy Henderson (Independent Unionist MP for Shankill) spoke for nine and a half hours (ending just before 4 am) on the Appropriation Bill. As this Bill applied government spending to all departments, almost any topic was relevant to the debate, and Henderson used the opportunity to list all of his many criticisms of the Unionist government.
In the Senate of the Philippines, Roseller Lim of the Nacionalista Party staged the longest filibuster in Philippine Senate history. In the election for President of the Senate of the Philippines in April 1963, he stood on the podium for more than 18 hours to wait for party-mate Alejandro Almendras, who was to arrive from the United States. The Nacionalistas, who comprised exactly half of the Senate, wanted to prevent the election of Ferdinand Marcos to the Senate Presidency. Prohibited from even going to the comfort room, he had to relieve himself in his pants until Almendras' arrival. He voted for party-mate Eulogio Rodriguez just as Almendras arrived, and had to be carried out of the session hall on a stretcher due to exhaustion. However, Almendras voted for Marcos, and the latter wrested the Senate Presidency from the Nacionalistas after more than a decade of control.
On December 16, 2010, Werner Kogler of the Austrian Green Party gave his speech before the budget committee, criticizing the failings of the budget and the governing parties (Social Democratic Party and Austrian People's Party) in the last years. The filibuster lasted for 12 hours and 42 minutes (starting at 13:18, and speaking until 2:00 in the morning), thus breaking the previous record held by his party-colleague Madeleine Petrovic (10 hours and 35 minutes on March 11 in 1993), after which the standing orders had been changed, so speaking time was limited to 20 minutes. However, it didn't keep Kogler from giving his speech.
United States
The filibuster is a powerful parliamentary device in the United States Senate, which was strengthened in 1975 and in the past decade has come to mean that most major legislation (apart from budgets) requires a 60% vote to bring a bill or nomination to the floor for a vote. In recent years the majority has preferred to avoid filibusters by moving to other business when a filibuster is threatened and attempts to achieve cloture have failed. Defenders call the filibuster "The Soul of the Senate." Senate rules permit a senator, or a series of senators, to speak for as long as they wish and on any topic they choose, unless "three-fifths of the Senators duly chosen and sworn" (usually 60 out of 100 senators) brings debate to a close by invoking cloture under Senate Rule XXII. According to the Supreme Court ruling in United States v. Ballin (1892), changes to Senate rules could be achieved by a simple majority, but only on the 1st day of the session in January or March. The idea is that on this first day, the rules of the new legislative session are determined afresh, and rules do not automatically continue from one session to the next. This is called the constitutional option by proponents, and the nuclear option by opponents, who insist that rules do remain in force across sessions. Under current Senate rules, a rule change itself could be filibustered, with two-thirds of those senators present and voting (as opposed to the normal three-fifths of those sworn) needing to vote to break the filibuster. Even if a filibuster attempt is unsuccessful, the process takes floor time.
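To make the two thresholds above concrete, here is a rough illustrative calculation (the 100-seat chamber and the attendance figures are assumptions for the example, not data from the article):

```python
import math

SENATORS_SWORN = 100  # assumed full chamber for the example

def ordinary_cloture() -> int:
    # "three-fifths of the Senators duly chosen and sworn"
    return math.ceil(3 * SENATORS_SWORN / 5)

def rules_change_cloture(present_and_voting: int) -> int:
    # "two-thirds of those senators present and voting"
    return math.ceil(2 * present_and_voting / 3)

print(ordinary_cloture())         # 60, regardless of attendance
print(rules_change_cloture(100))  # 67 if all 100 senators vote
print(rules_change_cloture(90))   # 60 if only 90 senators vote
```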
House of Representatives
In the United States House of Representatives, the filibuster (the right to unlimited debate) was used until 1842, when a permanent rule limiting the duration of debate was created. The disappearing quorum was a tactic used by the minority until an 1890 rule eliminated it. As the membership of the House grew much larger than the Senate, the House has acted earlier to control floor debate and the delay and blocking of floor votes.
France
In France, in August 2006, the left-wing opposition submitted 137,449 amendments to the proposed law bringing the share in Gaz de France owned by the French state from 80% to 34% in order to allow for the merger between Gaz de France and Suez. Normal parliamentary procedure would require 10 years to vote on all the amendments.
The French constitution gives the government two options to defeat such a filibuster. The first was originally the use of the article 49 paragraph 3 procedure, under which the law is adopted unless a majority is reached on a non-confidence motion (a July 2008 reform restricted this power to budgetary measures only, plus one other bill each ordinary session, i.e., from October to June; before this reform, article 49.3 was frequently used, especially when the government had too slim a majority in the Assemblée nationale to be sure of passing the text, but still enough to avoid a non-confidence vote). The second is the article 44 paragraph 3 procedure, through which the government can force a single, global vote on all amendments it did not approve or submit itself.
In the end, the government did not have to use either of those procedures. As the parliamentary debate started, the left-wing opposition chose to withdraw all the amendments to allow for the vote to proceed. The "filibuster" was aborted because the opposition to the privatisation of Gaz de France appeared to lack support amongst the general population. It also appeared that this privatisation law could be used by the left-wing in the presidential election of 2007 as a political argument. Indeed, Nicolas Sarkozy, president of the Union pour un Mouvement Populaire (UMP - the right wing party), Interior Minister, former Finance Minister and former President, had previously promised that the share owned by the French government in Gaz de France would never go below 70%.
Hong Kong
The first instance of a filibuster in the Legislative Council (LegCo) after the Handover occurred during the second reading of the Provision of Municipal Services (Reorganization) Bill in 1999, which aimed at dissolving the partially elected Urban Council and Regional Council. As the absence of some pro-Establishment legislators would have meant inadequate support for the passing of the bill, the pro-establishment camp filibustered along with Michael Suen, the then-Secretary for Constitutional Affairs, so that the vote on the bill was delayed to the next day and the absentees could cast their votes. Though the filibuster was criticised by the pro-democracy camp, Lau Kong-wah of the Democratic Alliance for the Betterment and Progress of Hong Kong (DAB) defended their actions, saying "it (a filibuster) is totally acceptable in a parliamentary assembly."
Legislators of the Pro-democracy Camp filibustered during a debate about financing the construction of the Guangzhou-Shenzhen-Hong Kong Express Rail Link by raising many questions on very minor issues, delaying the passing of the bill from 18 December 2009 to 16 January 2010. The Legislative Council Building was surrounded by thousands of anti-high-speed rail protesters during the course of the meetings.
In 2012, Albert Chan and Raymond Wong of People Power submitted a total of 1306 amendments to the Legislative Council (Amendment) Bill, by which the government attempted to forbid lawmakers from participating in by-elections after their resignation. The bill was a response to the so-called 'Five Constituencies Referendum', in which 5 lawmakers from the pro-democracy camp resigned and then joined the by-election, claiming that it would affirm the public's support to push forward the electoral reform. The pro-democracy camp strongly opposed the bill, seeing it as a deprivation of the citizens' political rights. As a result of the filibuster, the LegCo carried on multiple overnight debates on the amendments. On the morning of 17 May 2012, the President of the LegCo (Jasper Tsang) terminated the debate, citing Article 92 of the Rules of Procedure of LegCo: In any matter not provided for in these Rules of Procedure, the practice and procedure to be followed in the Council shall be such as may be decided by the President who may, if he thinks fit, be guided by the practice and procedure of other legislatures. In the end, all motions to amend the bill were defeated and the Bill was passed.
To ban filibustering, Ip Kwok-him of the DAB sought to limit each member to moving only one motion, by amending the procedures of the Finance Committee and its two subcommittees in 2013. All 27 members from the pan-democracy camp submitted 1.9 million amendments. The Secretariat estimated that 408 man-months (each containing 156 working hours) would be needed to vet the facts and accuracy of the motions, and that, if all amendments were admitted by the Chairman, the voting would take 23,868 two-hour meetings.
See also
- Constitution of the Roman Republic
- Gaming the system
- Liberum veto
- Mae West hold
- Mr. Smith Goes to Washington, the 1939 film notably portrays the use of a filibuster.
- "Talking it out" usage example: "MPs renew info exemption effort". BBC. 15 May 2007. Retrieved 25 September 2010.
- Oxford English Dictionary, "freebooter". Retrieved 2012-10-26.
- Oxford English Dictionary, "filibuster". Retrieved 2012-10-26.
- "Walker, William", in Webster's Biographical Dictionary (1960), Springfield: Merriam-Webster
- William Safire, Safire's New Political Dictionary (2008) pp 244
- Goldsworthy, Adrian (2006). Caesar: Life of a Colossus. New Haven: Yale University Press. p. 583.
- "MPs' info exemption bill revived". BBC News. 2007-04-24. Retrieved 2010-12-24.
- Thomson, Ainsley (2011-01-17). "U.K. in Marathon Session on Voting Bill". The Wall Street Journal.
- "Conservative backbenchers halt effort to move clocks forward". January 21, 2012. Retrieved July 12, 2012.
- Jacob Rees-Mogg Proposes Somerset Time Zone. http://www.youtube.com/watch?v=n58wr9FVzO0.
- BBC News - "MP's marathon speech sinks bill." Retrieved February 14, 2007.
- Parliament of Australia - Standing Orders and other orders of the Senate. Retrieved June 23, 2008.[dead link]
- Parliament of Australia - House of Representatives Standing and Sessional Orders. Retrieved June 23, 2008.[dead link]
- ""Melissa Lee Memorial Council" mooted". Newstalk ZB. Retrieved 2010-12-24.
- "Labour filibuster on Supercity bills". Stuff.co.nz. Retrieved 2010-12-24.
- "Employment bill to drag on another day". Nzherald.co.nz. 2000-08-15. Retrieved 2010-12-24.
- "Canada Post back-to-work bill passes key vote". CBC. June 25, 2011.
- "John Ivison: Time stands still in the House of Commons as NDP filibuster drags on". National Post. June 24, 2011.
- "Marathon Canada Post debate continues on Hill". Vancouver Sun. June 24, 2011.
- Alexander Panetta (2008-04-03). "Tory's loose lips an asset - until now". Toronto: The Canadian Press. Retrieved 2010-02-13.
- Catherine Clark, Tom Lukiwski (July 27, 2009). "Beyond Politics interview (at 19:11)". CPAC.
- "Parties trade blame for House logjam". Toronto: The Canadian Press. 2006-10-26. Retrieved 2010-02-13.
- "Standing Committee on Environment and Sustainable Development". Parliament of Canada. October 26, 2006. Retrieved 2010-02-13.
- Mike De Souza. "Tories accused of stalling their own green agenda". www.canada.com. Retrieved 2010-02-13.
- "Angry chairman suspends session". www.canada.com. Retrieved 2010-02-13.
- "Tories accused of stalling ad scheme review". www.canada.com. Retrieved 2010-02-13.
- Kady O'Malley. "Filibuster ahoy! Liveblogging the Procedure and House Affairs Committee for as long as it takes...". www.macleans.ca. Retrieved 2010-02-13.[dead link]
- Kady O'Malley. "Liveblogging PROC: We’ll stop blogging when he stops talking – the return of the killer filibuster (From the archives)". www.macleans.ca. Retrieved 2010-02-13.
- Kady O'Malley. "Liveblogging the Procedure and House Affairs Committee for as long as it takes... (Part 3)". www.macleans.ca. Retrieved 2010-02-13.[dead link]
- "Obstruction in the Ontario Legislature: The struggle for power between the government and the opposition". Retrieved 2012-08-07.
- "On Filibusters". Retrieved 2012-08-07.
- "Legislative Assembly of Ontario. Hansard. Monday, 6 May 1991". Retrieved 2012-08-07.
- "Legislative Assembly of Ontario. Hansard. Wednesday, 2 April 1997, volume B" (in (French)). Ontla.on.ca. Retrieved 2010-12-24.
- "Legislative Assembly of Ontario. Hansard. Friday, 4 April 1997, volume H". Ontla.on.ca. Retrieved 2010-12-24.
- "Legislative Assembly of Ontario. Hansard. Sunday, 6 April 1997, volume N". Ontla.on.ca. Retrieved 2010-12-24.
- "Legislative Assembly of Ontario. Hansard. Tuesday, 8 April 1997, volume S". Ontla.on.ca. Retrieved 2010-12-24.
- "Legislative Assembly of Ontario. Hansard. Friday, 11 April 1997, volume AE". Ontla.on.ca. Retrieved 2010-12-24.
- Mark Twain, "Stirring Times in Austria"
- "Werner Kogler blocks budget with record filibuster", Presse, 16. Dezember 2010
- "Stenographical Protokol of the 107th conference of the XVIII. legislature period (March 10th to 12th; 1993)" (PDF) (in German). Retrieved 2010-12-24.
- Parlamentskorrespondenz/09/12.03.2007/Nr. 156, Die lange Nacht im Hohen Haus
- Jonathan Backer. Brennan Center for Justice: A Short History on the Constitutional Option.
- Gregory John Wawro; Eric Schickler (2006). Filibuster: Obstruction And Lawmaking in the U.S. Senate. Princeton U.P. pp. 1–12.
- Richard A. Arenberg; Robert B. Dove (2012). Defending the Filibuster: The Soul of the Senate. Indiana U.P.
- "Precedence of motions (Rule XXII)". Rules of the Senate. United States Senate. Retrieved January 21, 2010.
- Beth, Richard; Stanley Bach (March 28, 2003). Filibusters and Cloture in the Senate. Congressional Research Service. pp. 4, 9.
- "TIMELINE: Key dates in Gaz de France-Suez merger". Reuters. 2 September 2007. Retrieved 2010-02-24.
- Kanter, James (19 September 2006). "Plan for Gaz de France advances toward a vote". International Herald Tribune. Retrieved 2010-02-24.
- "France - Constitution". International Constitutional Law. Retrieved 2010-02-24.
- Official Record of Proceedings, Legislative Council, 1 December 1999, page 1875.
- Hong Kong Opposition to Rail Holds Off Vote, Wall Street Journal
- Paper for the Finance Committee Meeting on 22 February 2013: Members' motions that seek to amend the procedures of the Finance Committee and its two subcommittees, Legislative Council of Hong Kong
- BBC, "Filibustering," at BBC News, 16 July 2005.
- BBC, "MP's marathon speech sinks bill" at BBC News, 2 Dec. 2005.
Further reading
- Beth, Richard; Stanley Bach (2003-03-28). Filibusters and Cloture in the Senate. Congressional Research Service.
- Sarah A. Binder and Steven S. Smith, Politics or Principle: Filibustering in the United States Senate. Washington, D.C.: Brookings Institution Press, 1996. ISBN 0-8157-0952-8
- Eleanor Clift, "Filibuster: Not Like It Used to Be," Newsweek, 24 Nov. 2003.
- Bill Dauster, "It’s Not Mr. Smith Goes to Washington: The Senate Filibuster Ain’t What it Used To Be", The Washington Monthly, Nov. 1996, at 34-36.
- Alan S. Frumin, "Cloture Procedure," in Riddick's Senate Procedure, 282–334. Washington, D.C.: Government Printing Office, 1992.
- Gregory Koger (2010). Filibustering: A Political History of Obstruction in the House and Senate. Chicago: University of Chicago Press. ISBN 978-0-226-44964-7. OCLC 455871593.
- Lazare, Daniel (1996). The Frozen Republic: How the Constitution Is Paralyzing Democracy. Harcourt. ISBN 978-0-15-100085-2. OCLC 32626734.
- Jessica Reaves, "The Filibuster Formula," Time, 25 Feb. 2003.
- U.S. Senate, "Filibuster and Cloture."
- U.S. Senate, "Filibuster Derails Supreme Court Appointment."
|Look up filibuster in Wiktionary, the free dictionary.|
- Archive of the amendment debates, 2 April 1997 (Canada, Toronto) in the Provincial Hansard. The filibuster extends from section L176B of the archive to L176AE; the Cafon Court slip-up is in section L176H, Stockwell rules on the issue of repetition in L176N, and Zorra Street is reached in L176S.
- Congressional Quarterly 101 Filibuster | http://en.wikipedia.org/wiki/Filibuster_(legislative_tactic) | 13 |
Food provides an
organism the materials it needs for energy,
growth, repair, and reproduction. These materials are called
nutrients. They can be divided into two main groups: macronutrients
and micronutrients. Macronutrients, or nutrients required
in large amounts, include carbohydrates, fats, and proteins.
These are the foods used for energy, reproduction, repair
and growth. Micronutrients, or nutrients required
in only small amounts, include vitamins and minerals. Most
foods contain a combination of the two groups. A balanced diet must contain all the
essential nutritional elements such as proteins, carbohydrates,
fats, vitamins, minerals, and water. If a diet is deficient
in any of these nutrients, health becomes impaired and diseases
may be the result.
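To make the macronutrient/micronutrient split above concrete, here is a minimal sketch in Python; the grouping and the classify_nutrient helper are illustrative only and not part of any established nutrition library.

```python
# Illustrative grouping of the nutrient classes described above.
MACRONUTRIENTS = {"carbohydrate", "fat", "protein"}
MICRONUTRIENTS = {"vitamin", "mineral"}

def classify_nutrient(nutrient_class: str) -> str:
    """Return 'macronutrient', 'micronutrient', or 'other' for a nutrient class."""
    name = nutrient_class.strip().lower()
    if name in MACRONUTRIENTS:
        return "macronutrient"
    if name in MICRONUTRIENTS:
        return "micronutrient"
    return "other"  # e.g. water, which is essential but not a nutrient per se

if __name__ == "__main__":
    for n in ("protein", "vitamin", "water"):
        print(n, "->", classify_nutrient(n))
```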
Although not a nutrient per se, water is essential to the body for
the maintenance of intracellular and extracellular fluids.
It is the medium in which digestion and absorption take
place, nutrients are transported to cells and metabolic
waste products are removed. The quality of water provided
to reptiles should be of utmost concern. Water and “soft
foods” (foods containing >20% moisture) are frequently implicated
in exposures to high concentrations of bacteria. An open
water container that becomes contaminated with fecal material
or food will promote rapid bacterial proliferation. In water
containing added vitamins, there can be a 100-fold increase
in the bacterial count in 24 hours. Changing the water and
rinsing the container will obviously decrease the bacterial
load, but an active biofilm remains on the container walls
unless it is disinfected or washed thoroughly. Contamination
in the water container, in addition to the aqueous medium
and compatible environmental temperatures, provide all the
requirements for microorganisms to thrive. Likewise, high-moisture
foods such as canned foods, paste foods, sprouts, fruits
and vegetables provide excellent growth media for microorganisms.
At warm environmental temperatures, these types of foods
can become contaminated in as little as four hours. Water
intake will be greatly influenced by the type of diet provided.
Most reptiles can derive the majority of their water requirement
from foodstuffs when the diet consists primarily of fruits,
vegetables or moist foods. Processed diets tend to increase
the reptile’s water intake because they generally are dry,
lower in fat and tend to have overall higher nutrient levels.
Slightly moister feces are often observed in reptiles on
a formulated diet.
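The 100-fold-in-24-hours figure quoted above implies roughly exponential growth while temperature and nutrients allow it. A rough back-of-the-envelope sketch in Python; the starting count and the assumption of steady exponential growth are illustrative, not measured values.

```python
import math

def projected_count(initial_count: float, hours: float,
                    fold_increase: float = 100.0, per_hours: float = 24.0) -> float:
    """Project a bacterial count assuming steady exponential growth.

    fold_increase / per_hours encode the '100-fold in 24 hours' figure
    quoted above for vitamin-supplemented water; real growth depends on
    temperature, nutrients and the starting population.
    """
    rate = math.log(fold_increase) / per_hours  # continuous growth rate per hour
    return initial_count * math.exp(rate * hours)

if __name__ == "__main__":
    start = 1_000  # hypothetical CFU/ml in freshly poured water
    for h in (4, 12, 24):
        print(f"{h:>2} h: ~{projected_count(start, h):,.0f} CFU/ml")
```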
Protein is the group of highly complex
organic compounds found in all living cells. Protein is
the most abundant class of all biological molecules, comprising
about 50% of cellular dry weight. Proteins are large molecules
composed of one or more chains of varying amounts of the
same 22 amino acids, which are linked by peptide bonds.
Protein chains may contain hundreds of amino acids; some
proteins also incorporate phosphorus or such metals as iron,
zinc, and copper. There are two general classes of proteins.
Most are Globular (functional) like enzymes (see 2.3.2),
carrier proteins, storage proteins, antibodies and certain
hormones. The other proteins, the structural proteins,
help organize the structure of tissues and organs, and give
them strength and flexibility. Some of these structural
proteins are long and fibrous. Dietary protein is food that contains
the amino acids necessary to construct the proteins described
above. Complete proteins, generally of animal origin, contain
all the amino acids necessary for reptile growth and maintenance;
incomplete proteins, from plant sources such as grains,
legumes, and nuts, lack sufficient amounts of one or more
essential amino acid.
Fibrous or structural proteins
The protein collagen is the most
important building block in the entire animal world.
More than a third of the reptile's protein is collagen
and makes up 75% of the skin. It also helps control cell shape.
Vitamin C is required for the production
and maintenance of collagen, a protein substance that
forms the base of connective tissues in the body such
as bones, teeth, skin, and tendons. Collagen is the
protein that helps heal wounds, mend fractures, and
support capillaries in order to prevent bruises.
Collagen is the fibrous protein constituent of skin,
cartilage, bone, and other connective tissue.
Keratin produces strong and elastic
tissue, the protein responsible for forming scales and
claws in reptiles. These scales protect the reptile
body from the effects of both water and sun, and prevent
them from drying out.
Elastin is the structural protein
that gives elasticity to the tissues and organs. Elastin
found predominantly in the walls of arteries, in the
lungs, intestines, and skin, as well as in other elastic
tissues. It functions in connective tissue in partnership
with collagen. Whereas collagen provides rigidity, elastin
is the protein, which allows the connective tissues
in blood vessels and heart tissues, for example, to
stretch and then recoil to their original positions.
Hemoglobin is the respiratory pigment
found in red blood cells. It is produced in the bone marrow
and carries oxygen from the lungs to the body. An inadequate
amount of circulating hemoglobin results in a condition known as anaemia.
Myoglobin is closely related to hemoglobin.
It transports oxygen to the muscle tissues. It is a
small, bright red protein whose function is to store
an oxygen molecule (O2), which is released
during periods of oxygen deprivation, when the muscles
are hard at work.
Albumins are widely distributed in
plant and animal tissues, e.g., ovalbumin of egg, lactalbumin
of milk, and leucosin of wheat. Some contain carbohydrates.
Normally constituting about 55% of the plasma proteins,
albumins adhere chemically to various substances in
the blood, e.g., amino acids, and thus play a role in
their transport. Albumins and other blood proteins aid
in regulating the distribution of water in the body.
Actin & myosin
Actin is found in the muscle tissue
together with myosin where they interact to make muscle
contraction possible. Actin and myosin are also found
in all cells to bring about cell movement.
Ferritin is found predominantly in
the tissue of the liver, used for the storage of iron.
Transferrin binds with iron and transports it
for storage in the liver, or to bone marrow, where it
is used for the formation of hemoglobin.
Ferredoxin acts as a transport protein
for electrons involved in photosynthesis.
Carbohydrates supply a reptile with the energy it needs to function. They
are found almost exclusively in plant foods, such as fruits,
vegetables, peas, and beans. Milk and milk products are
the only foods derived from animals that contain a significant
amount of carbohydrates. Carbohydrates
are divided into two groups: simple carbohydrates and complex
carbohydrates. Carbohydrates are the main source of blood glucose, which is a major fuel
for all of the cells and the only source of energy for the
brain and red blood cells. Except for fibre, which cannot
be digested, both simple and complex carbohydrates are converted
into glucose. The glucose is then either used directly to
provide energy, or stored in the liver for future use. Carbohydrates
take less oxygen to metabolize than protein or fat.
Simple carbohydrates (monosaccharide)
These carbohydrates include fructose, sucrose, and glucose,
as well as several other sugars. Fruits are one of the
richest natural sources of simple carbohydrates.
Simple carbohydrates are the main source of blood glucose,
which is a major energy source and the only one for the
brain and red blood cells. When a reptile consumes more
simple carbohydrates than it uses, a portion may be
stored as fat.
Complex carbohydrates are also made up of sugars, but the sugar
molecules are strung together to form longer, more complex
chains. Complex carbohydrates include fibre, cellulose,
starch, glycogen, etc.
Glycogen occurs in animal tissues, especially in the liver and
muscle cells. It is the major store of carbohydrate
energy in animal cells.
The liver removes glucose from the blood when blood
glucose levels are high. Through a process called glycogenesis,
the liver combines the glucose molecules in long chains
to create glycogen. When the amount of glucose in the
blood falls below the level required, the liver reverses
this reaction, transforming glycogen into glucose.
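The glycogenesis/glycogenolysis cycle described above is essentially a feedback loop: store glucose as glycogen when blood glucose is high, release it when blood glucose is low. A toy sketch of that logic in Python; all thresholds, units and amounts are invented illustration values, not physiological data.

```python
def regulate_blood_glucose(blood_glucose: float, liver_glycogen: float,
                           low: float = 80.0, high: float = 120.0,
                           step: float = 10.0) -> tuple[float, float]:
    """One simplified regulation step.

    If glucose is above `high`, the liver stores some as glycogen
    (glycogenesis); if it is below `low`, glycogen is converted back
    to glucose (glycogenolysis). Units and thresholds are arbitrary.
    """
    if blood_glucose > high:
        moved = min(step, blood_glucose - high)
        return blood_glucose - moved, liver_glycogen + moved
    if blood_glucose < low and liver_glycogen > 0:
        moved = min(step, low - blood_glucose, liver_glycogen)
        return blood_glucose + moved, liver_glycogen - moved
    return blood_glucose, liver_glycogen

if __name__ == "__main__":
    glucose, glycogen = 140.0, 20.0
    for _ in range(5):
        glucose, glycogen = regulate_blood_glucose(glucose, glycogen)
        print(f"glucose={glucose:.0f}, glycogen={glycogen:.0f}")
```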
Starch consists of two glucose polymers, amylose and amylopectin.
It occurs widely in plants, especially in roots, seeds,
and fruits, as a carbohydrate energy store. Starch is
therefore a major energy source; when digested it ultimately yields glucose.
Fibre is the part of plants and insects that is resistant to
digestive enzymes. As a result, only a relatively small
amount of fibre is digested or metabolized in the stomach
or intestines. Instead, most of it moves through the gastrointestinal
tract and ends up in the faeces.
Although most fibre is not digested, it delivers several
important benefits. Fibre retains water, resulting in softer
and bulkier faeces that prevent constipation, and fibre
binds with certain substances and eliminates them from
the body. Dietary fibre falls into four groups: celluloses,
hemicelluloses, lignins, and pectins.
Cellulose is the major substance of the cell wall of all plants,
algae and some fungi. With some exceptions among insects
(see chitin), true cellulose is not found in animal tissue.
Chitin, a form of cellulose, naturally occurs in the exoskeleton
of insects. It speeds the transit of foods through the
digestive system and promotes the growth of beneficial
bacteria in the intestines. Chitin can thereby improve
digestion, cleanse the colon, and prevent diarrhea and
constipation. Chitin is known to differ from other polysaccharides
in that it has a strong positive charge that lets it
chemically bond with fats. Because it is mostly indigestible,
it can then prevent lipids from being absorbed in the
digestive tract. Chitosan is a derivative of chitin that is
more soluble in water.
Fat is an essential nutrient and plays
several vital roles. Fat insulates internal organs and nerves,
carries fat-soluble vitamins throughout the body, helps
repair damaged tissue and fight infections, and provides
a source of energy. Fats are the way a reptile stores up
energy. Fats are stored in adipose tissues. These tissues
are situated under the skin, around the kidneys and mainly
in the tail (squamata and crocodilia). Amphibians store
fat in an organ attached to the kidneys, the fat body. Some
reptiles and amphibians depend on their fat stores during
hibernation. During growth, fat is necessary for normal
brain development. Throughout life, it is essential to provide
energy. Fat is, in fact, the most concentrated source of
energy available. However, adult animals require only small
amounts of fat, much less than is provided by the average
diet. Fats are composed of building blocks
called fatty acids. There are three major categories of fatty
acids: saturated, unsaturated, and polyunsaturated.
Amino acids are the basic chemical building
blocks of life, needed to build all the vital proteins,
hormones and enzymes required by all living organisms, from
the smallest bacterium to the largest mammal. Proteins are
needed to perform a host of vital functions, and can only
exist when an organism has access to amino acids that can
be combined into long molecular chains. An organism is continuously at work,
breaking dietary proteins down into individual amino acids,
and then reassembling these amino acids into new structures.
Amino acids are linked together to form proteins and enzymes.
Reptiles use these proteins to construct muscles, bones, organs, glands, connective
tissues, nails, scales and skin. Amino
acids are also necessary to manufacture protein structures
required for genes, enzymes, hormones, neurotransmitters
and body fluids. In the central nervous system, amino acids
act as neurotransmitters and as precursors to neurotransmitters
used in the brain to receive and send messages. Amino acids
are also required to allow vitamins and minerals to be utilized
properly. As long as a reptile has a reliable source
of dietary proteins containing the essential amino acids
it can adequately meet most of its needs for new protein
synthesis. Conversely, if a reptile is cut off from dietary
sources of the essential amino acids, protein synthesis
is affected and serious health problems can arise.
Depending upon the structure, there are
approximately twenty-nine commonly known amino acids that
account for the thousands of different types of proteins
present in all life forms. Many of the amino acids required
to maintain health can be produced in the liver from proteins
found in the diet. These nonessential amino acids are alanine,
aspartic acid, asparagine, glutamic acid, glutamine, glycine,
proline, and serine. The remaining amino acids, called the
essential amino acids, must be obtained from outside sources.
These essential amino acids are arginine, histidine, isoleucine,
leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine.
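Since protein synthesis stalls if any essential amino acid is missing, a diet check reduces to a set comparison. A minimal Python sketch; the essential list mirrors the one above, and the example food profile is hypothetical.

```python
# Essential amino acids for reptiles, as listed above.
ESSENTIAL = {
    "arginine", "histidine", "isoleucine", "leucine", "lysine",
    "methionine", "phenylalanine", "threonine", "tryptophan", "valine",
}

def missing_essential(diet_amino_acids: set[str]) -> set[str]:
    """Return the essential amino acids absent from a diet's amino acid profile."""
    return ESSENTIAL - {a.lower() for a in diet_amino_acids}

if __name__ == "__main__":
    # Hypothetical profile of a plant-only (incomplete protein) diet.
    plant_profile = {"arginine", "leucine", "valine", "phenylalanine", "threonine"}
    print("Missing:", sorted(missing_essential(plant_profile)))
```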
Essential Amino Acids
Arginine is an amino acid which becomes an essential amino acid
when the reptile is under stress or in an injured state. Depressed
growth results from lack of dietary arginine.
Arginine increases collagen; the protein providing the
main support for bone, cartilage, tendons, connective
tissue, and skin. It increases wound breaking strength
and improves the rate of wound healing. The demand for
arginine in animals occurs in response to physical trauma
like; injury, burns, dorsal skin wounds, fractures,
physical pain registered by the skin, malnutrition and
muscle and bone growth spurts.
Histidine is intricately involved in a large number of critical
metabolic processes, ranging from the production of
red and white blood cells to regulating antibody activity.
Histidine also helps to maintain the myelin sheaths
which surround and insulate nerves. In particular, Histidine
has been found beneficial for the auditory nerves, and
a deficiency of this vital amino acid has been noted
in cases of nerve deafness.
Histidine also acts as an inhibitory neurotransmitter,
supporting resistance to the effects of stress.
Histidine is naturally found in most animal and vegetable proteins.
Isoleucine is an essential amino acid found abundantly in most
foods. Isoleucine is concentrated in the muscle tissues
and is necessary for hemoglobin formation, and in stabilizing
and regulating blood sugar and energy levels. A deficiency
of isoleucine can produce symptoms similar to those
of hypoglycemia. Isoleucine is a branched chain amino
acid (BCAA); the others are Leucine and
Valine. They play an important role in treating injuries
and physical stress conditions.
Leucine is an essential amino acid which cannot be synthesized
but must always be acquired from dietary sources. Leucine
stimulates protein synthesis in muscles, and is essential
for growth. Leucine also promotes the healing of bones,
skin and muscle tissue.
Leucine, and the other branched-chain amino acids, Isoleucine
and Valine, are frequently deficient and increased requirements
can occur after stress.
Lysine is one of the essential amino acids that cannot be manufactured
by reptiles, but must be acquired from food sources
or supplements. It has an immune-enhancing effect: high doses
of Lysine stop viral growth and reproduction through
the production of antibodies, hormones and enzymes.
In juveniles lysine is needed for proper growth and
bone development. It aids calcium absorption and maintains
nitrogen balance in adults. It is also instrumental
in the formation of collagen, which is the basic matrix
of the connective tissues, skin, cartilage and bone.
Lysine aids in the repair of tissue, and helps to build
muscle protein, all of which are important for recovery
from injuries. Lysine deficiencies can result in lowered
immune function, loss of energy, bloodshot eyes, shedding
problems, retarded growth, and reproductive disorders,
and increases urinary excretion of calcium. Lysine has
no known toxicity.
Methionine is an essential amino acid that is not synthesized and
must be obtained from food or supplements. It is one
of the sulphur containing amino acids and is important
in many functions. Through its supply of sulphur, it
improves the tone and pliability of the skin, conditions
the scales and strengthens claws. The mineral sulphur
is involved with the production of protein. Methionine
is essential for the absorption and transportation of
selenium and zinc in the body. It also acts as a lipotropic
agent to prevent excess fat buildup in the liver, and
is an excellent chelator of heavy metals, such as lead,
cadmium and mercury, binding them and aiding in their excretion.
Phenylalanine is one of the amino acids which reptiles cannot manufacture
themselves, but must acquire them from food or supplements.
Phenylalanine is a precursor of tyrosine, and together
they lead to the formation of thyroxine or thyroid hormone,
and of epinephrine and norepinephrine which is converted
into a neurotransmitter, a brain chemical which transmits
nerve impulses. This neurotransmitter is used by the
brain to manufacture norepinephrine which promotes mental
alertness, memory, and behavior.
Threonine, an essential amino acid, is not manufactured by reptiles
and must be acquired from food or supplements. It is
an important constituent in many proteins and is necessary
for the formation of tooth enamel protein, collagen
and elastin. It is a precursor to the amino acids glycine
and serine. It acts as a lipotropic agent in controlling fat
build-up in the liver.
Nutrients are more readily absorbed when threonine is
present. Threonine is an immune stimulant and deficiency
has been associated with weakened cellular response
and antibody formation.
Tryptophan, an essential amino acid, is one of the amino acids which
reptiles cannot manufacture themselves, but must acquire
from food or supplements. It is the least abundant in
proteins and also easily destroyed by the liver. Tryptophan
is necessary for the production of the B-vitamin niacin,
which is essential for the brain to manufacture the
key neurotransmitters. It helps control hyperactivity,
relieves stress, and enhances the release of growth
hormones. Tryptophan has been used to control aggressive behavior
in some reptiles.
Valine is one of the amino acids which reptiles cannot
manufacture themselves but must acquire from food sources.
Valine is found in abundant quantities in most food.
Valine has a stimulant effect. Healthy growth depends
on it. A deficiency results in a negative hydrogen balance.
Valine can be metabolized to produce energy, which spares glucose.
Non-Essential Amino Acids
Alanine is a nonessential amino acid that can be manufactured by reptiles from
other sources as needed. Alanine is one of the simplest
of the amino acids and is involved in the energy-producing
breakdown of glucose. In conditions of sudden anaerobic
energy need, when muscle proteins are broken down for
energy, alanine acts as a carrier molecule to take the
nitrogen-containing amino group to the liver to be changed
to the less toxic urea, thus preventing buildup of toxic
products in the muscle cells when extra energy is needed.
No deficiency state is known.
Asparagine is a nonessential amino acid, structurally similar
to aspartic acid, with an additional amino group on
the main carbon skeleton. Asparagine aids in the metabolic
functioning of brain and nervous system cells, and may
be a mild immune stimulant as well.
Aspartic acid is a nonessential amino acid that the body can
make from other sources in sufficient amounts to meet
its needs. It is a critical part of the enzyme in the
liver that transfers nitrogen-containing amino groups,
either in building new proteins and amino acids, or
in breaking down proteins and amino acids for energy
and detoxifying the nitrogen in the form of urea.
Its ability to increase endurance is thought to be a
result of its role in clearing ammonia from the system.
Aspartic acid is one of two major excitatory amino acids
within the brain (The other is glutamic acid).
Depleted levels of aspartic acid may occur temporarily
within certain tissues under stress, but, because the
body is able to make its own aspartic acid to replace
any depletion, deficiency states do not occur. Aspartic
acid is abundant in plants, especially in sprouting
seeds. In protein, it exists mainly in the form of its
amide, asparagine. Aspartic acid is considered nontoxic.
Carnitine is a dipeptide, an amino acid made from two other amino acids,
methionine and lysine. It can be synthesized in the
liver if sufficient amounts of lysine, B1, B6 and iron
are available. Carnitine has been shown to have a major
role in the metabolism of fat and by increasing fat
utilization. It transfers fatty acids across the membranes
of the mitochondria where they can be utilized as sources
of energy. It also increases the rate at which the liver metabolizes fat.
Carnitine is stored primarily in the skeletal muscles
and heart, where it is needed to transform fatty acids
into energy for muscular activity.
Cysteine is a sulphur-containing amino acid synthesized
by the liver. It is an important precursor to Glutathione,
one of the body's most effective antioxidants and free
radical destroyers. Free radicals are toxic waste products
of faulty metabolism, radiation and environmental pollutants
which oxidize and damage body cells. Glutathione also
protects red blood cells from oxidative damage and aids
in amino acid transport. It works most effectively when
taken in conjunction with vitamin E and selenium.
Through this antioxidant enzyme process, cysteine may
contribute to a longer life. It has immune enhancing
properties, promotes fat burning and muscle growth and
also tissue healing after injury or burns. 8% of the
scales consists of cysteine.
Cystine is a stable form of the amino acid cysteine. A reptile
is capable of converting one to the other as required
and in metabolic terms they can be thought of as the
same. Both cystine and cysteine are rich in sulphur
and can be readily synthesized. Cystine is found abundantly
in scale keratin, insulin and certain digestive enzymes.
Cystine or cysteine is needed for proper utilization
of vitamin B6. By reducing the body's absorption of
copper, cystine protects against copper toxicity, which
has been linked to behavioral problems. It is also found
helpful in the healing of wounds, and is used to break
down mucus deposits in illnesses such as bronchitis
and cystic fibrosis.
Gamma aminobutyric acid
Gamma-aminobutyric acid (GABA) is an important amino acid which functions
as the most prevalent inhibitory neurotransmitter in
the central nervous system. It works in partnership
with a derivative of Vitamin B-6, and helps control
the nerve cells from firing too fast, which would overload the nervous system.
Glutamic acid is biosynthesized from a number of amino acids
including ornithine and arginine. When aminated, glutamic
acid forms the important amino acid glutamine. It can
be reconverted back into glutamine when combined with
ammonia, which can create confusion over which amino acid does what.
Glutamic acid (sometimes called glutamate) is a major
excitatory neurotransmitter in the brain and spinal
cord, and is the precursor to glutathione and Gamma-Aminobutyric
Acid (GABA). Glutamic acid is also a component of folic
acid. After glutamic acid is formed in the brain from
glutamine, it then has two key roles. The body uses
glutamic acid to fuel the brain and to inhibit neural
excitation in the central nervous system. Besides glucose,
it is the only compound used for fuel by the brain.
The second function is detoxifying ammonia in the brain
and removing it. It then reconverts to its original
form of glutamine.
Glutamine is an amino acid widely used to maintain good brain
functioning. Glutamine is a derivative of glutamic acid
which is synthesized from the amino acids arginine,
ornithine and proline. Glutamine improves mental alertness
and mood. It is found abundantly in animal proteins
and needed in high concentrations in serum and spinal
fluid. When glutamic acid combines with ammonia, a waste
product of metabolic activity, it is converted into glutamine.
Glycine is an amino acid that is a major part of the pool of
amino acids which aid in the synthesis of non essential
amino acids in the body. Glycine can be easily formed
in the liver or kidneys from Choline and the amino acids
Threonine and Serine. Likewise, Glycine can be readily
converted back into Serine as needed. Glycine is also
one of the few amino acids that can spare glucose for
energy by improving glycogen storage.
Glycine is required by the body for the maintenance
of the central nervous system, and also plays an important
function in the immune system where it is used in the
synthesis of other non-essential amino acids. Glycine
can reduce gastric acidity, and in higher doses, can
stimulate growth hormone release and contribute to wound
healing. Glycine comprises up to a third of the collagen
and is required for the synthesis of hemoglobin, the
oxygen-carrying molecule in the blood.
Ornithine is made from the amino acid arginine and in turn is
a precursor to form glutamic acid, citruline, and proline.
Ornithine's value lies in its ability to enhance liver
function, protect the liver and detoxify harmful substances.
It also helps release a growth hormone when combined
with arginine; this growth hormone is also an immune stimulant.
Arginine and ornithine have improved immune responses
to bacteria and viruses. Ornithine has been shown to
aid in wound healing and support liver regeneration.
Proline is synthesized from the amino acids glutamine or ornithine.
It is one of the main components of collagen, the connective
tissue structure that binds and supports all other tissues.
Proline improves skin texture but collagen is neither
properly formed nor maintained if Vitamin C is lacking.
Pyroglutamate is an amino acid naturally found in vegetables, fruits,
and insects. It is also normally present in large amounts
in the bone marrow and blood. Pyroglutamate improves
memory and learning in rats, but it is not known whether it
has any effect on reptiles.
Serine is synthesized from the amino acids glycine or threonine.
Its production requires adequate amounts of niacin (B-3),
B-6, and folic acid. It is needed for the metabolism
of fats and fatty acids, muscle growth and a healthy
immune system. It aids in the production of immunoglobulins
and antibodies. It is a constituent of brain proteins
and nerve coverings. It is important in the formation
of cell membranes, involved in the metabolism of purines
and pyrimidines, and muscle synthesis.
Taurine is one of the most abundant amino acids. It is found
in the central nervous system, skeletal muscle and is
very concentrated in the brain and heart. It is synthesized
from the amino acids methionine and cysteine, in conjunction
with vitamin B6. Animal protein is a good source of
taurine, as it is not found in vegetable protein. Like
magnesium, taurine affects cell membrane electrical
excitability by normalizing potassium flow in and out
of heart muscle cells. It has been found to have an
effect on blood sugar levels similar to insulin. Taurine
helps to stabilize cell membranes and seems to have
some antioxidant and detoxifying activity. It helps
the movement of potassium, sodium, calcium and magnesium
in and out of cells, which helps generate nerve impulses. Taurine
is necessary for the chemical reactions that produce
Tyrosine is an amino acid which is synthesized from phenylalanine.
It is a precursor of the important brain neurotransmitters
epinephrine, norepinephrine and dopamine. Dopamine is
vital to mental function and seems to play a role in
Tyrosine is also used by the thyroid gland to produce
one of the major hormones, Thyroxin. This hormone regulates
growth rate, metabolic rate, skin health and mental
health. It is used in the treatment of anxiety. Animals
subjected to stress in the laboratory have been found
to have reduced levels of the brain neurotransmitter
norepinephrine. Doses of tyrosine prior to stressing
the animals prevents reduction of norepinephrine.
Minerals are naturally occurring elements found in the earth and
work in reptiles as coenzymes to allow the reptile to perform
vital functions. Minerals compose body fluids, blood and
bone, and support central nervous system function.
The dependence on specific minerals is based upon millions
of years of evolutionary development that can be traced
back to the earliest living organisms. Over time mineral
salts have been released into the environment by the breakdown
and weathering of rock formations rich in elemental deposits.
Accumulating in the soil and oceans, minerals are passed
from microorganisms to plants and on to herbivorous creatures.
Reptiles then obtain minerals primarily from the plants,
insects and animals that make up their diet.
Minerals can be broken down into two basic groups: bulk,
or macro minerals, and trace, or micro minerals. The macro
minerals, such as calcium, magnesium, sodium (salt), potassium
and phosphorus are needed in fairly substantial amounts
for proper health. By comparison, the trace minerals are
needed in far smaller quantities and include substances
such as zinc, iron, copper, manganese, chromium, selenium, and others.
After ingestion, dietary minerals enter the stomach where
they are attached to proteins in order to enhance absorption
into the blood stream. After minerals are absorbed they
are delivered by the blood stream to individual cells for
transport across cell membranes. Minerals must often compete
with other minerals for absorption, and in certain cases
must be in a proper balance with other minerals to be properly
utilized. For example, an excess of zinc can cause a depletion
of copper, and too much calcium can interfere with the absorption
of magnesium and phosphorus.
Minerals are generally considered safe, though high dosages
for long periods can lead to toxic effects.
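The absorption interactions mentioned above (excess zinc depleting copper, excess calcium hindering magnesium and phosphorus) can be captured as a simple antagonist map, for example to flag supplement combinations. A Python sketch containing only the pairs named in the text; it is illustrative, not an exhaustive interaction table.

```python
# Antagonistic pairs mentioned above: an excess of the key can impair the values.
ANTAGONISTS = {
    "zinc": {"copper"},
    "calcium": {"magnesium", "phosphorus"},
}

def absorption_conflicts(supplemented: set[str]) -> list[tuple[str, str]]:
    """List (excess_mineral, impaired_mineral) pairs present in one supplement mix."""
    minerals = {m.lower() for m in supplemented}
    return [(m, hit) for m in minerals
            for hit in ANTAGONISTS.get(m, set()) if hit in minerals]

if __name__ == "__main__":
    print(absorption_conflicts({"calcium", "magnesium", "vitamin D3"}))
```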
Calcium is the most abundant mineral in a reptile and one of
the most important. This mineral constitutes about 1.5-2.0
percent of their body weight. Almost all (98 percent)
calcium is contained in the bones and the rest in the
other tissues or in circulation.
Many other nutrients, vitamin D-3, and certain hormones
are important to calcium absorption, function, and metabolism.
Phosphorus as well as calcium is needed for normal bones,
as are magnesium, silicon, strontium and possibly boron.
The ratio of calcium to phosphorus in bones is about
2.5:1; the best proportions of these minerals in the
diet for proper metabolism are currently under question.
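Because the bone ratio is roughly 2.5:1 and dietary recommendations are commonly discussed as a Ca:P ratio, a quick helper for checking a feeder item or whole diet can be useful. A sketch; the 1.5-2:1 target below is an assumed husbandry rule of thumb rather than a figure from this text, and the example milligram values are invented.

```python
def ca_p_ratio(calcium_mg: float, phosphorus_mg: float) -> float:
    """Return the calcium:phosphorus ratio of a food item or diet."""
    if phosphorus_mg <= 0:
        raise ValueError("phosphorus must be positive")
    return calcium_mg / phosphorus_mg

if __name__ == "__main__":
    # Invented example values for a hypothetical feeder insect.
    ratio = ca_p_ratio(calcium_mg=9.0, phosphorus_mg=30.0)
    # 1.5-2:1 is an assumed husbandry rule of thumb, not a value from this text.
    print(f"Ca:P = {ratio:.2f}:1 ->", "OK" if 1.5 <= ratio <= 2.0 else "needs correction")
```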
Calcium works with magnesium in its functions in the
blood, nerves, muscles, and tissues, particularly in
regulating heart and muscle contraction and nerve conduction.
Vitamin D-3 is needed for much calcium (and phosphorus)
to be absorbed from the digestive tract.
Maintaining a balanced blood calcium level is essential
to life. If there is not enough calcium in the diet
to maintain sufficient amounts of calcium in the blood,
calcium will then be drawn out of the bones and
intestinal absorption of available calcium will increase. So even
though most of the calcium is in the bones, the blood
and cellular concentrations of this mineral are maintained.
Various factors can improve calcium absorption.
Besides vitamin D-3, vitamins A and C can help support
normal membrane transport of calcium. Protein intake
helps absorption of calcium, but too much protein may
reduce it. Some dietary fat may also help absorption,
but high fat may reduce it.
A fast-moving intestinal tract can also reduce calcium
absorption. Stress also can diminish calcium absorption,
possibly through its effect on stomach acid levels and
digestion. Though calcium in the diet improves the absorption
of the important vitamin B-12, too much of it may interfere
with the absorption of the competing minerals magnesium,
zinc, iron, and manganese.
Because of the many complex factors affecting calcium
absorption, anywhere from 30-80 percent may end up being
excreted. Some may be eliminated in the feces. The kidneys
also control calcium blood levels through their filtering
and reabsorption functions.
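Given that 30-80 percent of ingested calcium may end up excreted, retained calcium is simply intake times one minus the excreted fraction. A small sketch using that range; the intake value is hypothetical.

```python
def retained_calcium(intake_mg: float, excreted_fraction: float) -> float:
    """Calcium retained after excretion; excreted_fraction should be 0-1."""
    if not 0.0 <= excreted_fraction <= 1.0:
        raise ValueError("excreted_fraction must be between 0 and 1")
    return intake_mg * (1.0 - excreted_fraction)

if __name__ == "__main__":
    intake = 50.0  # hypothetical mg of calcium ingested in a day
    for frac in (0.3, 0.8):  # the 30-80 percent range quoted above
        print(f"excreted {frac:.0%}: ~{retained_calcium(intake, frac):.0f} mg retained")
```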
Chloride makes up about 0.15 percent of the body weight and is found
mainly in the extracellular fluid along with sodium.
As one of the mineral electrolytes, chloride works closely
with sodium and water to help the distribution of body fluids.
Chloride is easily absorbed from the small intestine.
It is eliminated through the kidneys, which can also
retain chloride as part of their finely controlled regulation
of acid-base balance.
Magnesium is a very important essential macro mineral, even though
it is only 0.05 percent of the body weight. It is involved
in several hundred enzymatic reactions, many of which
contribute to production of energy. As with calcium,
the bones act as a reservoir for magnesium in times
of need. The remaining magnesium is contained in the
blood, fluids, and other tissues. The process of digestion
and absorption of magnesium is very similar to that
of calcium. Diets high in protein or fat, or high
in phosphorus or calcium (calcium and magnesium can
compete), may decrease magnesium absorption.
Usually, about 40-50 percent of the magnesium is absorbed,
though this may vary from 25-75 percent depending on
stomach acid levels, body needs, and diets. Stress may
increase magnesium excretion. The kidneys can excrete
or conserve magnesium according to body needs. The intestines
can also eliminate excess magnesium in the faeces.
Phosphorus is the second most abundant mineral (after calcium)
present in a reptile's body, making up about 1 percent
of the total body weight. It is present in every cell,
but 85 percent of the phosphorus is found in the bones.
In the bones, phosphorus is present in the phosphate
form as the bone salt calcium phosphate in an amount
about half that of the total calcium. Both these important
minerals are in constant turnover, even in the bones.
A reptile uses a variety of mechanisms to control the
calcium-phosphorus ratio and metabolism. Phosphorus
is absorbed more efficiently than calcium. Nearly 70
percent of phosphorus is absorbed from the intestines,
although the rate depends somewhat on the levels of
calcium and vitamin D. Most phosphorus is deposited
in the bones, the rest is contained in the cells and
other tissues. Much is found in the red blood cells.
Iron, aluminium, and magnesium may all form insoluble
phosphates that are eliminated in the faeces, which results
in a decrease of phosphorus absorption.
Phosphorus is involved in many functions besides forming
bones. Phosphorus is important to the utilization of
carbohydrates and fats for energy production and also
in protein synthesis for the growth, maintenance, and
repair of all tissues and cells. It helps in kidney
function and acts as a buffer for acid-base balance
in the body. Phosphorus aids muscle contraction, including
the regularity of the heartbeat, and is also supportive
of proper nerve conduction. This important mineral supports
the conversion of niacin and riboflavin to their active forms.
There is no known toxicity specific to phosphorus; however,
high dietary phosphorus can readily affect calcium
metabolism. Potential calcium deficiency symptoms may
be more likely when the phosphorus dosage is very high.
Symptoms of phosphorus deficiency may include weakness,
weight loss, decreased growth, poor bone development,
and symptoms of rachitis may occur in phosphorus-deficient animals.
Potassium is a very significant mineral, important to both cellular
and electrical function. It is one of the main blood
minerals called "electrolytes" (the others are sodium
and chloride), which means it carries a tiny electrical
charge (potential). Potassium is the primary positive
ion found within the cells, where 98 percent of potassium
is found. Magnesium helps maintain the potassium in
the cells, but the sodium and potassium balance is as
finely tuned as those of calcium and phosphorus or calcium
and magnesium. Potassium is well absorbed from the small
intestine, with about 90 percent absorption and is one
of the most soluble minerals. Most excess potassium
is eliminated in the urine.
Along with sodium, it regulates the water balance and
the acid-base balance in the blood and tissues. Potassium
is very important in cellular biochemical reactions
and energy metabolism; it participates in the synthesis
of protein from amino acids in the cell. Potassium also
functions in carbohydrate metabolism; it is active in
glycogen and glucose metabolism, converting glucose
to glycogen that can be stored in the liver for future
energy. Potassium is important for normal growth and
for building muscle.
Elevations or depletions of this important mineral can
cause many problems. Maintaining consistent levels of
potassium in the blood and cells is vital to function.
Even with high dosages of potassium, the kidneys will
clear any excess, and blood levels will not be increased.
Low potassium may impair glucose metabolism. In more
severe potassium deficiency, there can be serious muscle
weakness, bone fragility, central nervous system changes,
and even death.
Silicon is another mineral that is not commonly written about
as an essential nutrient. It is present in the soil
and is actually the most abundant mineral in the earth's
crust, as carbon is the most abundant in plant and animal
tissue. Silicon is very hard and is found in rock crystals
such as quartz or flint. Silicon molecules in the tissues,
such as the nails and connective tissue, give them strength
and stability. Silicon is present in bone, blood vessels,
cartilage, and tendons, helping to make them strong.
Silicon is important to bone formation, as it is found
in active areas of calcification. It is also found in
plant fibres and is probably an important part of their
structure. This mineral is able to form long molecules,
much the same as carbon does, and gives these complex
configurations some durability and strength. Retarded
growth and poor bone development are the result of a
silicon-deficient diet. Collagen contains silicon, helping
hold the body tissues together. This mineral works with
calcium to help restore bones.
It deeply penetrates the tissues and helps to clear stored
toxins. The essential strength and stability this mineral
provides to the tissues should give them protection.
Sodium is the primary positive ion found in the blood and body
fluids; it is also found in every cell although it is
mainly extracellular, working closely with potassium,
the primary intracellular mineral. Sodium is one of
the electrolytes, along with potassium and chloride,
and is closely tied in with the movement of water; "where
sodium goes, water goes." Sodium chloride is present
in solution on a large part of the earth's surface in
ocean water. Along with potassium, sodium helps to regulate
the fluid balance both within and outside the cells.
Through the kidneys, by buffering the blood with a balance
of all the positive or negative ions present, these
two minerals help control the acid-base balance as well.
Sodium and potassium interaction helps to create an
electrical potential (charge) that enables muscles to
contract and nerve impulses to be conducted. Sodium
is also important to hydrochloric acid production in
the stomach and is used during the transport of amino
acids from the gut into the blood.
Since sodium is needed to maintain blood fluid volume,
excessive sodium can lead to increased blood volume,
especially when the kidneys do not clear it efficiently.
In the case of sodium, there is more of a concern with
toxicity from excesses than with deficiencies.
Sodium deficiency is less common than excess sodium,
as this mineral is readily available in all diets, but
when it does occur, deficiency can cause problems. The
deficiency is usually accompanied by water loss. When
sodium and water are lost together, the extracellular
fluid volume is depleted, which can cause decreased
blood volume, increased blood count and muscle weakness.
When sodium is lost alone, water flows into the cells,
causing cellular swelling and symptoms of water intoxication.
With low sodium, there is also usually poor carbohydrate metabolism.
Sulphur is an interesting non-metallic element that is found
mainly as part of larger compounds. Sulphur is present
in several amino acids: methionine, an essential amino
acid, and the nonessential cystine and cysteine.
Sulphur is also present in two B vitamins: thiamine,
which is important to the skin, and biotin, which is important to the scales. Sulphur
is also available as various sulphates or sulphides.
But overall, sulphur is most important as part of protein.
Sulphur is absorbed from the small intestine primarily
as the four amino acids or from sulphates in water,
fruits and vegetables. Sulphur is stored in all cells,
especially the skin, scales, and nails. Excess amounts
are eliminated through the urine or in the faeces.
As part of four amino acids, sulphur performs a number
of functions in enzyme reactions and protein synthesis.
It is necessary for formation of collagen, the protein
found in connective tissue. Sulphur is also present
in keratin, which is necessary for the maintenance of
the skin, scales, and nails, helping to give strength,
shape, and hardness to these protein tissues. There
is minimal reason for concern about either toxicity
or deficiency of sulphur. No clearly defined symptoms
exist with either state. Sulphur deficiency is common
with low-protein diets, or with a lack of intestinal
bacteria, though none of these seems to cause any problems
in regard to sulphur functions and metabolism.
Essential Trace Minerals (Trace elements)
Chromium is a vital mineral in regulating carbohydrate metabolism
by enhancing insulin function for proper use of glucose,
together with two niacin molecules and three amino
acids: glycine, cysteine, and glutamic acid.
Chromium is really considered an "ultra-trace" mineral,
since it is needed in such small quantities to perform
its essential functions. The blood contains about 20
parts per billion (ppb), a fraction of a microgram.
Even though it is in such small concentrations, this
mineral is important to a reptile’s health. The kidneys
clear any excess from the blood, while much of chromium
is eliminated through the faeces. This mineral is stored
in many parts, including the skin, fat, muscles and
kidneys. Because of the low absorption and high excretion
rates of chromium, toxicity is not at all common in
reptiles, but it is in amphibians.
Cobalt is another essential mineral needed in very small amounts
in the diet.
Cobalt, as part of vitamin B12, is not easily absorbed
from the digestive tract. It is stored in the red blood
cells and the plasma, as well as in the liver, and kidneys.
As part of vitamin B12, cobalt is essential to red blood
cell formation and is also helpful to other cells.
Toxicity can occur from excess inorganic cobalt found
as a food contaminant. High dosage may affect the thyroid
or cause overproduction of red blood cells, thickened
blood, and increased activity in the bone marrow.
Deficiency of cobalt is not really a concern with enough
vitamin B12. As cobalt deficiency leads to decreased
availability of B12, there is an increase of many symptoms
and problems related to B12 deficiency, particularly anaemia.
Copper is important as a catalyst in the formation of haemoglobin,
the oxygen-carrying molecule. It helps oxidize vitamin
C and works with C to form collagen (part of cell membranes
and the supportive matrix in muscles and other tissues),
especially in the bone and connective tissue. It helps
the cross-linking of collagen fibres and thus supports
the healing process of tissues and aids in proper bone
formation. An excess of copper may increase collagen
and lead to stiffer and less flexible tissues.
Copper is found in many enzymes; most important is the
cytoplasmic superoxide dismutase. Copper enzymes play
a role in oxygen-free radical metabolism, and in this
way have a mild anti-inflammatory effect. Copper also
functions in certain amino acid conversions. Being essential
in the synthesis of phospholipids, copper contributes
to the integrity of the myelin sheaths covering nerves.
It also aids the conversion of tyrosine to the pigment
melanin, which gives scales and skin their colouring.
Copper, as well as zinc, is important for converting
T4 (thyroxine) to T3 (triiodothyronine), both thyroid
hormones. Low copper levels may reduce thyroid functions.
Copper, like most metals, is a conductor of electricity;
it helps the nervous system function. It also helps
control levels of histamine. Copper in the blood is
bound to the protein ceruloplasmin, and copper is part
of the enzyme histaminase, which is involved in the
metabolism of histamine.
Problems of copper toxicity may include stress, hyperactivity,
nervousness, and discoloration of the skin and scales.
Copper deficiency is commonly found together with iron
deficiency. High zinc dosage can lead to lower copper
levels and some symptoms of copper deficiency. The reduced
red blood cell function and shortened red cell life
span can influence energy levels and cause weakness
and may also affect tissue health and healing. Weakened
immunity, skeletal defects related to bone demineralization,
and poor nerve conductivity, might all result from copper
depletions. Copper deficiency results in several abnormalities
of the immune system, such as reduced cellular immune
response, reduced activity of white blood cells and
an increased infection rate.
Iodine is an essential nutrient for production of thyroid hormones
and therefore is required for normal thyroid function.
The thyroid hormones, particularly thyroxine, which
is 65 percent iodine, are responsible for basal metabolic
rate, the reptile's use of energy. Thyroid hormone is required
for cell respiration and the production of energy and
further increases oxygen consumption and general metabolism.
The thyroid hormones, thyroxine and triiodothyronine,
are also needed for normal growth and development, protein
synthesis, and energy metabolism, as well as nerve and bone formation,
reproduction, and the condition of the skin, scales,
and nails. Thyroid hormone and, thus, iodine also affect the
conversion of carotene to vitamin A and of ribonucleic
acids to protein; cholesterol synthesis; and carbohydrate metabolism.
There is no significant danger of toxicity of iodine
from a natural diet, though some care must be taken
when supplementing iodine. High iodine dosage, however,
may actually reduce thyroxine production and thyroid
function. Deficiencies of iodine have been very common,
especially in areas where the soil is depleted, as discussed
earlier. Several months of iodine deficiency leads to
slower metabolism, decreased resistance to infection,
and a decrease in sexual energy.
The primary function of iron is the formation of haemoglobin. Iron
is the central core of the haemoglobin molecule, which
is the essential oxygen-carrying component of the red
blood cell (RBC). In combination with protein, iron
is carried in the blood to the bone marrow, where, with
the help of copper, it forms haemoglobin. The ferritin
and transferrin proteins actually hold and transport
the iron. Haemoglobin carries the oxygen molecules throughout
the body. Red blood cells pick up oxygen from the lungs
and distribute it to the rest of the tissues, all of
which need oxygen to survive. Iron's ability to change
back and forth between its ferrous and ferric forms
allows it to hold and release oxygen. Myoglobin is similar
to haemoglobin in that it is an iron-protein compound
that holds oxygen and carries it into the muscles, mainly
the skeletal muscles and the heart. It provides higher
amounts of oxygen to the muscles with increased activity.
Myoglobin also acts as an oxygen reservoir in the muscle
cells. So muscle performance actually depends on this
function of iron, besides the basic oxygenation by haemoglobin
through normal blood circulation.
Usually, it takes moderately high amounts over a long
period with minimal losses of this mineral to develop
any iron toxicity problems.
Iron deficiency occurs fairly commonly when a rapid
growth period increases iron needs, which are often
not met with additional dietary intake. Females need
more iron than males. Symptoms are weight loss from
decreased appetite, loss of energy, lowered immunity
(a weakened resistance), and may cause a strange symptom:
eating and licking inedible objects, such as stones,
mud, or glass.
Manganese is involved in many enzyme systems; that is, it helps
to catalyze many biochemical reactions. There are some
biochemical suggestions that manganese is closer to
magnesium in more than just name. It is possible that
magnesium can substitute for manganese in certain conditions
when manganese is deficient.
Manganese activates the enzymes necessary for a reptile
to use biotin, thiamine (B1), vitamin C, and choline.
It is important for the digestion and utilization of
food, especially proteins, through peptidase activity,
and it is needed for the synthesis of cholesterol and
fatty acids and in glucose metabolism.
Manganese may be one of the least toxic minerals.
Manganese deficiency can lead to sterility, and to infertile
eggs or to poor growth in the offspring. There is decreased
bone growth, especially in length.
Molybdenum is a vital part of three important enzyme systems (xanthine
oxidase, aldehyde oxidase, and sulphite oxidase) and
so has a vital role in uric acid formation and iron
utilization, in carbohydrate metabolism, and sulphite
detoxification as well. Xanthine oxidase (XO) helps
in the production of uric acid, an end product of protein
(purine) metabolism and may also help in the mobilization
of iron from liver reserves. Aldehyde oxidase helps
in the oxidation of carbohydrates and other aldehydes,
including acetaldehyde produced from ethyl alcohol.
Sulphite oxidase helps to detoxify sulphites.
Animals given large amounts experience weight loss,
slow growth, anaemia, or diarrhea, though these effects
may be more the result of low levels of copper, a mineral
with which molybdenum competes. Molybdenum-deficient
diets seem to produce weight loss and decreased life span.
Selenium has a variety of functions; its main role is as an antioxidant
in the enzyme selenium-glutathione peroxidase. Selenium
is part of a nutritional antioxidant system that protects
cell membranes and intracellular structural membranes
from lipid peroxidation. It is actually the selenocysteine
complex that is incorporated into glutathione peroxidase
(GP), an enzyme that helps prevent cellular degeneration
from the common peroxidase free radicals, such as hydrogen
peroxide. GP also aids red blood cell metabolism and
prevents chromosome damage in tissue cultures. As an
antioxidant, selenium in the form of selenocysteine
prevents or slows the biochemical aging process of tissue
degeneration and hardening-that is, loss of youthful
elasticity. This protection of the tissues and cell
membranes is enhanced by vitamin E. Selenium also protects
reptiles from the toxic effects of heavy metals and
other substances. Selenium may also aid in protein synthesis,
growth and development, and fertility, especially in
the male. It has been shown to improve sperm production.
High doses of selenium can lead to weight loss, liver
and kidney malfunction, and even death if the levels
are high enough. With selenium deficiency, there may
be increased risk of infections. Many other metals,
including cadmium, arsenic, silver, copper, and mercury,
are thought to be more toxic in the presence of selenium deficiency.
Zinc is involved in a multitude of functions and is part of
many enzyme systems. With regard to metabolism, zinc
is part of alcohol dehydrogenase, which helps the liver
detoxify alcohols (obtained often from eating rotten,
high sugar fruits), including ethanol, methanol, ethylene
glycol, and retinol (vitamin A). Zinc is also thought
to help utilize and maintain levels of vitamin A. Through
this action, zinc may help maintain healthy skin cells
and thus may be helpful in generating new skin. By helping
collagen formation, zinc may also improve wound healing.
Zinc is needed for lactate and malate dehydrogenases,
both important in energy production. Zinc is a cofactor
for the enzyme alkaline phosphatase, which helps contribute
phosphates to bones. Zinc is also part of bone structure.
Zinc is important to male sex organ function and reproductive
fluids. It is in high concentration in the eye, liver,
and muscle tissues, suggesting its functions in those tissues.
Zinc in carboxypeptidase (a digestive enzyme) helps
in protein digestion. Zinc is important for synthesis
of nucleic acids, both DNA and RNA. Zinc has also been
shown to support immune function.
Zinc is fairly non-toxic. There are many symptoms and
decreased functions due to zinc deficiency. It may cause
slowed growth or slow sexual behaviour. Lowered resistance
and increased susceptibility to infection may occur
with zinc deficiency, which is related to a decreased
cellular immune response. Sensitivity and reactions
to environmental chemicals may be exaggerated in a state
of zinc deficiency, as many of the important detoxification
enzyme functions may be impaired.
Reptiles with zinc deficiency may show poor appetite
and slow development. Dwarfism and a total lack of sexual
function may occur with serious zinc deficiency. Fatigue
is common. Sterility may result from zinc deficiency.
Birth defects have been associated with zinc deficiency
during pregnancy in experimental animals. The offspring
showed reduced growth patterns.
Other Trace minerals
Boron is an important trace mineral necessary for the proper
absorption and utilization of calcium for maintaining
bone density and the prevention of loss of bone mass.
It possibly affects calcium, magnesium, and phosphorus
balance and the mineral movement and makeup of the bones
by regulating the hormones.
Fluoride helps strengthen the crystalline structure of bones.
The calcium fluoride salt forms a fluorapatite matrix,
which is stronger and less soluble than other calcium
salts and therefore is not as easily reabsorbed into
circulation to supply calcium needs. In bones, fluoride
reduces loss of calcium and thereby may reduce osteoporosis.
No other functions of fluoride are presently known,
though it has been suggested to have a role in growth,
in iron absorption, and in the production of red blood cells.
Toxicity from fluoride is definitely a potential problem.
As stated, fluoridated water must be closely monitored
to keep the concentration at about 1 ppm. At 8 to about
20 ppm, initial tissue sclerosis will occur, especially
in the bones and joints. At over 20 ppm, much damage
can occur, including decreased growth and cellular changes,
especially in the metabolically active organs such as
the liver, kidneys, and reproductive organs. More than
50 ppm of fluoride intake can be fatal. Animals eating
extra fluoride in grains, vegetables or in water have
been shown to have bone lesions. Fat and carbohydrate
metabolism has also been affected. There are many other
concerns about fluoride toxicity, including bone malformations.
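The toxicity thresholds quoted above (about 1 ppm as the monitored level, tissue sclerosis from roughly 8-20 ppm, wider damage above 20 ppm, and potentially fatal intake above 50 ppm) map naturally onto a small classification helper. A sketch in Python that only restates those figures:

```python
def fluoride_risk(ppm: float) -> str:
    """Rough risk band for fluoride concentration, using the thresholds quoted above."""
    if ppm < 0:
        raise ValueError("concentration cannot be negative")
    if ppm <= 1:
        return "target level for fluoridated water (~1 ppm)"
    if ppm < 8:
        return "above target; monitor"
    if ppm <= 20:
        return "initial tissue sclerosis possible (8-20 ppm)"
    if ppm <= 50:
        return "damage to bones and active organs likely (>20 ppm)"
    return "potentially fatal (>50 ppm)"

if __name__ == "__main__":
    for level in (1, 10, 25, 60):
        print(level, "ppm ->", fluoride_risk(level))
```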
Sodium fluoride is less toxic than most other fluoride
salts. In cases of toxicity, extra calcium will bind
with the fluoride, making a less soluble and less active compound.
Fluoride deficiency is less of a concern. It is possible
that traces of fluoride are essential, but it is not
clear whether it is a natural component of the tissues.
Low fluoride levels may correlate with a higher amount
of bone fractures, but that is usually in the presence
The mineral germanium itself may be needed in small amounts; however,
research has not yet shown this. It is found in the
soil, in foods, and in many plants, such as aloe vera,
garlic, and ginseng. The organo-germanium does not release
the mineral germanium to the tissues for specific action,
but is absorbed, acts, and is eliminated as the entire compound.
It is not yet known what particular function of lithium may
make it an essential nutrient. It is thought to stabilize
serotonin transmission in the nervous system; it influences
sodium transport; and it may even increase lymphocytic
(white blood cell) proliferation and depress the suppressor
cell activity, thus strengthening the immune system.
Deficiency of lithium is not really known. Symptoms
of lithium toxicity include diarrhea, thirst, increased
urination, and muscle weakness. Skin eruptions may also
occur. With further toxicity, staggering, seizures,
kidney damage, coma, and even death may occur.
The function of nickel is still somewhat unclear. Nickel
is found in the body in highest concentrations in the
nucleic acids, particularly RNA, and is thought to be
somehow involved in protein structure or function. It
may activate certain enzymes related to the breakdown
or utilization of glucose. Low nickel can lead to decreased
growth, dermatitis, pigment changes, decreased reproduction
capacities, and compromised liver function.
There are currently no known essential functions of rubidium.
In studies with mice, rubidium has helped decrease tumour
growth, possibly by replacing potassium in cell transport
mechanisms or by rubidium ions attaching to the cancer
cell membranes. Rubidium may have a tranquilizing or
hypnotic effect in some animals.
There is no known deficiency or toxicity for rubidium.
Strontium may help improve the cell structure and mineral matrix
of the bones, adding strength and helping to prevent
soft bones, though it is not known if low levels of
strontium cause these problems.
There have been no cases of known toxicity from natural
strontium. Strontium deficiency may correlate with decreased
growth and poor calcification of the bones.
If tin has a function, it may be related to protein structure
or oxidation and reduction reactions, though tin is
generally a poor catalyst. Tin may interact with iron
and copper, particularly in the gut, and so inhibit their absorption.
Though tin is considered a mildly toxic mineral, there
are no known serious diseases. Studies in rats showed
mainly a slightly shortened life span. There are no
known problems from tin deficiency.
Little is known about vanadium's function. Vanadium seems to
be involved in catecholamine and lipid metabolism. It
has been shown to have an effect in reducing the production
of cholesterol. Other research involves its role in
calcium metabolism, in growth, reproduction, blood sugar
regulation, and red blood cell production. The enzyme-stimulation
role of vanadium may involve it in bone formation and,
through the production of coenzyme A, in fat metabolism.
Vanadium has been thought to be essentially non-toxic,
possibly because of poor absorption. In reptiles, vanadium
deficiency causes some problems with the scales and
shedding, bone development, and reproduction.
Toxic Trace Minerals
Aluminium has only recently been considered a problem mineral.
Though it is not very toxic in normal levels, neither
has it been found to be essential. Aluminium is very
abundant in the earth and in the sea. It is present
in only small amounts in animal and plant tissues. The
best way to prevent aluminium build-up is to avoid the
sources of aluminium. Some tap waters contain aluminium;
this can be checked. Avoiding aluminium water dishes
and replacing them with stainless steel, ceramic, or plastic
is a good idea.
Despite arsenic's reputation as a poison, it actually has fairly
low toxicity in comparison with some other metals. In
fact, it has been shown to be essential in animals. Organic
arsenic and elemental arsenic, both found naturally
in the earth and in foods, do not readily produce toxicity.
In fact, they are handled fairly easily and eliminated
by the kidneys. The inorganic arsenites or trivalent
forms of arsenic, such as arsenic trioxide seem to create
the problems. They accumulate in the body, particularly
in the skin, scales, and nails, but also in internal
organs. Arsenic can accumulate when kidney function
is decreased. Luckily, absorption of arsenic is fairly
low, so most is eliminated in the faeces and some in the urine.
In some studies, arsenic has been shown to promote longevity.
Cadmium, like lead, is an underground mineral that did not
enter our air, food, and water in significant amounts
until it was mined as part of zinc deposits. As cadmium
and zinc are found together in natural deposits, so
are they similar in structure and function. Cadmium
may actually displace zinc in some of its important
enzymatic and organ functions; thus, it interferes with
these functions or prevents them from being completed.
The zinc-cadmium ratio is very important, as cadmium
toxicity and storage are greatly increased with zinc
deficiency, and good levels of zinc protect against
tissue damage by cadmium. The refinement of grains reduces
the zinc-cadmium ratio, so zinc deficiency and cadmium
toxicity are more likely when the diet is high in refined grains.
Besides faecal losses, cadmium is excreted mainly by the kidneys.
This mineral is stored primarily in the liver and kidneys.
In rat studies, higher levels of cadmium are associated
with an increase in heart size, higher blood pressure,
progressive atherosclerosis, and reduced kidney function.
And in rats, cadmium toxicity is worse with zinc deficiency
and reduced with higher zinc intake.
Cadmium appears to depress some immune functions, mainly
by reducing host resistance to bacteria and viruses.
Cadmium also affects the bones. It was associated with
weak bones that led to deformities, especially of the
spine, or to more easily broken bones.
The metal lead is the most common toxic mineral; it is the
worst and most widespread pollutant, though luckily
not the most toxic; cadmium and mercury are worse. Though
this is not completely clear, lead most likely interferes
with functions performed by essential minerals such
as calcium, iron, copper, and zinc. Lead does interrupt
several red blood cell enzymes. Especially in brain
chemistry, lead may create abnormal function by inactivating
important zinc-, copper-, and iron-dependent enzymes.
Lead affects both the brain and the peripheral nerves
and can displace calcium in bone. Lead also lowers host
resistance to bacteria and viruses, and thus allows
an increase in infections.
Calcium and magnesium reduce lead absorption. Iron,
copper, and zinc also do this. With low mineral intake,
lead absorption and potential toxicity are increased.
Algin, as from kelp (seaweed) or the supplement sodium
alginate, helps to bind lead and other heavy metals
in the gastrointestinal tract and carry them to elimination
and reduce absorption. Along with this, essential
vitamins and minerals, such as the Bs, iron, calcium,
zinc, copper, and chromium, help decrease lead absorption.
or "quicksilver," is a shiny liquid metal that is fairly
toxic, though the metallic mercury is less so. Especially
a problem is methyl or ethyl mercury, or mercuric chloride,
which is very poisonous.
Some mercury is retained in the tissues, mainly in the
kidneys, which store about 50 percent of the body mercury.
The blood, bones, liver, spleen, brain, and fat tissue
also hold mercury. This potentially toxic metal does
get into the brain and nerve tissue, so central nervous
system symptoms may develop. But mercury is also eliminated
daily through the urine and faeces. Mercury has no known
essential functions and probably affects the inherent
protein structure, which may interfere with functions
relating to protein production. Mercury may also interfere
with some functions of selenium, and can be an immunosuppressant.
Algin can decrease absorption of mercury, especially
inorganic mercury. Selenium binds both inorganic and
methyl mercury; mercury selenide is formed and excreted
in faecal matter. Selenium is, for many reasons, an
important nutrient; it does seem to protect against
heavy metal toxicity.
Vitamins are essential for the proper
regulation of reproduction, growth, and energy production.
Since reptiles are unable to manufacture most of the vitamins
required for good health, they must be obtained from dietary
sources, either from food or from supplements. Vitamins are commonly referred to
micronutrients because of the extremely small amounts required
to maintain optimal health, as compared to macronutrients
such as fats, protein and carbohydrates, which are required
in much greater amounts. Vitamins, unlike the macronutrients,
are not a source of calories, but without adequate amounts,
reptiles cannot utilize the macronutrients, and health and
energy levels will suffer. Vitamins are divided into two sub-categories,
fat-soluble vitamins and water-soluble vitamins. The four
fat soluble vitamins, vitamins A, D, E and K, share a chemical
relationship, based on the common need for cholesterol in
their synthesis. The fat-soluble vitamins can be stored
in fatty tissues and be released at a later time as needed.
The vitamin B complex consists of a family of nutrients
that have been grouped together due to the interrelationships
in their function in enzyme systems, as well as their distribution
in natural food sources. All of the B vitamins are soluble
in water and are easily absorbed. Unlike fat-soluble nutrients,
the B-complex vitamins cannot be stored in the body, and
must therefore be replaced daily from food sources or supplements.
Unlike humans, reptiles are able to synthesize
the vitamin C they need.
Water-Soluble Vitamins
Biotin is also known as bios, vitamin B8, vitamin H, protective
factor X, and coenzyme R. Biotin
is a water-soluble vitamin and member of the B-complex
family. Biotin is an essential nutrient that is required
for cell growth and for the production of fatty acids.
Biotin also plays a central role in carbohydrate and
protein metabolism and is essential for the proper utilization
of the other B-complex vitamins. Biotin contributes
to healthy skin and scales. A biotin deficiency is rare,
as biotin is easily synthesized in the intestines by
bacteria, usually in amounts far greater than are normally
required for good health. Those at highest risk for biotin
deficiency are reptiles with digestive problems that
can interfere with normal intestinal absorption, and
those treated with antibiotics or sulfa drugs, which
can inhibit the growth of the intestinal bacteria that synthesize it.
Folic acid is also known as vitamin M and vitamin B9. Folic
acid is a water-soluble nutrient belonging to the B-complex
family. The name folic acid is derived from the Latin
word "folium", so chosen since this essential nutrient
was first extracted from green leafy vegetables, or
foliage. Among its various important roles, folic acid
is a vital coenzyme required for the proper synthesis of
the nucleic acids that maintain the genetic codes and
ensure healthy cell division. Adequate levels of folic
acid are essential for energy production and protein
metabolism, for the formulation of red blood cells,
and for the proper functioning of the intestinal tract.
Folic acid deficiency affects all cellular functions,
but most importantly it reduces the reptile's ability
to repair damaged tissues and grow new cells. Tissues
with the highest rate of cell replacement, such as red
blood cells, are affected first. Folic acid deficiency
symptoms include diarrhea, poor nutrient absorption
and malnutrition leading to stunted growth and weakness.
Hesperidin is one of the bioflavonoids, naturally occurring nutrients
usually found in association with Vitamin C. The bioflavonoids,
sometimes called Vitamin P, were found to be essential
in correcting bruising tendencies and improving
the permeability and integrity of the capillary lining.
These bioflavonoids include Hesperidin, Citrin, Rutin,
Flavones, Flavonols, Catechin, and Quercetin.
Hesperidin deficiency has been linked with abnormal
capillary leaking causing weakness. Supplemental Hesperidin
may also help reduce excess swelling in the limbs due
to fluid accumulation. Like other bioflavonoids, hesperidin
works together with Vitamin C and other bioflavonoids.
No signs of toxicity have been observed with normal use.
Vitamin B-1 (thiamine) is a nutrient with a critical role in maintaining
the central nervous system. Adequate thiamine levels
can dramatically affect physiological wellbeing. Conversely,
inadequate levels of B1 can lead to eye weakness and
loss of physical coordination.
Vitamin B1 is required for the production of hydrochloric
acid, for forming blood cells, and for maintaining healthy
circulation. It also plays a key role in converting
carbohydrates into energy, and in maintaining good muscle
tone of the digestive system.
Like all the B-vitamins, B-1 is a water-soluble nutrient
that cannot be stored in the body, but must be replenished
on a daily basis. B-1 is most effective when given in
a balanced complex of the other B vitamins.
A chronic deficiency of thiamine will lead to damage
of the central nervous system. Thiamine levels can be
affected in combination with antibiotics and sulfa drugs.
A diet high in carbohydrates can also increase the need for thiamine.
Vitamin B-2 (riboflavin) is an easily absorbed, water-soluble micronutrient
with a key role in maintaining health. Like the other
B vitamins, riboflavin supports energy production by
aiding in the metabolization of fats, carbohydrates
and proteins. Vitamin B-2 is also required for red blood
cell formation and respiration, antibody production,
and for regulating growth and reproduction. Riboflavin
is known to increase energy levels and aid in boosting
immune system functions. It also plays a key role in
maintaining healthy scales, skin and nails. A deficiency
of vitamin B-2 may be indicated by the appearance of
skin and shedding problems. Gravid females need Vitamin
B-2, as it is critical for the proper growth and development
of the eggs.
Vitamin B-3 is also known as niacin, niacinamide, and nicotinic acid.
Vitamin B-3 is an essential nutrient required by all animals
for the proper metabolism of carbohydrates, fats, and
proteins, as well as for the production of hydrochloric
acid for digestion. B-3 also supports proper blood circulation,
healthy skin, and aids in the functioning of the central
nervous system. It also supports the
higher functions of the brain and cognition. Lastly,
adequate levels of B-3 are vital for the proper synthesis
of insulin and of sex hormones such as estrogen and testosterone.
A deficiency in vitamin B-3 can result in a disorder
characterized by malfunctioning of the nervous system,
diarrhea, and skin and shedding problems.
Pantothenic acid (vitamin B-5) is a water-soluble B vitamin that cannot be stored
in the body but must be replaced daily, either from
diet or from supplements.
Pantothenic acid's most important function is as an
essential component in the production of coenzyme A,
a vital catalyst that is required for the conversion
of carbohydrates, fats, and protein into energy. Pantothenic
acid is also referred to as an antistress vitamin due
to its vital role in the formation of various adrenal
hormones, steroids, and cortisone, as well as contributing
to the production of important brain neuro-transmitters.
B-5 is required for the production of cholesterol, bile,
vitamin D, red blood cells, and antibodies.
Lack of B5 can lead to a variety of symptoms including
skin disorders, digestive problems and muscle cramps.
Vitamin B-6 is a water-soluble nutrient that cannot be stored
in the body, but must be obtained daily from either
dietary sources or supplements.
Vitamin B-6 is an important nutrient that supports more
vital functions than any other vitamin. This is due
to its role as a coenzyme involved in the metabolism
of carbohydrates, fats, and proteins. Vitamin B-6 is
also responsible for the manufacture of hormones, red
blood cells, neurotransmitters and enzymes. Vitamin
B-6 is required for the production of serotonin, a brain
neurotransmitter that controls appetite, sleep patterns,
and sensitivity to pain. A deficiency of vitamin B-6
can quickly lead to a profound malfunctioning of the
central nervous system.
Among its many benefits, vitamin B-6 is recognized for
helping to maintain healthy immune system functions.
Vitamin B-12 is also known as cobalamin and cyanocobalamin.
Vitamin B-12 is a water-soluble compound of the B vitamin family
with a unique difference. Unlike the other B-vitamins
which cannot be stored, but which must be replaced daily,
vitamin B12 can be stored for long periods in the liver.
Vitamin B-12 is a particularly important coenzyme that
is required for the proper synthesis of DNA, which controls
the healthy formation of new cells throughout the body.
B-12 also supports the action of vitamin C, and is necessary
for the proper digestion and absorption of foods, for
protein synthesis, and for the normal metabolism of
carbohydrates and fats. Additionally, vitamin B-12 prevents
nerve damage by contributing to the formation of nerve
cell insulation. B-12 also maintains fertility, and
helps promote normal growth and development.
Since vitamin B-12 can be easily stored in the reptile’s
body, and is only required in tiny amounts, symptoms
of severe deficiency usually take time to appear. When
symptoms do surface, it is likely that deficiency was
due to digestive disorders or malabsorption rather than
to poor diet. The source of B-12 in herbivorous reptiles
is not known, since B-12 only comes from animal sources.
Due to its role in healthy cell formation, a deficiency
of B-12 disrupts the formation of red blood cells, leading
to reduced numbers of poorly formed red cells. Symptoms
include loss of appetite. B-12 deficiency can lead to
improper formation of nerve cells, resulting in irreversible nerve damage.
Vitamin C is a powerful water-soluble antioxidant that is vital
for the growth and maintenance of all body tissues.
Though easily absorbed by the intestines, vitamin C
cannot be stored in the body, and is excreted in the
urine. Unlike humans, apes, and guinea pigs, reptiles
are able to synthesize the vitamin C they need.
One of vitamin C's most vital roles is in the
production of collagen, an important cellular component
of connective tissues, muscles, tendons, bones, scales
and skin. Collagen is also required for the repair of
blood vessels, bruises, and broken bones.
This easily destroyed nutrient also protects against
the ravages of free radicals, which destroy cell membranes
on contact and damage DNA strands, leading to degenerative
diseases. The antioxidant activity of vitamin C can
also protect reptiles from the damaging effects of radiation.
Vitamin C also aids in the metabolization of folic acid,
regulates the uptake of iron, and is required for the
conversion of some amino acids. The hormone responsible
for sleep, pain control, and well-being also requires
adequate supplies of vitamin C. A deficiency of ascorbic
acid can impair the production of collagen and lead
to retarded growth, reduced immune response, and increased
susceptibility to infections.
Fat-Soluble Vitamins
Vitamin A is also known as retinol (preformed vitamin A).
Vitamin A is a vital fat-soluble nutrient and antioxidant that
can maintain healthy skin and confer protection against
diseases. Vitamin A is commonly found in two forms:
as preformed vitamin A (retinol) and as provitamin A (beta-carotene).
Vitamin A deficiency can lead to blindness and defective
formation of bones and scales. In addition to promoting
good vision, other recognized major benefits of vitamin
A include its ability to boost the immune system, speed
recovery from infections, and promote wound healing.
Vitamin D-3 is required for the proper regulation and absorption
of the essential minerals calcium and phosphorus. Vitamin
D-3 can be produced photochemically by the action of
sunlight or ultraviolet light from the precursor sterol
7-dehydrocholesterol which is present in the epidermis
or skin of most higher animals. Adequate levels of Vitamin
D-3 are required for the proper absorption of calcium
and phosphorus in the small intestines. Vitamin D-3
further supports and regulates the use of these minerals
for the growth and development of the bones and teeth.
Because of this vital link, adequate intake of Vitamin
D-3 is critically important for the proper mineralization
of the bones in developing reptiles. Vitamin D-3 also
aids in the prevention and treatment of metabolic bone
disease and hypocalcemia in adults.
A prolonged Vitamin D-3 deficiency may result in rachitis (rickets),
a bone deformity, and in softening of the bone tissue.
Ultraviolet rays (in the UVB range) acting directly
upon the skin can synthesize vitamin D-3, so brief but
regular exposure to sunlight is usually an effective
way to assure adequate levels of Vitamin D-3.
Since the body can endogenously produce vitamin D-3
and since it is retained for long periods of time by
vertebrate tissue, it is difficult to determine with
precision the minimum daily requirements for this seco-steroid.
The requirement for vitamin D is also known to be dependent
on the concentration of calcium and phosphorus in the
diet, the physiological stage of development, age, sex
and degree of exposure to the sun (geographical location).
High levels of vitamin D-3 can be toxic and have the
same effects as a deficiency.
Vitamin E functions as a powerful antioxidant to protect fatty
tissues from free radical damage. Free radicals are
extremely dangerous and reactive oxygen compounds that
are constantly being produced from a variety of natural
sources such as radiation and the breakdown of proteins
in the body. Left unchecked, free radicals rupture
cell membranes, causing massive damage to skin, connective
tissues and cells.
Vitamin E also plays an important role in reproduction and muscle function.
Vitamin K is an essential fat-soluble vitamin that is required
for the regulation of normal blood clotting functions.
Dietary vitamin K is found primarily in
dark leafy vegetables, but most of the needs for this
micronutrient are met by micro organisms that synthesize
vitamin K in the intestinal tract.
Vitamin K's main function is to synthesize a protein
vital for blood clotting. Vitamin K also aids in converting
glucose into glycogen for storage in the liver, and
may also play a role in bone formation. Vitamin
K deficiency can result in impaired blood clotting and
internal bleeding. A deficiency of vitamin K can be
caused by use of antibiotics, which can inhibit the
growth of the intestinal micro organisms required for
the synthesis of vitamin K.
Provitamins are substances which are transformed into vitamins in
the body. 7-Dehydrocholesterol (sometimes known as provitamin D3) is a chemical
that, in the presence of ultraviolet light, is converted
by the body to previtamin D3 which is then
isomerized into Vitamin D3.
Beta-carotene is an exciting and powerful fat-soluble antioxidant
with tremendous ability to neutralize free radicals
and fight infectious diseases. Beta-carotene is also
referred to as provitamin A because in its natural form
it is not readily available for use in reptiles. When
there is need for extra vitamin A, beta carotene undergoes
a transformation as powerful liver enzymes split each
molecule of beta carotene to form two molecules of vitamin
A. This unique feature enables beta carotene to be non-toxic
at high doses whereas vitamin A can produce toxic effects
in relatively low doses. In addition to promoting good
vision, beta-carotene also boosts immune functions,
speeds recovery from infections and promotes wound healing.
Enzymes are an essential ingredient of the digestion process. From
the time food enters the mouth, enzymes are at work breaking
the food down into smaller and smaller components until
it can be absorbed through the intestinal wall and into
the blood stream. These enzymes come from two sources, those
found in the food itself, and those secreted in the body. Food
naturally contains the enzymes necessary to aid in its digestion.
When food is chewed, enzymes are liberated to aid in digestion.
Enzymes called proteases break down proteins into polypeptides
(smaller amino acids chains) and eventually into single
amino acids. Amylase reduces complex carbohydrates to simpler
sugars like sucrose, lactose, and maltose. The Lipase enzyme
turns fat into free fatty acids and glycerol. Cellulases
break the bonds found in fibre and liberate the nutritional
value of fruits and vegetables. A reptile
is capable of producing similar enzymes, with the exception
of cellulase, necessary to digest food and allow for the
absorption of nutrients. Most
food enzymes are functionally destroyed in processed food,
leaving them without natural enzyme activity. Reptiles need
a certain amount of enzymes to properly digest food and
thus must produce more of their own enzymes in order to
make up the difference. The digestive processes can become
over-stressed, leading to inadequate enzyme production
in the organs designed to do so. This digestive inadequacy
can cause improper digestion and poor absorption of nutrients
having far reaching effects. The consequences of malabsorption
may include impaired immune system function, poor wound
healing and skin problems. Supplementing
with added enzymes can improve digestion and help assure
maximum nutrient absorption.
Protease is responsible for digesting protein in food, which
is probably one of the most difficult substances to
metabolize. Because of this, protease is considered
to be one of the most important enzymes. If the digestive
process is incomplete, undigested protein can wind up
in the circulatory system, as well as in other parts
of the body.
Amylase is a group of enzymes that are present in saliva, pancreatic
juice, and parts of plants and catalyze the hydrolysis
of starch to sugar to produce carbohydrate derivatives.
Amylase, also called diastase, is responsible for digesting
carbohydrates in the food. It hydrolyzes starch, glycogen,
and dextrin to form, in all three instances, glucose,
maltose, and limit-dextrin. Salivary amylase is known
as ptyalin. Ptyalin begins polysaccharide digestion
in the mouth; the process is completed in the small
intestine by the pancreatic amylase, sometimes called amylopsin.
Lipase is an enzyme capable of degrading lipid molecules. The
bulk of dietary lipids are a class called triacylglycerols
and are attacked by lipases to yield simple fatty acids
and glycerol, molecules which can permeate the membranes
of the stomach and small intestine for use by the body.
Cellulase is included to break down plant fibre (cellulose). It
is actually a complex consisting of three distinct enzymes
which together convert cellulose to glucose. Without
it, plant fibre passes through undigested.
Probiotics are friendly bacteria
found in the mouth and intestines of healthy reptiles. These
microorganisms help defend against invading bacteria and
yeasts. Probiotic bacteria contribute to gastrointestinal
health by providing a synergistic environment and producing
health promoting substances including some vitamins. They
can regulate bowel movements and halt diarrhea while at
the same time enhancing the immune system. The use of antibiotics
kills both the beneficial and the harmful bacteria. Supplemental
replenishment with probiotics can quickly return the flora
balance to normal, thus preventing many of the common side
effects associated with antibiotic treatment.
Rhetoric is the art of speaking or writing effectively. It may entail the study of principles and rules of composition formulated by critics of ancient times, and it can also involve the study of writing or speaking as a means of communication or persuasion. Classical rhetoric probably developed along with democracy in Syracuse (Sicily) in the 5th century BC, when dispossessed landowners argued claims before their fellow citizens. Shrewd speakers sought help from teachers of oratory, called rhetors. This use of language was of interest to philosophers such as Plato and Aristotle because the oratorical arguments called into question the relationships among language, truth, and morality. The Romans recognized separate aspects of the process of composing speeches, a compartmentalization that grew more pronounced with time. Renaissance scholars and poets studied rhetoric closely, and it was a central concern of humanism. In all times and places where rhetoric has been significant, listening and reading and speaking and writing have been the critical skills necessary for effective communication.
Rhetoric has had many definitions; no simple definition can do it justice. In fact, the very act of defining has itself been a central part of rhetoric: It appears among Aristotle's topoi, heuristics for rhetorical inventio. For Aristotle, rhetoric is the art of practical wisdom and decision making, a counterpart to logic and a branch of politics. The word is derived from the ancient Greek eiro, which means "I say." In its broadest sense, rhetoric concerns human discourse.
In eras of European history, rhetoric concerned itself with persuasion in public and political settings such as assemblies and courts. Because of its associations with democratic institutions, rhetoric is commonly said to flourish in open and democratic societies with rights of free speech, free assembly, and political enfranchisement for some portion of the population.
As a course of study, rhetoric trains students to speak and/or write effectively. The rhetorical curriculum is nearly as old as the rhetorical tradition itself. Over its many centuries, the curriculum has been transformed in a number of ways, but, in general, it has emphasized the study of principles and rules of composition as a means for moving audiences. In Greece, rhetoric originated with a school of pre-Socratic philosophers known as the Sophists in the 5th century BC. It was later taught in the Roman Empire and during the Middle Ages as one of the three original liberal arts or trivium (along with logic and grammar).
The relationship between rhetoric and knowledge is one of its oldest and most interesting problems. The contemporary stereotype of rhetoric as "empty speech" or "empty words" reflects a radical division of rhetoric from knowledge, a division that has had influential adherents within the rhetorical tradition, most notably Plato in ancient Athens and Peter Ramus in 16th-century Renaissance Europe. It is a division that has been strongly associated with Enlightenment thinking about language.
Most rhetoricians, however, see a closer relationship between rhetoric and knowledge. Researchers in the rhetoric of science, for instance, have shown how the two are difficult to separate, and how discourse helps to create knowledge. This perspective is often called "epistemic rhetoric," where communication among interlocutors is fundamental to the creation of knowledge in communities.
Emphasizing this close relationship between discourse and knowledge, contemporary rhetoricians have been associated with a number of philosophical and social scientific theories that see language and discourse as central to, rather than in conflict with, knowledge-making (see Critical Theory, Post-structuralism, Hermeneutics, Reflexivity).
Contemporary studies of rhetoric address a more diverse range of domains than was the case in ancient times. While classical rhetoric trained speakers to be effective persuaders in public forums and institutions like courtrooms and assemblies, contemporary rhetoric investigates human discourse writ large. Rhetoricians have studied the discourses of a wide variety of domains, including the natural and social sciences, fine art, religion, journalism, fiction, history, cartography, and architecture, along with the more traditional domains of politics and the law.
Public relations, lobbying, law, marketing, professional and technical writing, and advertising are modern professions that employ rhetorical practitioners.
Rhetoric thus evolved as an important art, one that provided the orator with the forms, means, and strategies for persuading an audience of the correctness of the orator's arguments. Today the term rhetoric can be used at times to refer only to the form of argumentation, often with the pejorative connotation that rhetoric is a means of obscuring the truth. Classical philosophers believed quite the contrary: the skilled use of rhetoric was essential to the discovery of truths, because it provided the means of ordering and clarifying arguments.
The word "sophistry" developed strong negative connotations in ancient Greece that continue today, but in ancient Greece sophists were nevertheless popular and well-paid professionals, widely respected for their abilities but also widely criticized for their excesses.
See Jacqueline de Romilly, The Great Sophists in Periclean Athens (French orig. 1988; English trans. Clarendon Press/Oxford University Press, 1992).
Plato (427-347 BC) famously outlined the differences between true and false rhetoric in a number of dialogues, but especially the Gorgias and the Phaedrus. Both dialogues are complex and difficult, but in both Plato disputes the Sophistic notion that an art of persuasion, the art of the Sophists which he calls "rhetoric" (after the public speaker or rhêtôr), can exist independent of the art of dialectic. Plato claims that since Sophists appeal only to what seems likely or probable, rather than to what is true, they are not at all making their students and audiences "better," but simply flattering them with what they want to hear. While Plato's condemnation of rhetoric is clear in the Gorgias, in the Phaedrus he seems to suggest the possibility of a true art of rhetoric based upon the knowledge produced by dialectic, and he relies on such a dialectically informed rhetoric to appeal to the main character, Phaedrus, to take up philosophy. It is possible that in developing his own theory of knowledge, Plato coined the term "rhetoric" both to denounce what he saw as the false wisdom of the sophists, and to advance his own views on knowledge and method. Plato's animosity against the Sophists derives not only from their inflated claims to teach virtue and their reliance on appearances, but from the fact that his teacher, Socrates, was accused of being a sophist and ultimately sentenced to death for his teaching.
In the first sentence of The Art of Rhetoric, Aristotle says that "rhetoric is the counterpart [literally, the antistrophe] of dialectic." As the "antistrophe" of a Greek ode responds to and is patterned after the structure of the "strophe" (they form two sections of the whole and are sung by two parts of the chorus), so the art of rhetoric follows and is structurally patterned after the art of dialectic because both are arts of discourse production. Thus, while dialectical methods are necessary to find truth in theoretical matters, rhetorical methods are required in practical matters such as adjudicating somebody's guilt or innocence when charged in a court of law, or adjudicating a prudent course of action to be taken in a deliberative assembly. For Plato and Aristotle, dialectic involves persuasion, so when Aristotle says that rhetoric is the antistrophe of dialectic, he means that rhetoric as he uses the term has a domain or scope of application that is parallel to but different from the domain or scope of application of dialectic. In Nietzsche Humanist (1998: 129), Claude Pavur explains that "[t]he Greek prefix 'anti' does not merely designate opposition, but it can also mean 'in place of.'" When Aristotle characterizes rhetoric as the antistrophe of dialectic, he no doubt means that rhetoric is used in place of dialectic when we are discussing civic issues in a court of law or in a legislative assembly. The domain of rhetoric is civic affairs and practical decision making in civic affairs, not theoretical considerations of operational definitions of terms and clarification of thought -- these, for him, are in the domain of dialectic.
Aristotle's treatise on rhetoric is an attempt to systematically describe civic rhetoric as a human art or skill (techne). His definition of rhetoric as "the faculty of observing in any given case the available means of persuasion," essentially a mode of discovery, seems to limit the art to the inventional process, and Aristotle heavily emphasizes the logical aspect of this process. But the treatise in fact also discusses not only elements of style and (briefly) delivery, but also emotional appeals (pathos) and characterological appeals (ethos). He thus identifies three steps or "offices" of rhetoric--invention, arrangement, and style--and three different types of rhetorical proof: logos, the appeal to reason; pathos, the appeal to emotion; and ethos, the appeal grounded in the character of the speaker.
Aristotle also identifies three different types or genres of civic rhetoric: forensic (also known as judicial), concerned with determining the truth or falsity of events that took place in the past and with issues of guilt; deliberative (also known as political), concerned with determining whether or not particular actions should be taken in the future; and epideictic (also known as ceremonial), concerned with praise and blame, values, right and wrong, and with demonstrating beauty and skill in the present.
One of the most fruitful of Aristotelian doctrines was the idea of topics (also referred to as common topics or commonplaces). Though the term had a wide range of application (as a memory technique or compositional exercise, for example) it most often referred to the "seats of argument"--the list of categories of thought or modes of reasoning--that a speaker could use in order to generate arguments or proofs. The topics were thus a heuristic or inventional tool designed to help speakers categorize and thus better retain and apply frequently used types of argument. For example, since we often see effects as "like" their causes, one way to invent an argument (about a future effect) is by discussing the cause (which it will be "like"). This and other rhetorical topics derive from Aristotle's belief that there are certain predictable ways in which humans (particularly non-specialists) draw conclusions from premises. Based upon and adapted from his dialectical Topics, the rhetorical topics became a central feature of later rhetorical theorizing, most famously in Cicero's work of that name.
See Eugene Garver, Aristotle's Rhetoric: An Art of Character (University of Chicago Press,1994).
Latin rhetoric was developed out of the Rhodian schools of rhetoric. In the second century BC, Rhodes became an important educational center, particularly of rhetoric, and the sons of noble Roman families studied there.
Although not widely read in Roman times, the Rhetorica ad Herennium (sometimes attributed to Cicero, but probably not his work) is a notable early work on Latin rhetoric. Its author was probably a Latin rhetorician in Rhodes, and for the first time we see a systematic treatment of Latin elocutio. The Ad Herennium provides a glimpse into the early development of Latin rhetoric, and in the Middle Ages and Renaissance, it achieved wide publication as one of the basic school texts on rhetoric.
Whether or not he wrote the Rhetorica ad Herennium, Cicero, along with Quintilian (the most influential Roman teacher of rhetoric), is considered one of the most important Roman rhetoricians. His works include the early and very influential De Inventione (On Invention, often read alongside the Ad Herennium as the two basic texts of rhetorical theory throughout the Middle Ages and into the Renaissance), De Oratore (a fuller statement of rhetorical principles in dialogue form), Topics (a rhetorical treatment of common topics, highly influential through the Renaissance), Brutus (a discussion of famous orators) and Orator (a defense of Cicero's style). Cicero also left a large body of speeches and letters which would establish the outlines of Latin eloquence and style for generations to come. It was the rediscovery of Cicero's speeches (such as the defence of Archias) and letters (to Atticus) by Italians like Petrarch that, in part, ignited the cultural innovations that we know as the Renaissance.
Quintilian's career began as a pleader in the courts of law; his reputation grew so great that Vespasian created a chair of rhetoric for him in Rome. The culmination of his life's work was the Institutio oratoria (or Institutes of Oratory), a lengthy treatise on the training of the orator in which he discusses the training of the "perfect" orator from birth to old age and, in the process, reviews the doctrines and opinions of many influential rhetoricians who preceded him.
In the Institutes, Quintilian organizes rhetorical study through the stages of education that an aspiring orator would undergo, beginning with the selection of a nurse. Aspects of elementary education (training in reading and writing, grammar, and literary criticism) are followed by preliminary rhetorical exercises in composition (the progymnasmata) that include maxims and fables, narratives and comparisons, and finally full legal or political speeches. The delivery of speeches within the context of education or for entertainment purposes became widespread and popular under the term "declamation." Rhetorical training proper was categorized under five canons that would persist for centuries in academic circles: invention (inventio), arrangement (dispositio), style (elocutio), memory (memoria), and delivery (pronuntiatio or actio).
This work was available only in fragments in medieval times, but the discovery of a complete copy at Abbey of St. Gall in 1416 led to its emergence as one of the most influential works on rhetoric during the Renaissance.
Quintilian's work attempts to describe not just the art of rhetoric, but the formation of the perfect orator as a politically active, virtuous, publicly minded citizen. His emphasis on the real life application of rhetorical training was in part nostalgia for the days when rhetoric was an important political tool, and in part a reaction against the growing tendency in Roman schools toward standardization of themes and techniques and increasing separation between school exercises and actual legal practice, a tendency equally powerful today in public schools and law schools alike. At the same time that rhetoric was becoming divorced from political decision making, rhetoric rose as a culturally vibrant and important mode of entertainment and cultural criticism in a movement known as the "second sophistic," a development which gave rise to the charge (made by Quintilian and others) that teachers were emphasizing ornamentation over substance in rhetoric. Quintilian's masterful work was not enough to curb this movement, but his dismayed response cemented the scholarly opinion that 2nd century C.E. rhetoric fell into decadence and political irrelevance, despite its wide popularity and cultural importance.
A valuable collection of studies can be found in Stanley E. Porter, ed., Handbook of Classical Rhetoric in the Hellenistic Period 330 B.C. - A.D. 400 (Brill, 1997).
Although he is not commonly regarded as a rhetorician, St. Augustine (354-430) was trained in rhetoric and was at one time a professor of Latin rhetoric in Milan. After his conversion to Christianity, he became interested in using these "pagan" arts for spreading his religion. This new use of rhetoric is explored in the Fourth Book of his De Doctrina Christiana, which laid the foundation of what would become homiletics, the rhetoric of the sermon. Augustine begins the book by asking why "the power of eloquence, which is so efficacious in pleading either for the erroneous cause or the right", should not be used for righteous purposes (IV.3).
One early concern of the medieval Christian church was its attitude to classical rhetoric itself. Jerome (d. 420) complained, "What has Horace to do with the Psalms, Virgil with the Gospels, Cicero with the Apostles?" Augustine is also remembered for arguing for the preservation of pagan works and fostering a church tradition which led to conservation of numerous pre-Christian rhetorical writings.
Rhetoric would not regain its classical heights until the renaissance, but new writings did advance rhetorical thought. Boethius (480?-524), in his brief Overview of the Structure of Rhetoric, continues Aristotle's taxonomy by placing rhetoric in subordination to philosophical argument or dialectic. One positive consequence of the Crusades was the introduction of Arab scholarship and renewed interest in Aristotle, leading to what some historians call the twelfth century renaissance. A number of medieval grammars and studies of poetry and rhetoric appeared.
Late medieval rhetorical writings include those of St. Thomas Aquinas (1225?-1274), Matthew of Vendome (Ars Versificatoria, 1175?), and Geoffrey of Vinsauf (Poetria Nova, 1200-1216). Pre-modern female rhetoricians, outside of Socrates' friend Aspasia, are rare; but medieval rhetoric produced by women either in religious orders, such as Julian of Norwich (d. 1415), or the very well-connected Christine de Pizan (1364?-1430?), did occur if not always recorded in writing.
In his 1943 Cambridge University doctoral dissertation in English, Canadian Marshall McLuhan (1911-1980) surveys the verbal arts from approximately the time of Cicero down to the time of Thomas Nashe (1567-1600?). His dissertation is still noteworthy for undertaking to study the history of the verbal arts together as the trivium, even though the developments that he surveys have been studied in greater detail since he undertook his study. As noted below, McLuhan became one of the most widely publicized thinkers in the 20th century, so it is important to note his scholarly roots in the study of the history of rhetoric and dialectic.
Another interesting record of medieval rhetorical thought can be seen in the many animal debate poems popular in England and the continent during the Middle Ages, such as The Owl and the Nightingale (13th century) and Geoffrey Chaucer's Parliament of Fowls (1382?).
One influential figure in the rebirth of interest in classical rhetoric was Erasmus (c.1466-1536). His 1512 work, De Duplici Copia Verborum et Rerum (also known as Copia: Foundations of the Abundant Style), was widely published (it went through more than 150 editions throughout Europe) and became one of the basic school texts on the subject. Its treatment of rhetoric is less comprehensive than the classic works of antiquity, but provides a traditional treatment of res-verba (matter and form): its first book treats the subject of elocutio, showing the student how to use schemes and tropes; the second book covers inventio. Much of the emphasis is on abundance of variation (copia means "plenty" or "abundance", as in copious or cornucopia), so both books focus on ways to introduce the maximum amount of variety into discourse. For instance, in one section of the De Copia, Erasmus presents two hundred variations of the sentence "Semper, dum vivam, tui meminero". Another of his works, the extremely popular The Praise of Folly, also had considerable influence on the teaching of rhetoric in the later sixteenth century. Its orations in favour of qualities such as madness spawned a type of exercise popular in Elizabethan grammar schools, later called adoxography, which required pupils to compose passages in praise of useless things.
Juan Luis Vives (1492 - 1540) also helped shape the study of rhetoric in England. A Spaniard, he was appointed in 1523 to the Lectureship of Rhetoric at Oxford by Cardinal Wolsey, and was entrusted by Henry VIII to be one of the tutors of Mary. Vives fell into disfavor when Henry VIII divorced Catherine of Aragon and left England in 1528. His best-known work was a book on education, De Disciplinis, published in 1531, and his writings on rhetoric included Rhetoricae, sive De Ratione Dicendi, Libri Tres (1533), De Consultatione (1533), and a rhetoric on letter writing, De Conscribendis Epistolas (1536).
It is likely that many well-known English writers would have been exposed to the works of Erasmus and Vives (as well as those of the Classical rhetoricians) in their schooling, which was conducted in Latin (not English) and often included some study of Greek and placed considerable emphasis on rhetoric. See, for example, T.W. Baldwin's William Shakspere's Small Latine and Lesse Greeke, 2 vols. (University of Illinois Press, 1944).
The mid-1500s saw the rise of vernacular rhetorics — those written in English rather than in the Classical languages; adoption of works in English was slow, however, due to the strong orientation toward Latin and Greek. A successful early text was Thomas Wilson's The Arte of Rhetorique (1553), which presents a traditional treatment of rhetoric. For instance, Wilson presents the five canons of rhetoric (Invention, Disposition, Elocutio, Memoria, and Utterance or Actio). Other notable works included Angel Day's The English Secretorie (1586, 1592), George Puttenham's The Arte of English Poesie (1589), and Richard Rainholde's Foundacion of Rhetorike (1563).
During this same period, a movement began that would change the organization of the school curriculum in Protestant and especially Puritan circles and lead to rhetoric losing its central place. A French scholar, Pierre de la Ramée, in Latin Petrus Ramus (1515-1572), dissatisfied with what he saw as the overly broad and redundant organization of the trivium, proposed a new curriculum. In his scheme of things, the five components of rhetoric no longer lived under the common heading of rhetoric. Instead, invention and disposition were determined to fall exclusively under the heading of dialectic, while style, delivery, and memory were all that remained for rhetoric. See Walter J. Ong, Ramus, Method, and the Decay of Dialogue: From the Art of Discourse to the Art of Reason (Harvard University Press, 1958; reissued by the University of Chicago Press, 2004, with a new foreword by Adrian Johns). Ramus, rightly accused of sodomy and erroneously of atheism, was martyred during the French Wars of Religion. His teachings, seen as inimical to Catholicism, were short-lived in France but found a fertile ground in the Netherlands, Germany and England.
One of Ramus' French followers, Audomarus Talaeus (Omer Talon), published his rhetoric, Institutiones Oratoriae, in 1544. This work provided a simple presentation of rhetoric that emphasized the treatment of style, and became so popular that it was mentioned in John Brinsley's (1612) Ludus literarius; or The Grammar Schoole as being the "most used in the best schooles." Many other Ramist rhetorics followed in the next half-century, and by the 1600s, their approach became the primary method of teaching rhetoric in Protestant and especially Puritan circles. See Walter J. Ong, Ramus and Talon Inventory (Harvard University Press, 1958); Joseph S. Freedman, Philosophy and the Arts in Central Europe, 1500-1700: Teaching and Texts at Schools and Universities (Ashgate, 1999). John Milton (1608-1674) wrote a textbook in logic or dialectic in Latin based on Ramus' work, which has now been translated into English by Walter J. Ong and Charles J. Ermatinger in The Complete Prose Works of John Milton (Yale University Press, 1982; 8: 206-407), with a lengthy introduction by Ong (144-205). The introduction is reprinted in Ong's Faith and Contexts (Scholars Press, 1999; 4: 111-41).
Ramism could not exert any influence on the established Catholic schools and universities, which remained by and large stuck in Scholasticism, or on the new Catholic schools and universities founded by members of the religious orders known as the Society of Jesus or the Oratorians, as can be seen in the Jesuit curriculum (in use right up to the 19th century, across the Christian world) known as the Ratio Studiorum, which Claude Pavur, S.J., has recently translated into English, with the Latin text in the parallel column on each page (St. Louis: Institute of Jesuit Sources, 2005). If the influence of Cicero and Quintilian permeates the Ratio Studiorum, it is through the lenses of devotion and the militancy of the Counter-Reformation. The Ratio was indeed imbued with a sense of the divine, of the incarnate logos, that is, of rhetoric as an eloquent and humane means to reach further devotion and further action in the Christian city, which was absent from Ramist formalism. The Ratio is, in rhetoric, the answer to St Ignatius Loyola's practice, in devotion, of "spiritual exercises". This complex oratorical-prayer system is absent from Ramism.
Francis Bacon (1561-1626), although not a rhetorician, contributed to the field in his writings. One of the concerns of the age was to find a suitable style for the discussion of scientific topics, which needed above all a clear exposition of facts and arguments, rather than the ornate style favored at the time. Bacon in his The Advancement of Learning criticized those who are preoccupied with style rather than "the weight of matter, worth of subject, soundness of argument, life of invention, or depth of judgment." On matters of style, he proposed that the style conform to the subject matter and to the audience, that simple words be employed whenever possible, and that the style should be agreeable. See Lisa Jardine, Francis Bacon: Discovery and the Art of Discourse (Cambridge University Press, 1975).
Thomas Hobbes (1588-1679) also wrote on rhetoric. Along with a shortened translation of Aristotle's Rhetoric, Hobbes also produced a number of other works on the subject. Sharply contrarian on many subjects, Hobbes, like Bacon, also promoted a simpler and more natural style that used figures of speech sparingly.
Perhaps the most influential development in English style came out of the work of the Royal Society (founded in 1660), which in 1664 set up a committee to improve the English language. Among the committee's members were John Evelyn (1620-1706), Thomas Sprat (1635-1713), and John Dryden (1631-1700). Sprat regarded "fine speaking" as a disease, and thought that a proper style should "reject all amplifications, digressions, and swellings of style" and instead "return back to a primitive purity and shortness" (History of the Royal Society, 1667).
While the work of this committee never went beyond planning, John Dryden is often credited with creating and exemplifying a new and modern English style. His central tenet was that the style should be proper "to the occasion, the subject, and the persons." As such, he advocated the use of English words whenever possible instead of foreign ones, as well as vernacular, rather than Latinate, syntax. His own prose (and his poetry) became exemplars of this new style.
At the turn of the twentieth century, there was a revival of rhetorical study manifested in the establishment of departments of rhetoric and speech at academic institutions, as well as the formation of national and international professional organizations. Theorists generally agree that a significant reason for the revival of the study of rhetoric was the renewed importance of language and persuasion in the increasingly mediated environment of the twentieth century (see Linguistic turn). The rise of advertising and of mass media such as photography, telegraphy, radio, and film brought rhetoric more prominently into people's lives.
Chaim Perelman was a philosopher of law, who studied, taught, and lived most of his life in Brussels. He was among the most important argumentation theorists of the twentieth century. His chief work is the Traité de l'argumentation - la nouvelle rhétorique (1958), with Lucie Olbrechts-Tyteca, which was translated into English as The New Rhetoric: A Treatise on Argumentation, by John Wilkinson and Purcell Weaver (1969). Perelman and Olbrechts-Tyteca move rhetoric from the periphery to the center of argumentation theory. Among their most influential concepts are "the universal audience," "quasi-logical argument," and "presence."
Henry Johnstone Jr. was an American philosopher and rhetorician known especially for his notion of the "rhetorical wedge" and his re-evaluation of the ad hominem fallacy. He was the founder and longtime editor of the journal Philosophy and Rhetoric.
Kenneth Burke was a rhetorical theorist, philosopher, and poet. Many of his works are central to modern rhetorical theory: A Rhetoric of Motives (1969), A Grammar of Motives (1945), Language as Symbolic Action (1966), and Counterstatement (1931). Among his influential concepts are "identification," "consubstantiality," and the "dramatic pentad."
Marshall McLuhan was a media theorist whose discoveries are important to the study of rhetoric. McLuhan's famous dictum "the medium is the message" highlighted the important role of the mass media in modern communication.
Rhetoric was part of the curriculum in Jesuit and, to a lesser extent, Oratorian colleges until the French Revolution. For Jesuits, right from the foundation of the Society in France, rhetoric was an integral part of the training of young men toward taking up leadership positions in the Church and in State institutions, as Marc Fumaroli has shown in his foundational Age de l’éloquence (1980). The Oratorians, by contrast, reserved a lesser place for it, in part due to the stress they placed on the acquisition of modern languages and a more sensualist philosophy (Bernard Lamy’s Rhetoric is an excellent example of their approach). Nonetheless, in the 18th century, rhetoric was the armature and crown of college education, with works such as Rollin’s Treatise of Studies achieving a wide and enduring fame across the Continent.
The French Revolution, however, turned this around. Philosophers like Condorcet, who drafted the French revolutionary chart for a people’s education under the rule of reason, dismissed rhetoric as an instrument of oppression in the hands of clerics in particular. The Revolution went as far as suppressing the Bar, arguing that forensic rhetoric did disservice to a rational system of justice, by allowing fallacies and emotions to come into play. Nonetheless, as later historians of the 19th century were keen to explain, the Revolution was a high moment of eloquence and rhetorical prowess, yet, against a background of rejection of rhetoric.
Under the First Empire and its wide-ranging educational reforms, imposed on or imitated across the Continent, rhetoric regained little ground. In fact, instructions to the newly founded Polytechnic School, tasked with training the scientific and technical elites, made it clear that written reporting was to supersede oral reporting. Rhetoric re-entered the college curriculum in fits and starts, but never regained the prominence it had enjoyed under the ancien régime, although the penultimate year of college education was known as the Class of Rhetoric. When manuals were redrafted in the mid-century, in particular after the 1848 Revolution, care was taken by the writers in charge of formulating a national curriculum to distance their approach to rhetoric from that of the Church, which was seen as an agent of conservatism and reactionary politics. By the end of the 1870s, a major change had taken place: philosophy of the rationalist or eclectic kind, by and large Kantian, had taken over from rhetoric as the true terminal stage in secondary education (the so-called Class of Philosophy bridged college and university education). Rhetoric was then relegated to the study of literary figures of speech, a discipline later taught as Stylistics within the French literature curriculum. More decisively, in 1890 a new standard written exercise superseded the rhetorical exercises of speech writing, letter writing, and narration. The new genre, called the dissertation, had been invented in 1866 for the purpose of rational argument in the philosophy class. Typically, in a dissertation, a question is asked, such as: "Is history a sign of humanity's freedom?" The structure of a dissertation consists of an introduction that elucidates the basic definitions involved in the question as set, followed by an argument or thesis, a counter-argument or antithesis, and a resolving argument or synthesis that is not a compromise between the former but the production of a new argument, ending with a conclusion that does not sum up the points but opens onto a new problem. The dissertation design was influenced by Hegelianism. It remains today the standard of writing in the humanities.
By the beginning of the 20th century, rhetoric was fast losing the remains of its former importance; it was taken out of the school curriculum altogether at the time of the separation of Church and State (1905). Part of the argument was indeed that rhetoric remained the last element of irrationality, driven by religious arguments, and was therefore perceived as inimical to Republican education. The move initiated in 1789 found its resolution in 1902, when rhetoric was expunged from all curricula. However, it must be noted that, at the same time, Aristotelian rhetoric, owing to a revival of Thomistic philosophy initiated by Rome, regained ground in what was left of Catholic education in France, in particular at the prestigious Faculty of Theology of Paris, now a private entity. Yet, for all intents and purposes, rhetoric vanished from the French scene, educational or intellectual, for some 60 years.
In the early 1960s a change began to take place, as the word rhetoric, along with the body of knowledge it covers, started to be used again, in a modest and nearly confidential way. The new linguistic turn, through the rise of semiotics and structural linguistics, brought to the fore a new interest in figures of speech as signs, the metaphor in particular (in the works of Roman Jakobson, Michel Charles, Gérard Genette), while the famed Structuralist Roland Barthes, a classicist by training, perceived how some basic elements of rhetoric could be of use in the study of narratives, fashion, and ideology. Knowledge of rhetoric was so dim in the early 1970s that his short memoir on rhetoric was seen as highly innovative. Basic as it was, it did help rhetoric regain some currency in avant-garde circles. The psychoanalyst Jacques Lacan, his contemporary, made references to rhetoric, in particular to the Pre-Socratics. The philosopher Jacques Derrida wrote on Voice.
However, at the same time, more profound work was taking place that, eventually, gave rise to the French school of rhetoric as it exists today.
This rhetorical revival took place on two fronts. Firstly, in the area of 17th century French studies, the mainstay of French literary education, awareness grew that rhetoric was necessary to push further the limits of knowledge, and also to provide an antidote to Structuralism and its denial of historicism in culture. This was the pioneering work of Marc Fumaroli who, building on the work of classicist and Neo-Latinist Alain Michel and French scholars such as Roger Zuber, published his famed Age de l'Eloquence (1980), was one of the founders of the International Society for the History of Rhetoric, and was eventually elevated to a chair in rhetoric at the prestigious College de France. He is the editor-in-chief of a monumental History of Rhetoric in Modern Europe. His disciples form the second generation, with rhetoricians such as Françoise Waquet and Delphine Denis, both of the Sorbonne, or Philippe-Joseph Salazar, until recently at Derrida's College international de philosophie. Secondly, in the area of Classical studies, Latin scholars, in the wake of Alain Michel, fostered a renewal in Cicero studies, breaking away from a purely literary reading of his orations, in an attempt to embed Cicero in European ethics, while, among Greek scholars, literary historian and philologist Jacques Bompaire, philologist and philosopher E. Dupréel and, somewhat later and in a more popular fashion, historian of literature Jacqueline de Romilly pioneered new studies in the Sophists and the Second Sophistic. The second generation of Classicists, often trained in philosophy as well (following Heidegger and Derrida, mainly), built on their work, with authors such as Marcel Detienne (now at Johns Hopkins), Nicole Loraux (d. 2006), Medievalist and logician Alain De Libera (Geneva), Ciceronian scholar Carlos Lévy (Sorbonne, Paris) and Barbara Cassin (Collège international de philosophie, Paris). Sociologist of science Bruno Latour and economist Romain Laufer may also be considered part of, or close to, this group. Links between the two strands, the literary and the philosophical, of the French school of rhetoric are strong and collaborative and bear witness to the revival of rhetoric in France.
Available online texts include: | http://www.reference.com/browse/rhetoric | 13 |
16 | When a class B is asked to provide some values to, or perform some assignment(s) for, another class A, many things can happen. In fact, there is an order that the actions should follow. For example, during the lifetime of a program, that is, while a program is running, a class may be holding a value it can provide to its client, but at another time that value may not be available anymore, for any reason; nothing strange, this is just the way it happens. Because different things can happen to a class B while a program is running, and because only class B would be aware of these, it must be able to signal to the other classes when there is a change. This is the basis of events: an event is an action that occurs on an object and affects it in a way that its clients must be made aware of.
An event is declared like a pseudo-variable but based on a delegate. Therefore, to declare an event, you must have a delegate that would implement it. To actually declare an event, you use the Event keyword with the following formula:
[modifier] Event Name As DelegateType

The modifier can be Public, Private, Protected, or Friend. The Event keyword is required and is followed by the name you want to give the event. The As clause specifies the event's type, and the type must be a delegate; if the delegate takes arguments, the procedures that handle the event must take the same arguments.
Here is an example that declares an event:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Display()

    Public Class Starter
        Inherits Form

        Private Event Evidence As Display
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()

        End Sub

    End Class

End Module
When the event occurs, its delegate is invoked. This specification is also referred to as hooking up an event. As the event occurs (or fires), the procedure that implements the delegate runs. This provides complete functionality for the event and makes the event ready to be used.
Before using an event, you must specify the procedure that will handle it. This procedure is referred to as a handler. Obviously, you must first have created that procedure. Here is an example:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Display()

    Public Class Starter
        Inherits Form

        Private Event Evidence As Display
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()

        End Sub

    End Class

    Private Sub Viewer()
        MsgBox("The Viewer")
    End Sub

End Module
To add a handler to the program, you use the AddHandler operator with the following formula:
AddHandler EventName, AddressOf Procedure
The AddHandler and AddressOf operators are required. The EventName placeholder is used to specify the name of the event that is being dealt with. The Procedure placeholder is the name of the procedure that will implement the event. Here is an example:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Display()

    Public Class Starter
        Inherits Form

        Private Event Evidence As Display
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()
            AddHandler Evidence, AddressOf Viewer
        End Sub

    End Class

    Private Sub Viewer()
        MsgBox("The Viewer")
    End Sub

End Module
After adding a handler for the event, the event is ready, but you must still launch its action. To do this, you can use the RaiseEvent operator with the following formula:
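RaiseEvent EventName()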
The RaiseEvent operator is required. The EventName placeholder is used to specify the name of the event, and it must be followed by parentheses. Here is an example:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Display()

    Public Class Starter
        Inherits Form

        Private Event Evidence As Display
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()
            AddHandler Evidence, AddressOf Viewer
            RaiseEvent Evidence()
        End Sub

    End Class

    Private Sub Viewer()
        MsgBox("The Viewer")
    End Sub

    Function Main() As Integer
        Dim frmStart As Starter = New Starter

        Application.Run(frmStart)

        Return 0
    End Function

End Module
The event we used in the previous sections did not take any argument. Just like a delegate, an event can take an argument. The primary rule to follow is that both its delegate and the procedure associated with it must take the same type of argument. The second rule is that, when raising the event, you must pass an appropriate argument to it. Here is an example:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Display(ByVal Message As String)

    Public Class Starter
        Inherits Form

        Private Event Evidence As Display
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()
            Dim m As String

            m = "Welcome to the wonderful world of events"

            AddHandler Evidence, AddressOf Viewer
            RaiseEvent Evidence(m)
        End Sub

    End Class

    Private Sub Viewer(ByVal msg As String)
        MsgBox(msg)
    End Sub

    Function Main() As Integer
        Dim frmStart As Starter = New Starter

        Application.Run(frmStart)

        Return 0
    End Function

End Module
Just like an event can take an argument, it can also take more than one argument. The primary rules are the same as those for a single parameter. You just have to remember that you are dealing with more than one argument. Here is an example:
Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    Private Delegate Sub Description(ByVal Name As String, _
                                     ByVal Salary As Double)

    Public Class Starter
        Inherits Form

        Private Event Recorder As Description
        Dim components As System.ComponentModel.Container

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()
            Dim FullName As String
            Dim HourlySalary As Double

            FullName = "Paul Bertrand Yamaguchi"
            HourlySalary = 24.55

            AddHandler Recorder, AddressOf ShowRecord
            RaiseEvent Recorder(FullName, HourlySalary)
        End Sub

    End Class

    Private Sub ShowRecord(ByVal id As String, ByVal wage As Double)
        MsgBox("Employee Name: " & id & vbCrLf & _
               "Hourly Salary: " & CStr(wage))
    End Sub

    Function Main() As Integer
        Dim frmStart As Starter = New Starter

        Application.Run(frmStart)

        Return 0
    End Function

End Module
Instead of a parameter of a primitive type, you can create an event that takes a class as argument.
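Although the original lesson does not include an example for this case, here is a minimal sketch of what it could look like; the Employee class, the Describe delegate, and the ShowEmployee procedure are hypothetical names introduced only for illustration:

Imports System.Drawing
Imports System.Windows.Forms

Module Exercise

    ' A small class whose instances will be passed to the event's handler
    Public Class Employee
        Public FullName As String
        Public HourlySalary As Double
    End Class

    ' The delegate's parameter is of a class type instead of a primitive type
    Private Delegate Sub Describe(ByVal Staff As Employee)

    Public Class Starter
        Inherits Form

        Private Event Recorder As Describe

        Public Sub New()
            InitializeComponent()
        End Sub

        Public Sub InitializeComponent()
            Dim Empl As Employee = New Employee

            Empl.FullName = "Paul Bertrand Yamaguchi"
            Empl.HourlySalary = 24.55

            AddHandler Recorder, AddressOf ShowEmployee
            RaiseEvent Recorder(Empl)
        End Sub

    End Class

    ' The handler receives the whole object and can read any of its members
    Private Sub ShowEmployee(ByVal Staff As Employee)
        MsgBox("Employee Name: " & Staff.FullName & vbCrLf & _
               "Hourly Salary: " & CStr(Staff.HourlySalary))
    End Sub

End Module

When the event is raised, the handler receives the whole object, so any of its members can be used without passing them one by one.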
An application is made of various objects or controls. During the lifetime of an application, its controls regularly send messages to the operating system to do something. These messages are similar to human messages and must be processed appropriately. Since more than one application is usually running on the computer at a time, the controls of those other applications also send messages to the operating system. Because the operating system is constantly asked to perform these assignments, and because so many requests can be presented unpredictably, the operating system leaves it up to the controls to specify what they want, when they want it, and what behavior or result they expect. These scenarios work by the controls sending events.
Events in the .NET Framework are implemented through the concepts of delegates and events as reviewed previously. The most common events have already been created for the objects of the .NET Framework controls, so much so that you will hardly need to define new events. Most of what you will do consists of implementing the desired behavior when a particular event fires. To start, you should know what events are available, when they work, how they work, and what they produce.
To process a message, it (the message) must provide at least two pieces of information: what caused the message and what type of message it is. Both values are passed as the arguments to the event. Since all controls used in the .NET Framework are based on the Object class, the first argument must be an Object type and represents the control that sent the message.
As mentioned already, each control sends its own messages when necessary. Based on this, some messages are unique to some controls according to their roles. Some other messages are common to various controls, as they tend to provide similar actions. To manage such various configurations, the .NET Framework considers the messages in two broad categories.
As it happens, in order to perform their intended action(s), some messages do not require much information. For example, suppose your heart sends a message to the arm and states, "Raise your hand". In this case, assuming everything is all right, the arm does not ask, "How do I raise my hand?" It simply does so, because it knows how, without any assistance. This type of message would be sent without much detailed information.
In the .NET Framework, a message that does not need particular information is carried by a class named EventArgs. In the event implementation, an EventArgs argument is passed as the second parameter.
Consider another message where the arm carries some water and says to the mouth, "Swallow the following water". The mouth would need the water that needs to be swallowed. Therefore, the message must be accompanied by additional information. Consider one more message where the heart says to the tongue, "Taste the following food but do not swallow it." In order to process this message, the tongue would need the food and something to indicate that the food must not be swallowed. In this case, the message must be accompanied by detailed pieces of information.
When a message must carry additional information, the control that sent the message specifies that information by the name of the second argument. Because there are various types of messages like that, there are also different types of classes used to carry such messages. We will introduce each class when appropriate.
Although there are different means of implementing an event, there are a few main ways you can initiate its coding. If the control has a default event and you double-click it, the designer will initiate the default event and open the Code Editor. The cursor will be positioned in the body of the event, ready to receive your instructions.
As another technique, display the form and click either the body of the form or a control on it. Then, on the Properties window, click the Events button, and double-click the name of the event you want to use.
Another alternative consists of displaying the form, right-clicking the form, and clicking View Code. Then, in the Class Name combo box, select the name of the control.
In the Method Name combo box, select the event you want to implement.
While an application is opening on the screen or needs to be shown, the operating system must display its controls. To do this, the controls' colors and other visual aspects must be retrieved and restored. This is done by painting the control. If the form that hosts the controls was hidden somewhere, such as behind another window, or was minimized, then when it comes up, the operating system needs to paint it (again).
When a control gets painted, it fires the Paint() event. The syntax of the Paint() event is:
Private Sub Control_Paint(ByVal sender As System.Object, _
                          ByVal e As System.Windows.Forms.PaintEventArgs) _
                          Handles MyBase.Paint

End Sub
This event is carried by a PaintEventHandler delegate declared as follows:
Public Delegate Sub PaintEventHandler( _
    sender As Object, _
    e As PaintEventArgs _
)
The PaintEventArgs parameter provides information about the area to be painted and the graphics object to paint.
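As a brief, hedged illustration that is not part of the original lesson, a Paint handler placed in a class derived from Form, such as the Starter class used above, can draw on the form through the Graphics object carried by the event (the text drawn here is arbitrary):

Private Sub Starter_Paint(ByVal sender As Object, _
                          ByVal e As System.Windows.Forms.PaintEventArgs) _
                          Handles MyBase.Paint
    ' e.Graphics exposes the drawing surface; e.ClipRectangle describes the area to repaint
    e.Graphics.DrawString("Welcome to events", Me.Font, _
                          System.Drawing.Brushes.Black, 10, 10)
End Sub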
When using an application, one of the actions a user can perform on a form or a control is to change its size, provided the object allows it. Also, from time to time, if possible, the user can minimize, maximize, or restore a window. Whenever any of these actions occurs, the operating system must keep track of the location and size of a control. For example, if a previously minimized or maximized window is being restored, the operating system must remember where the object was previously positioned and what its dimensions were.
When the size of a control has been changed, it fires the Resize() event, which is of type EventArgs.
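The original text does not show a handler for this event, but a minimal sketch, again assuming a class derived from Form like the earlier examples, could look like this (the title-bar text is arbitrary):

Private Sub Starter_Resize(ByVal sender As Object, _
                           ByVal e As System.EventArgs) _
                           Handles MyBase.Resize
    ' Display the new dimensions of the form in its title bar
    Me.Text = "Width: " & CStr(Me.Width) & " - Height: " & CStr(Me.Height)
End Sub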
A keyboard is a hardware object attached to the computer. By default, it is used to enter recognizable symbols, letters, and other characters on a control. Each key on the keyboard displays a symbol, a letter, or a combination of those, to give an indication of what the key could be used for.
The user typically presses a key, which sends a signal to a program. The signal is analyzed to find its meaning. If the program or control that has focus is equipped to deal with the signal, it may produce the expected result. If the program or control cannot figure out what to do, it ignores the action.
Each key has a code that the operating system can recognize.
When a keyboard key is pressed, a message called KeyDown is sent. KeyDown carries a KeyEventArgs argument and is interpreted through the KeyEventHandler delegate; the corresponding event's syntax is:
Public Event KeyDown As KeyEventHandler
The KeyDown event is carried by the KeyEventArgs class, defined in the System.Windows.Forms namespace, through the KeyEventHandler delegate:
Public Delegate Sub KeyEventHandler(sender As Object, e As KeyEventArgs)
When you initiate this event, its KeyEventArgs argument provides as much information as possible to implement an appropriate behavior.
As opposed to the KeyDown message, which is sent when a key goes down, the KeyUp message is sent when the user releases the key. This event also is of type KeyEventArgs:
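Public Event KeyUp As KeyEventHandler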
When the user presses a key, the KeyPress message is sent. Unlike the other two keyboard messages, the key pressed for this event should (must) be a character key. This event is handled through the KeyPressEventHandler delegate; the event's syntax is:
Public Event KeyPress As KeyPressEventHandler
The KeyPress event is carried by a KeyPressEventArgs class. The Handled property identifies whether this event was handled. The KeyChar property identifies the key that was pressed. It must be a letter or a recognizable symbol. Lowercase alphabetic characters, digits, and the lower base characters such as ; , " [ ] - = / are recognized as they are. For an uppercase letter or an upper base symbol, the user must press Shift + the key. The character is identified as one entity. This means that the symbol % typed with Shift + 5 is considered one character.
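As a hedged example that is not part of the original lesson (the digits-only rule is arbitrary and the handler name is hypothetical), a KeyPress handler can inspect KeyChar and set Handled to True to swallow a keystroke:

Private Sub Starter_KeyPress(ByVal sender As Object, _
                             ByVal e As System.Windows.Forms.KeyPressEventArgs) _
                             Handles MyBase.KeyPress
    ' Reject any character that is not a digit
    If Not Char.IsDigit(e.KeyChar) Then
        e.Handled = True
    End If
End Sub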
The mouse is another object that is attached to the computer allowing the user to interact with the machine. The mouse and the keyboard can each accomplish some tasks that are not normally available on the other or both can accomplish some tasks the same way.
The mouse is equipped with two, three, or more buttons. When a mouse has two buttons, one is usually located on the left and the other is located on the right. When a mouse has three buttons, one usually is in the middle of the other two. A mouse can also have a round object referred to as a wheel.
The mouse is used to select a point or position on the screen. Once the user has located an item, which could also be an empty space, a letter or a word, he or she would position the mouse pointer on it.
To actually use the mouse, the user would press either the left, the middle (if any), or the right button. If the user presses the left button once, this action is called Click. If the user presses the right mouse button, the action is referred to as Right-Click. If the user presses the left button twice and very fast, the action is called Double-Click.
If the mouse is equipped with a wheel, the user can position the mouse pointer somewhere on the screen and roll the wheel. This usually causes the document or page to scroll up or down, slow or fast, depending on how it was configured.
Before using a control with the mouse, the user must first position the mouse on it. When this happens, the control fires a MouseEnter event. Its syntax is:
Public Event MouseEnter As EventHandler
This event is carried by an EventArgs argument but does not provide much information, only to let you know that the mouse was positioned on a control.
Whenever the mouse is being moved on top of a control, a mouse event is sent. This event is called MouseMove and is of type MouseEventArgs. Its syntax is:
Public Event MouseMove As MouseEventHandler
To implement this event, a MouseEventArgs argument is passed to the MouseEventHandler event implementer:
Public Delegate Sub MouseEventHandler( _
    sender As Object, _
    e As MouseEventArgs)
The MouseEventArgs argument provides the necessary information about the event such as what button was clicked, how many times the button was clicked, and the location of the mouse.
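A small illustrative sketch, assuming a class derived from Form as in the earlier examples, shows how these values can be read in a MouseMove handler (showing the coordinates in the title bar is arbitrary):

Private Sub Starter_MouseMove(ByVal sender As Object, _
                              ByVal e As System.Windows.Forms.MouseEventArgs) _
                              Handles MyBase.MouseMove
    ' e.X and e.Y give the position of the pointer; e.Button identifies any pressed button
    Me.Text = "Mouse at " & CStr(e.X) & ", " & CStr(e.Y)
End Sub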
If the user positions the mouse on a control and hovers over it, a MouseHover event is fired:
Public Event MouseHover As EventHandler
This event is carried by an EventArgs argument that does not provide further information beyond the fact that the mouse is hovering over the control.
Imagine the user has located a position or an item on a document and presses one of the mouse buttons. While the button is pressed and is down, a button-down message is sent. This event is called MouseDown:
Public Event MouseDown As MouseEventHandler
The MouseDown event is of type MouseEventArgs. Like the other mouse events, the MouseDown event is carried by a MouseEventArgs argument.
After pressing a mouse button, the user usually releases it. When the button is released, a button-up message is sent, and which message is sent depends on the button, left or right, that was down. The event produced is MouseUp:
Public Event MouseUp As MouseEventHandler
Like the MouseDown message, the MouseUp event is of type MouseEventArgs which is passed to the MouseEventHandler for processing.
When the user moves the mouse pointer away from a control, the control fires a MouseLeave event:
Public Event MouseLeave As EventHandler
To support drag n' drop operations, the .NET Framework provides various events through the Control class. The object from which the operation starts is referred to as the source. This object may or may not be from your application. For example, a user can start a dragging operation from a file utility (such as Windows Explorer). The object or document where the drag operation must end is referred to as the target. Because you are creating an application where you want to allow the user to drop objects, the target is usually part of your application.
To start the dragging operation, the user clicks and holds the mouse on an object or text. Then the user starts moving the mouse. At this time, if the source object is from your application, the control fires a DragLeave event:
Public Event DragLeave As EventHandler
This event is of type EventArgs, which means it doesn't carry any information, just to let you know that a dragging operation has started.
The user drags an object from one location to another. Between the source and the target, the user may pass over another object that is not the target. If this happens, the control that is being passed over fires a DragOver event:
Public Event DragOver As DragEventHandler
The DragOver event is handled by a class named DragEventArgs. Most of the time, you will not be concerned with this event because most drag n' drop operations involve only a source and a target. For this reason, we will not review this class at this time.
At some point, the user reaches the target, that is, the position just before dropping the item. At this time, the control over which the mouse is positioned fires the DragEnter event:
Public Event DragEnter As DragEventHandler
At this time, the user has not yet decided whether to drop the object. This is important because, at this point, you can still make some decisions, such as identifying the type of item the user is carrying, whether you want to allow this operation, etc.
The DragEnter event is handled by the DragEventArgs class. One of the properties of this class is named Effect. This property is of type DragDropEffects, which is an enumeration.
Probably the most important piece of information you want to know is what the user is carrying. For example, a user can drag text, a picture, or a document, etc. To assist you with identifying what the mouse is holding, the DragEventArgs class is equipped with the Data property, which is of type IDataObject.
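Here is a hedged sketch that combines the Effect and Data properties; it is not part of the original lesson, it assumes the form's AllowDrop property has been set to True, and the text-only rule is arbitrary:

Private Sub Starter_DragEnter(ByVal sender As Object, _
                              ByVal e As System.Windows.Forms.DragEventArgs) _
                              Handles MyBase.DragEnter
    ' Allow the drop only if the item being dragged contains text
    If e.Data.GetDataPresent(DataFormats.Text) Then
        e.Effect = DragDropEffects.Copy
    Else
        e.Effect = DragDropEffects.None
    End If
End Sub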
Once the user has reached the target and still wants to complete the operation, he or she must release the mouse. At this time, the object on which the mouse is positioned fires the DragDrop event:
Public Event DragDrop As DragEventHandler
The DragDrop event is handled by the DragEventArgs class.
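Continuing the hypothetical sketch above, the DragDrop handler can then retrieve what was dropped and use it:

Private Sub Starter_DragDrop(ByVal sender As Object, _
                             ByVal e As System.Windows.Forms.DragEventArgs) _
                             Handles MyBase.DragDrop
    ' Read the dropped data as text and display it
    Dim Dropped As String = CStr(e.Data.GetData(DataFormats.Text))
    MsgBox(Dropped)
End Sub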
It is possible, but unlikely, that none of the available events featured in the controls of the .NET Framework suits your scenario. If this happens, you can implement your own event. To do this, you should first consult the Win32 documentation to identify the type of message you want to send.
There are two main techniques you can use to create or send a message that is not available in a control. You may also want to provide your own implementation of a message.
In order to send a customized version of a Windows message from your control, you must first be familiar with the message. A message in the .NET Framework is based on the Message structure. One of the properties of this structure is Msg. This property holds a constant integer that is the message to send. The constant properties of messages are defined in the Win32 library. To send a message, you can declare a variable of type Message and define it. Once the variable is ready, you can pass it to the DefWndProc() method. Its syntax is:
Protected Overridable Sub DefWndProc(ByRef m As Message)
To know the various messages available, you can consult the Win32 documentation, but you need a way to get the constant value of a message. Imagine you want to send a message to close a form when the user clicks a certain button named Button1. If you have Microsoft Visual Studio (any version) installed on your computer, you can open the WINUSER.H file. In this file, the WM_CLOSE message that carries a close action is defined with the hexadecimal constant 0x0010.
You can then define a constant integer in your code and initialize it with this same value.
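The following is a minimal sketch of that idea, not taken from the original lesson; it assumes a form that contains a button named Button1 declared WithEvents, and the names are only illustrative:

Imports System.Windows.Forms

Public Class Starter
    Inherits Form

    ' WM_CLOSE as defined in WINUSER.H (0x0010)
    Private Const WM_CLOSE As Integer = &H10

    Private WithEvents Button1 As Button

    Public Sub New()
        ' Create the button and place it on the form so it can be clicked
        Button1 = New Button
        Button1.Text = "Close"
        Controls.Add(Button1)
    End Sub

    Private Sub Button1_Click(ByVal sender As Object, _
                              ByVal e As System.EventArgs) _
                              Handles Button1.Click
        ' Build the Windows message and let the default window procedure process it
        Dim m As Message = Message.Create(Me.Handle, WM_CLOSE, _
                                          IntPtr.Zero, IntPtr.Zero)
        Me.DefWndProc(m)
    End Sub

End Class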
To process a Windows message that is not available for a control you want to use in your application, you can implement its WndProc() method. Its syntax is:
Protected Overridable Sub WndProc(ByRef m As Message)
In order to use this method, you must override it in your own class. Once again, you must know the message you want to send. This can be done by consulting the Win32 documentation. | http://www.functionx.com/vb/topics/events.htm | 13 |
90 | Prepared by Nicole Strangman & Tracey Hall
National Center on Accessing the General Curriculum
Note: Links have been updated on 8/24/09
Many people associate virtual reality and computer simulations with science fiction, high-tech industries, and computer games; few associate these technologies with education. But virtual reality and computer simulations have been in use as educational tools for some time. Although they have mainly been used in applied fields such as aviation and medical imaging, these technologies have begun to edge their way into the primary classroom. There is now a sizeable research base addressing the effectiveness of virtual reality and computer simulations within school curriculum. The following five sections present a definition of these technologies, a sampling of different types and their curriculum applications, a discussion of the research evidence for their effectiveness, useful Web resources, and a list of referenced research articles.
Definition and Types
Computer simulations are computer-generated versions of real-world objects (for example, a skyscraper or chemical molecules) or processes (for example, population growth or biological decay). They may be presented in 2-dimensional, text-driven formats, or, increasingly, 3-dimensional, multimedia formats. Computer simulations can take many different forms, ranging from computer renderings of 3-dimensional geometric shapes to highly interactive, computerized laboratory experiments.
Virtual reality is a technology that allows students to explore and manipulate computer-generated, 3-dimensional, multimedia environments in real time. There are two main types of virtual reality environments. Desktop virtual reality environments are presented on an ordinary computer screen and are usually explored by keyboard, mouse, wand, joystick, or touchscreen. Web-based "virtual tours" are an example of a commonly available desktop virtual reality format. Total immersion virtual reality environments are presented on multiple, room-size screens or through a stereoscopic, head-mounted display unit. Additional specialized equipment such as a DataGlove (worn as one would a regular glove) enables the participant to interact with the virtual environment through normal body movements. Sensors on the head unit and DataGlove track the viewer's movements during exploration and provide feedback that is used to revise the display, enabling real-time, fluid interactivity. Examples of virtual reality environments are a virtual solar system that enables users to fly through space and observe objects from any angle, a virtual science experiment that simulates the growth of microorganisms under different conditions, a virtual tour of an archeological site, and a recreation of the Constitutional Convention of 1787.
Applications Across Curriculum Areas
Computer simulations and virtual reality offer students the unique opportunity of experiencing and exploring a broad range of environments, objects, and phenomena within the walls of the classroom. Students can observe and manipulate normally inaccessible objects, variables, and processes in real-time. The ability of these technologies to make what is abstract and intangible concrete and manipulable suits them to the study of natural phenomena and abstract concepts, "(VR) bridges the gap between the concrete world of nature and the abstract world of concepts and models (Yair, Mintz, & Litvak, 2001, p.294)." This makes them a welcome alternative to the conventional study of science and mathematics, which require students to develop understandings based on textual descriptions and 2-D representations.
The concretizing of objects (atoms, molecules, and bacteria, for example) makes learning more straightforward and intuitive for many students and supports a constructivist approach to learning. Students can learn by doing rather than, for example, by reading. They can also test theories by developing alternative realities. This greatly facilitates the mastery of difficult concepts, for example the relation between distance, motion, and time (Yair et al.).
It is not therefore surprising that math and science applications are the most frequent to be found in the research literature. Twenty-two of the thirty-one studies surveyed in this review of the literature investigated applications in science; 6 studies investigated math applications. In contrast, only one study investigated applications in the humanities curriculum (specifically, history and reading). The two remaining studies addressed generalized skills independent of a curriculum area.
It is important to keep in mind, however, when reading this review, that virtual reality and computer simulations offer benefits that could potentially extend across the entire curriculum. For example, the ability to situate students in environments and contexts unavailable within the classroom could be beneficial in social studies, foreign language and culture, and English curricula, enabling students to immerse themselves in historical or fictional events and foreign cultures and explore them first hand. With regard to language learning, Schwienhorst (2002) notes numerous benefits of virtual reality, including the allowance of greater self-awareness, support for interaction, and the enabling of real-time collaboration (systems can be constructed to allow individuals in remote locations to interact in a virtual environment at the same time) (Schwienhorst, 2002).
The ability of virtual reality and computer simulations to scaffold student learning (Jiang & Potter, 1994; Kelly, 1997-98), potentially in an individualized way, is another characteristic that well suits them to a range of curriculum areas. An illustrative example of the scaffolding possibilities is a simulation program that records data and translates between notation systems for the student, so that he or she can concentrate on the targeted skills of learning probability (Jiang & Potter, 1994). The ability for students to revisit aspects of the environment repeatedly also helps put students in control of their learning. The multisensory nature can be especially helpful to students who are less visual learners and those who are better at comprehending symbols than text. With virtual environments, students can encounter abstract concepts directly, without the barrier of language or symbols. Moreover, computer simulations and virtual environments are highly engaging: "There is simply no other way to engage students as virtual reality can (Sykes & Reid, 1999, p.61)." Thus, although math and science are the most frequently researched applications of these two technologies, humanities applications clearly merit the same consideration.
Evidence for Effectiveness
In the following sections, we discuss the evidence for the effectiveness of virtual reality and computer simulations based on an extensive survey of the literature published between 1980 and 2002. This survey included 31 research studies conducted in K-12 education settings and published in peer-reviewed journals (N=27) or presented at conferences (N=3) (it was necessary to include conference papers due to the low number of virtual reality articles in peer-reviewed journals). Every attempt was made to be fully inclusive but some studies could not be accessed in a timely fashion. Although the research base is somewhat small, particularly in the case of virtual reality, it provides some useful insights.
Numerous commentaries and/or descriptions of virtual reality projects in education have been published. Research studies are still relatively rare. We identified only 3 research investigations of virtual reality in the K-12 classroom: one journal article (Ainge, 1996) and two conference papers (Song, Han, & Yul Lee, 2000; Taylor, 1997).
Taylor's (1997) research was directed at identifying variables that influence students' enjoyment of virtual reality environments. After visiting a virtual reality environment, the 2,872 student participants (elementary, middle, and high school) rated the experience by questionnaire. Their responses were indicative of high levels of enjoyment throughout most of the sample. However, responses also indicated the need for further development of the interface both to improve students' ability to see in the environment and to reduce disorientation. Both factors were correlated with ratings of the environment's presence or authenticity, which itself was tightly associated with enjoyment. It's uncertain whether these technical issues remain a concern with today's virtual reality environments, which have certainly evolved since the time this study was published.
Whether or not virtual reality technology has yet been optimized to promote student enjoyment, it appears to have the potential to favorably impact the course of student learning. Ainge (1996) and Song et al. both provide evidence that virtual reality experiences can offer an advantage over more traditional instructional experiences at least within certain contexts. Ainge showed that students who built and explored 3D solids with a desktop virtual reality program developed the ability to recognize 3D shapes in everyday contexts, whereas peers who constructed 3D solids out of paper did not. Moreover, students working with the virtual reality program were more enthusiastic during the course of the study (which was, however, brief - 4 sessions). Song et al. reported that middle school students who spent part of their geometry class time exploring 3-D solids were significantly more successful at solving geometry problems that required visualization than were peers taught geometry by verbal explanation. Both studies, however, seem to indicate that the benefits of virtual reality experiences are often limited to very specific skills. For example, students taught by a VR approach were not any more effective at solving geometry problems that did not require visualization (Song et al.).
Clearly, the benefits of virtual reality experiences need to be defined in a more comprehensive way. For example, although numerous authors have documented student enjoyment of virtual reality (Ainge, 1996; Bricken & Byrne, 1992; Johnson, Moher, Choo, Lin, & Kim, 2002; Song et al.), it is still unclear whether virtual reality can offer more than transient appeal for students. Also, the contexts in which it can be an effective curriculum enhancement are still undefined. In spite of the positive findings reported here, at this point it would be premature to make any broad or emphatic recommendations regarding the use of virtual reality as a curriculum enhancement.
There is substantial research reporting computer simulations to be an effective approach for improving students' learning. Three main learning outcomes have been addressed: conceptual change, skill development, and content area knowledge.
Conceptual change. One of the most interesting curriculum applications of computer simulations is the generation of conceptual change. Students often hold strong misconceptions, be they historical, mathematical, grammatical, or scientific. Computer simulations have been investigated as a means to help students confront and correct these misconceptions, which often involve essential learning concepts. For example, Zietsman & Hewson (1986) investigated the impact of a microcomputer simulation on students' misconceptions about the relationship between velocity and distance, fundamental concepts in physics. Conceptual change in the science domain has been the primary target for these investigations, although we identified one study situated within the mathematics curriculum (Jiang & Potter, 1994). All 3 studies that we directly reviewed (Jiang & Potter, 1994; Kangassalo, 1994; Zietsman & Hewson, 1986) supported the potential of computer simulations to help accomplish needed conceptual change. Stratford (1997) discusses additional evidence of this kind (Brna, 1987; Gorsky & Finegold, 1992) in his review of computer-based model research in precollege science classrooms.
The quality of this research is, however, somewhat uneven. Lack of quantitative data (Brna, 1987; Jiang & Potter, 1994; Kangassalo, 1994) and control group(s) (Brna, 1987; Gorsky & Finegold, 1992; Jiang & Potter, 1994; Kangassalo, 1994) are recurrent problems. Nevertheless, there is a great deal of corroboration in this literature that computer simulations have considerable potential in helping students develop richer and more accurate conceptual models in science and mathematics.
Skill development. A more widely investigated outcome measure in the computer simulation literature is skill development. Of 12 studies, 11 reported that the use of computer simulations promoted skill development of one kind or another. The majority of these simulations involved mathematical or scientific scenarios (for example, a simulation of chemical molecules and a simulation of dice and spinner probability experiments), but a few incorporated other topic areas such as history (a digital text that simulated historical events and permitted students to make decisions that influenced outcomes) and creativity (a simulation of Lego block building). Skills reported to be improved include reading (Willing, 1988), problem solving (Jiang & Potter, 1994; Rivers & Vockell, 1987), science process skills (e.g. measurement, data interpretation, etc.; (Geban, Askar, & Ozkan, 1992; Huppert, Lomask, & Lazarowitz, 2002), 3D visualization (Barnea & Dori, 1999), mineral identification (Kelly, 1997-98), abstract thinking (Berlin & White, 1986), creativity (Michael, 2001), and algebra skills involving the ability to relate equations and real-life situations (Verzoni, 1995).
Seven (Barnea & Dori, 1999; Berlin & White, 1986; Huppert et al.; Kelly, 1997-98; Michael, 2001; Rivers & Vockell, 1987) of these twelve studies incorporated control groups enabling comparison of the effectiveness of computer simulations to other instructional approaches. Generally, they compared simulated explorations, manipulations, and/or experiments to hands-on versions involving concrete materials. The results of all 7 studies suggest that computer simulations can be implemented to as good or better effect than existing approaches.
There are interpretive questions, however, that undercut some of these studies' findings. One of the more problematic issues is that some computer simulation interventions have incorporated instructional elements or supports (Barnea & Dori, 1999; Geban et al.; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997) that are not present in the control treatment intervention. This makes it more difficult to attribute any advantage of the experimental treatment to the computer simulation per se. Other design issues, such as failure to randomize group assignment (Barnea & Dori, 1999; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997; Verzoni, 1995; none of these studies specified that they used random assignment) and the use of ill-documented, qualitative observations (Jiang & Potter, 1994; Mintz, 1993; Willing, 1988), weaken some of the studies. When several of these flaws are present in the same study (Barnea & Dori, 1999; Kelly, 1997-98; Rivers & Vockell, 1987; Vasu & Tyler, 1997), the findings should be weighted more lightly. Even excluding such studies, however, the evidence in support of computer simulations still outweighs that against them.
Two studies reported no effect of computer simulation use on skill development (Mintz, 1993, hypothesis testing; Vasu & Tyler, 1997, problem solving). However, neither of these studies is particularly strong. Mintz (1993) presented results from a small sample of subjects and based conclusions on only qualitative, observational data. Vasu & Tyler (1997) provide no detailed information about the nature of the simulation program investigated in their study or how students interacted with it, making it difficult to evaluate their findings.
Thus, as a whole, there is good support for the ability of computer simulations to improve various skills, particularly science and mathematics skills. Important questions do remain. One of the more important questions future studies should address is the degree to which two factors, computer simulations' novelty and training for involved teachers and staff, are fundamental to realizing the benefits of this technology.
Content area knowledge. Another potential curriculum application for computer simulations is the development of content area knowledge. According to the research literature, computer programs simulating topics as far ranging as frog dissection, a lake's food chain, microorganismal growth, and chemical molecules, can be effectively used to develop knowledge in relevant areas of the curriculum. Eleven studies in our survey investigated the impact of working with a computer simulation on content area knowledge. All 11 researched applications for the science curriculum, targeting, for example, knowledge of frog anatomy and morphology, thermodynamics, chemical structure and bonding, volume displacement, and health and disease. Students who worked with computer simulations significantly improved their performance on content-area tests (Akpan & Andre, 2000; Barnea & Dori, 1999; Geban et al.; Yildiz & Atkins, 1996). Working with computer simulations was in nearly every case as effective (Choi & Gennaro, 1987; Sherwood & Hasselbring, 1985/86) or more effective (Akpan & Andre, 2000; Barnea & Dori, 1999; Geban et al.; Huppert et al.; Lewis, Stern, & Linn, 1993; Woodward, Carnine, & Gersten, 1988) than traditional, hands-on materials for developing content knowledge.
Only two studies (Bourque & Carlson, 1987; Kinzer, Sherwood, & Loofbourrow, 1989) report an inferior outcome relative to traditional learning methods. Both studies failed to include a pretest, without which it is difficult to interpret posttest scores. Students in the simulation groups may have had lower posttest scores and still have made greater gains over the course of the experiment because they started out with less knowledge. Or they may have had more knowledge than their peers, resulting in a ceiling effect. Moreover, Bourque & Carlson (1997) designed their experiment in a way that may have confounded the computer simulation itself with other experimental variables. Students who worked off the computer took part in activities that were not parallel to those experienced by students working with computer simulations. Only students in the hands-on group were engaged in a follow-up tutorial and post-lab problem solving exercise.
Experimental flaws such as these are also problematic for many of the 11 studies that support the benefits of using computer simulations. Neither Choi & Gennaro (1987), Sherwood and Hasselbring (1985/86), nor Woodward et al. included a pretest. Like Bourque & Carlson (1997, above) both Akpan & Andre (2000) and Barnea & Dori (1999) introduced confounding experimental variables by involving the computer simulation group in additional learning activities (filling out a keyword and definition worksheet and completing a self study, review and quiz booklet, respectively). In addition, four studies (Barnea & Dori, 1999; Huppert et al.; Woodward et al.; Yildiz & Atkins, 1996) did not clearly indicate that they randomized assignment, and two did not include a control group (Lewis et al.; Yildiz & Atkins, 1996).
Little of the evidence to support computer simulations' promotion of content knowledge is ironclad. Although further study is important to replicate these findings, the quality of evidence is nevertheless on par with that supporting the use of traditional approaches. Taking this perspective, there is reasonably good support for the practice of using computer simulations as a supplement to or in place of traditional approaches for teaching content knowledge. However, the same questions mentioned above in discussing the skill development literature linger here and need to be addressed in future research.
Factors Influencing Effectiveness
Factors influencing the effectiveness of computer simulations have not been extensively or systematically examined. Below we identify a number of likely candidates, and describe whatever preliminary evidence exists for their influence on successful learning outcomes.
At this point, it appears that computer simulations can be effectively implemented across a broad range of grade levels. Successful learning outcomes have been demonstrated for elementary (Berlin & White, 1986; Jiang & Potter, 1994; Kangassalo, 1994; Kinzer et al.; Park, 1993; Sherwood & Hasselbring, 1985/86; Vasu & Tyler, 1997; Willing, 1988), junior high (Akpan & Andre, 2000; Choi & Gennaro, 1987; Jackson, 1997; Jiang & Potter, 1994; Lewis et al.; Michael, 2001; Roberts & Blakeslee, 1996; Verzoni, 1995; Willing, 1988) and high school students (Barnea & Dori, 1999; Bourque & Carlson, 1987; Geban et al.; Huppert et al.; Jiang & Potter, 1994; Kelly, 1997-98; Mintz, 1993; Rivers & Vockell, 1987; Ronen & Eliahu, 1999; Willing, 1988; Woodward et al.; Yildiz & Atkins, 1996; Zietsman & Hewson, 1986). Because the majority of studies (14/27) have targeted junior high and high school populations, there is weightier support for these grade levels. But although fewer in number, studies targeting students in grades 4 through 6 are also generally supportive of the benefits of using computer simulations. At this point, the early grades, 1-3 (Kangassalo, 1994), are too poorly represented in the research base to draw any conclusions about success of implementation.
Only one study has directly examined the impact of grade level on the effectiveness of using computer simulations. Berlin & White (1986) found no significant difference in the effectiveness of this approach for 2nd and 4th grade students. In the absence of other direct comparisons, a meta-analysis of existing research to determine the average effect size for different grade levels would help to determine whether this is a strong determinant of the effectiveness of computer simulations.
Looking across students, even just those considered to represent the "middle" of the distribution, there are considerable differences in their strengths, weaknesses, and preferences (Rose & Meyer, 2002). Characteristics at both the group and individual level have the potential to influence the impact of any learning approach. Educational group, prior experience, gender, and a whole variety of highly specific traits such as intrinsic motivation and cognitive operational stage are just a few examples. Although attention to such factors has been patchy at best, there is preliminary evidence to suggest that some of these characteristics may influence the success of using computer simulations.
With respect to educational group, the overwhelming majority of research studies have sampled subjects in the general population, making it difficult to determine whether educational group in any way influences the effectiveness of computer simulations. Only two studies (Willing, 1988; Woodward et al.) specifically mention the presence of students with special needs in their sample. Neither study gets directly at the question of whether educational group influences the effectiveness of computer simulations. However, they do make some interesting and important observations. Willing (1988) describes her sample of 222 students as comprising mostly students who were considered average, but also special education students, students with learning disabilities, and students who were gifted. Although Willing does not thoroughly address educational group in her presentation and analysis of the results, she does share a comment by one of the teachers that even less able readers seemed at ease reading when using the interactive historical text. Findings from Woodward et al. suggest not only that computer simulations can be effective for students with learning disabilities but that they may help to normalize these students' performance to that of more average-performing peers. Students with learning disabilities who worked with a computer simulation outperformed students without learning disabilities who did not receive any treatment. In contrast, untreated students without learning disabilities outperformed students without learning disabilities who took part in a control intervention consisting of conventional, teacher-driven activities.
Like educational group, gender is a factor sometimes associated with disparate achievement, particularly in math and science subject areas. In relation to the impact of computer simulations, however, it does not appear to be an important factor. Four studies in our review (Barnea & Dori, 1999; Berlin & White, 1986; Choi & Gennaro, 1987; Huppert et al.) directly examined the influence of gender on the outcome of working with computer simulations, and none demonstrated any robust relationship. In fact, a study by Choi & Gennaro (1987) suggests that when gender gaps in achievement exist, they persist during the use of computer simulations.
In contrast, there is evidence, although at this point isolated, that prior achievement can strongly influence the effectiveness of computer simulations. Yildiz & Atkins (1996) examined how prior achievement in science influences the outcome of working with different types of multimedia computer simulations. Students' prior achievement clearly affected the calculated effect size but how so depended on the type of computer simulation. These findings raise the possibility of very complex interactions between prior achievement and the type of computer simulation being used. They suggest that both factors may be essential for teachers to consider when weighing the potential benefits of implementing computer simulations.
Huppert et al. investigated whether students' cognitive stage might influence how much they profit from working with a computer simulation. Working with a computer simulation of microorganismal growth differentially affected students' development of content understanding and science process skill depending on their cognitive stage. Interestingly, those at the highest cognitive stage (formal operational) experienced little improvement from working with the simulation, whereas students at the concrete or transitional operational stages notably improved. Thus, reasoning ability may be another factor influencing the usefulness of a computer simulation to a particular student.
There are many more potentially important variables that have rarely been considered or even described in research studies. For example, only a small number of studies have specified whether subjects are experienced (Choi & Gennaro, 1987; Yildiz & Atkins, 1996) or not (Bourque & Carlson, 1987) with using computers in the classroom. None have directly examined this variable's impact. More thoroughly describing the characteristics of sample populations would be an important first step toward sorting out such potentially important factors.
Teacher Training and Support
Given the unevenness of teachers' technology preparedness, training and support in using computer simulations seems like a potentially key factor in the effectiveness of using computer simulations in the classroom. As is the case with many of the other variables we've mentioned, few studies have described with much clarity or detail the nature of teacher training and support. Exceptions are Rivers & Vockell (1987) and Vasu & Tyler (1997), both of whom give quite thorough descriptions of staff development and available resources. This is another area that merits further investigation.
It has been suggested that combining computer simulation work with hands-on work may produce a better learning outcome than either method alone. Findings from Bourque & Carlson (1987) support this idea. They found that students performed best when they engaged in hands-on experimentation followed by computer simulation activities. However, Akpan & Andre (2000) report that students learned as much doing the simulated dissection alone as they did doing both the simulated and real dissection. This is an interesting question but one that will require additional research to squarely address.
Links to Learn More About Virtual Reality & Computer Simulations
Virtual Reality Society
The Virtual Reality Society (VRS), founded in 1994, is an international group dedicated to the discussion and advancement of virtual reality and synthetic environments. Its activities include the publication of an international journal and the organization of special interest groups, conferences, seminars, and tutorials. This web site contains a rich history of article listings and publications on virtual reality.
Virtual Reality and Education Laboratory
This is the homepage of the Virtual Reality and Education Laboratory at East Carolina University in Greenville, North Carolina. The Virtual Reality and Education Laboratory (VREL) was created in 1992 to research virtual reality (VR) and its applications to the K-12 curriculum. Many projects are being conducted through VREL by researchers Veronica Pantelidis and Dr. Lawrence Auld. This web site provides links to VR in the Schools, an internationally refereed journal distributed via the Internet. There are additional links to VR sites recommended by these authors as exemplars and interesting sites.
Virtual Reality Resources for K-12 Education
The NCSA Education & Outreach Group has compiled this web site containing links to multiple sites with information and educational materials on virtual reality for kindergarten through 12th-grade classrooms.
Virtual Reality in Education: Learning in Virtual Reality
In collaboration with the National Center for Supercomputing Applications, the University of Illinois at Urbana-Champaign has created a five-year program to examine virtual reality (VR) in the classroom. One of the goals behind this program is to discover how well students can generalize their VR learning experiences outside of the classroom. This web site provides an explanation of the project with links to additional resources and Projects.
Human Interface Technology Laboratory, Washington Technology Center in Seattle
This web site is the home of the Human Interface Technology Laboratory of the Washington Technology Center in Seattle, Washington. Various Virtual Reality (VR) articles and books are referenced. In addition to the list of articles and books, the technology center provides a list of internet resources including organizations that are doing research on VR, VR simulation environments and projects about various aspects of virtual reality.
Applied Computer Simulation Lab, Oregon Research Institute
This web site is from the Oregon Research Institute. The researchers at the Applied Computer Simulation Lab have created virtual reality (VR) programs that help physically disabled children operate motorized wheelchairs successfully. This website connects the reader to articles and information about these VR projects. Another project that this team is working on involves creating virtual reality programs for deaf blind students to help them "learn orientation and mobility skills in three dimensional acoustical spaces."
Ainge, D. J. (1996). Upper primary students constructing and exploring three dimensional shapes: A comparison of virtual reality with card nets. Journal of Educational Computing Research, 14(4), 345-369.
Ainge presents information from a study that involved students in grades five, six, and seven. The experimental group contained twenty students and the control group contained eleven. The program was the VREAM Virtual Reality Development System, which allows for easy construction of 3D shapes. Ease of using virtual reality (VR) and student engagement with VR were observed informally. VR had little impact on shape visualization and name writing, but enhanced recognition. Students had no difficulty in using the VREAM program, and the students' enthusiasm for virtual reality was unanimous and sustained. The author cautions that the positive results from this study must be regarded as tentative because of the small number of participants.
Akpan, J. P., & Andre, T. (2000). Using a computer simulation before dissection to help students learn anatomy. Journal of Computers in Mathematics and Science Teaching, 19 (3), 297-313.
Akpan and Andre examine the prior use of a simulation of frog dissection in improving students' learning of frog anatomy and morphology. The study included 127 students ranging in age from 13 to 15 who were enrolled in a seventh-grade life science course in a middle school. The students had some experience in animal dissection but no experience in the use of simulated dissection. There were four experimental conditions: simulation before dissection (SBD), dissection before simulation (DBS), simulation only (SO), or dissection only (DO). Students completed a pretest three weeks prior to the experiment and a posttest four days after the dissection was completed. Results of the study indicate that students receiving SBD and SO learned significantly more anatomy than students receiving DBS or DO. The authors suggest that computer-based simulations can offer a suitable cognitive environment in which students search for meaning, appreciate uncertainty, and acquire responsibility for their own learning.
Barnea, N., & Dori, Y. J. (1999). High-school chemistry students' performance and gender differences in a computerized molecular modeling learning environment. Journal of Science Education and Technology, 8(4), 257-271.
The authors examined a new computerized molecular modeling (CMM) approach to teaching and learning chemistry in Israeli high schools. The study included three tenth-grade experimental classes using the CMM approach; two other classes, which studied the same topic using a traditional approach, served as a control group. The authors investigated the effects of using molecular modeling on students' spatial ability, understanding of new concepts related to geometric and symbolic representations, and students' perception of the model concept. In addition, each variable was examined for gender differences. Students in the experimental group performed better than control group students in all three performance areas. In most of the achievement and spatial ability tests no significant gender differences were found, but differences existed in some aspects of model perception and verbal argumentation. Teachers' and students' feedback on the CMM learning environment was positive, as it helped them understand concepts in molecular geometry and bonding.
Berlin, D., & White, A. (1986). Computer simulations and the transition from concrete manipulation of objects to abstract thinking in elementary school mathematics. School Science and Mathematics, 86(6), 468-479.
In this article, the authors investigated the effects of combining interactive microcomputer simulations and concrete activities on the development of abstract thinking in elementary school mathematics. The students represented populations from two different socio-cultural backgrounds, including 57 black suburban students and 56 white rural students. There were three levels of treatment: (a) concrete-only activities, (b) a combination of concrete and computer simulation activities, and (c) computer simulation-only activities. At the end of the treatment period, two paper-and-pencil instruments requiring reflective abstract thought were administered to all the participants. Results indicate that concrete and computer activities have different effects on children depending upon their socio-cultural background and gender. Learners do not react in the same way nor achieve equally well with different modes of learning activities. The authors suggest that mathematics instruction should provide for the students' preferred mode of processing, with extension and elaboration in an alternate mode of processing.
Bourque, D. R., & Carlson, G. R. (1987). Hands-on versus computer simulation methods in chemistry. Journal of Chemical Education, 64(3), 232-234.
Bourque and Carlson outline the results of a two-part study on computer-assisted simulation in chemical education. The study focused on examining and comparing the cognitive effectiveness of a traditional hands-on laboratory exercise with a computer-simulated program on the same topic. In addition, the study sought to determine if coupling these two formats in a specific sequence would provide optimum student learning. The participants were 51 students from general chemistry classes in high school, and they worked on microcomputers for the research activities. The students completed both a pretest and posttest. The results indicate that the hands-on experiment format followed by the computer-simulation format provided the highest cumulative scores on the examinations. The authors recommend using computer simulations as part of post-laboratory activities in order to reinforce learning and support the learning process.
Bricken, M., & Byrne, C. M. (1992). Summer students in virtual reality: A pilot study on educational applications of virtual reality technology. Seattle, WA: University of Washington.
The goal of this study was to take a first step in evaluating the potential of virtual reality (VR) as a learning environment. The study took place at a technology-oriented summer day camp for students ages 5-18, where student activities center around hands-on exploration of new technology during one-week sessions. Information on 59 students was gathered during a 7-week period in order to evaluate VR in terms of students' behavior and opinions as they used VR to construct and explore their own virtual worlds. Results indicate that students demonstrated rapid comprehension of complex concepts and skills. They also reported fascination with the software and a high desire to use VR to express their knowledge and imagination. The authors concluded that VR is a significantly compelling creative environment in which to teach and learn.
Brna, P. (1987). Confronting dynamics misconceptions. Instructional Science, 16, 351-379.
The author discusses problems students have with learning about Newtonian dynamics and kinematics, focusing on the assumption that learning is promoted through confronting students with their own misconceptions. Brna explains a computer-based modeling environment entitled DYNLAB and describes a study with high school boys in Scotland employing it.
Choi, B., & Gennaro, E. (1987). The effectiveness of using computer simulated experiments on junior high students' understanding of the volume displacement concept. Journal of Research in Science Teaching, 24(6), 539-552.
Choi and Gennaro compared the effectiveness of microcomputer-simulated experiences with that of parallel instruction involving hands-on laboratory experiences for teaching the concept of volume displacement to junior high students. They also assessed the differential effect on students' understanding of volume displacement using student gender as an additional independent variable. The researchers also compared both treatment groups in degree of retention after 45 days. The participants included 128 students from eighth-grade earth science classes. It was found that the computer-simulated experiences were as effective as hands-on laboratory experiences, and that males who had hands-on laboratory experiences performed better on the posttest than females who had hands-on laboratory experiences. There were no significant differences in performance when comparing males with females using the computer simulation in learning the displacement concept. An ANOVA of the retention test scores revealed that males in both treatment conditions retained knowledge of volume displacement better than females.
Geban, O., Askar, P., & Ozkan, I. (1992). Effects of computer simulations and problem-solving approaches on high school students. Journal of Educational Research, 86(1), 5-10.
The purpose of this study was to investigate the effects of a computer-simulated experiment (CSE) and a problem-solving approach on students' chemistry achievement, science process skills, and attitudes toward chemistry at the high school level. The sample consisted of 200 ninth-grade students, and the treatment was carried out over nine weeks. Two experimental groups (one using the CSE approach, one using the problem-solving approach) were compared with a control group employing a conventional approach. Four instruments were used in the study: Chemistry Achievement Test, Science Process Skill Test, Chemistry Attitude Scale, and Logical Thinking Ability Test. The results indicate that the computer-simulated experiment approach and the problem-solving approach produced significantly greater achievement in chemistry and science process skills than the conventional approach did. The CSE approach produced significantly more positive attitudes toward chemistry than the other two methods, with the conventional approach being the least effective.
Gorsky, P., & Finegold, M. (1992). Using computer simulations to restructure students' conceptions of force. Journal of Computers in Mathematics and Science Teaching, 11, 163-178.
Gorsky and Finegold report on the development and application of a series of computer programs which simulate the outcomes of students' perceptions regarding forces acting on objects at rest or in motion. The dissonance-based strategy for achieving conceptual change uses an arrow-based vector language to enable students to express their conceptual understanding.
Huppert, J., Lomask, S. M., & Lazarowitz, R. (2002). Computer simulations in the high school: Students' cognitive stages, science process skills and academic achievement in microbiology. International Journal of Science Education, 24(8), 803-821.
This study is based on a computer simulation program entitled "The Growth Curve of Microorganisms," which required 181 tenth-grade biology students in Israel to use problem-solving skills while simultaneously manipulating three independent variables in one simulated environment. The authors hoped to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved higher academic achievement than their counterparts in the control group. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science that require high cognitive skills.
Jackson, D. F. (1997). Case studies of microcomputer and interactive video simulations in middle school earth science teaching. Journal of Science Education and Technology, 6(2), 127-141.
The author synthesizes the results of three case studies of middle school classrooms in which computer and video materials were used to teach topics in earth and space science through interactive simulations. The cases included a range of middle school grade levels (sixth through eighth), teachers' levels of experience (student teacher through a 16-year veteran), levels of technology use (interactive videodisc), and classroom organization patterns in relation to technological resources (teacher-centered presentations through small-group activities). The author was present in all class sessions and gathered data by performing teacher interviews, videotaping classes, taking interpretive field notes, and copying the students' worksheets. In light of these findings, suggestions are made regarding improved design principles for such materials and how middle school science teachers might better conduct lessons using simulations.
Jiang, Z., & Potter, W. D. (1994). A computer microworld to introduce students to probability. Journal of Computers in Mathematics and Science Teaching, 13(2), 197-222.
The objective of this paper is to describe a simulation-orientated computer environment (CHANCE) for middle and high school students to learn introductory probability and a teacher experiment to evaluate its effectiveness. CHANCE is composed of five experimental sub-environments: Coins, Dice, Spinners, Thumbtack and Marbles. The authors desired detailed information from a small sample rather than a large sample so the participants included three boys (a fifth, sixth and eighth grader) and a girl (a junior). They were divided into two groups: Group 1 consisted of the younger students and Group 2 of the older. Each group worked with the investigator on a computer for two 1-hour sessions per week for five weeks. The results indicate that the teaching and learning activities carried out in the experimental environment provided by CHANCE were successful and supported the authors' belief that CHANCE has great potential in teaching and learning introductory probability. The authors caution generalizing these results, as there were only four students included in the study.
Johnson, A., Moher, T., Choo, Y., Lin, Y. J., & Kim, J. (2002). Augmenting elementary school education with VR. IEEE Computer Graphics and Applications, March/April, 6-9.
This article reviews a project in which ImmersaDesk applications have been employed in an elementary school for two years to determine if virtual environments (VEs) have helped children make sense of mathematics and scientific phenomenon. Since the beginning of the project, more than 425 students from grades K-6 have used the ImmersaDesk. The ImmersaDesk contains a 6-foot by 4-foot screen that allows 3-4 students to interact with each other while interacting with the VE on the screen. The positive feedback from the students and teachers indicate that VR can successfully augment scientific education as well as help to equalize the learning environment by engaging students in all ability levels.
Kangassalo, M. (1994). Children's independent exploration of a natural phenomenon by using a pictorial computer-based simulation. Journal of Computing in Childhood Education, 5(3/4), 285-297.
This paper is one part of an investigation whose aim was to examine to what extent the independent use of a pictorial computer simulation of a natural phenomenon could help children organize the phenomenon and form an integrated picture of it. The author concentrated on describing children's exploration processes, specifically those of 11 seven-year-old first-graders. The selected natural phenomenon was the variation in sunlight and the heat of the sun as experienced on earth, related to the positions of the earth and the sun in space. The children were divided into four groups according to what kind of conceptual models they had before the use of the simulation. Children's conceptual models before the use of the simulation formed a basis from which the exploration of the phenomenon was activated. Children used the computer simulation over four weeks, and each child differed in the amount of operating time within each session (an average of 65 minutes). The more developed and integrated their conceptual model, the more the children's exploration involved purposeful investigating and experimenting.
Kelly, P. R. (1997-98). Transfer of learning from a computer simulation as compared to a laboratory activity. Journal of Educational Technology Systems, 26(4), 345-351.
In this article, Kelly discusses the computer program he wrote that simulates a mineral identification activity in an Earth Science classroom. The research question was whether students who used the computer simulation could transfer their knowledge and perform as well on the New York State Regents Earth Science Exam as students who received instruction in a laboratory-based exercise. The results indicated no significant difference in the test scores of the two groups.
Kinzer, C. K., Sherwood, R. D., & Loofbourrow, M. C. (1989). Simulation software vs. expository text: a comparison of retention across two instructional tools. Reading Research and Instruction, 28(2), 41-49.
The authors examined the performance differences between two fifth grade classes. The first class was taught material about a food chain through a computer simulation and the second class was taught the same material by reading an expository text. The results indicated that the children in the second class, the expository text condition, did significantly better on the posttest than the students who received the information through a computer simulation program.
Lewis, E. L., Stern, J. L., & Linn, M. C. (1993). The effect of computer simulations on introductory thermodynamics understanding. Educational Technology, 33(1), 445-458.
The authors' purpose was to demonstrate the impact of computer simulations on eighth-grade students' ability to generalize information about thermodynamics to naturally occurring problems. Five classes studied the reformulated Computer as Lab Partner (CLP) curriculum, which makes naturally occurring events accessible through computer simulation. The results indicate that the students understood the simulations and successfully integrated the thermodynamics simulation information into real-world processes.
Michael, K. Y. (2001). The effect of a computer simulation activity versus a hands-on activity on product creativity in technology education. Journal of Technology Education, 13(1), 31-43.
The purpose of this study was to determine if computer simulated activities had a greater effect on product creativity than hands-on activity. Michael defined a creative product as "one that possesses some measure of both unusualness (originality) and usefulness." He hypothesized that there would be no difference in product creativity between the computer simulated group and the hands-on group. The subjects were seventh grade technology education students. The experimental group used Gryphon Bricks, a virtual environment that allows students to manipulate Lego-type bricks. The control group used Classic Lego Bricks. The Creative Product Semantic Scale (CPSS) was used to determine product creativity. The results indicated no differences between the two groups in regard to product creativity, originality, or usefulness.
Mintz, R. (1993). Computerized simulations as an inquiry tool. School Science and Mathematics, 93(2), 76-80.
The purpose of this study was to determine whether exposure to computerized simulations expands and improves students' classroom inquiry work. The subjects in this study were fourteen and fifteen years old. The virtual environment consisted of a fish pond in which students had three consecutive assignments, with a new variable added in each assignment. The subjects were asked to formulate hypotheses, conduct experiments, observe and record data, and draw conclusions. As the experiments progressed, the students were able to answer questions using fewer simulation runs. The results support the author's hypothesis that exposure to computerized simulations can improve students' inquiry work.
Park, J. C. (1993). Time studies of fourth graders generating alternative solutions in a decision-making task using models and computer simulations. Journal of Computing in Childhood Education, 4(1), 57-76.
The purpose of this study was to determine whether the use of computer simulations had any effect on the time it took students to respond to a given task. The participants in this study were fourth graders who were split into four groups. They were given a decision-making task that required either hands-on manipulation of objects or computer-simulated object manipulation. Three modifications of the computer simulation were implemented in the study. The first modification was computer simulation with keyboard input. The second modification was computer simulation with keyboard input and objects present for reference. The third modification was computer simulation with light input. Results indicated that students took longer to complete a task when they had to manipulate it using the computer simulation.
Rivers, R. H., & Vockell, E. (1987). Computer simulations to stimulate scientific problem solving. Journal of Research in Science Teaching, 24(5), 403-415.
The authors' purpose was to find out whether computerized science simulations could help students become better at scientific problem solving. There were two experimental groups: one that received guided discovery and one that received unguided discovery. There was also a control group that received no simulations. The results indicated that the students in the guided discovery condition performed better than the unguided discovery and control groups.
Roberts, N., & Blakeslee, G. (1996). The dynamics of learning in a computer simulation environment. Journal of Science Teacher Education, 7(1) 41-58.
The authors conducted a pilot study in which they sought to better understand expert computer simulations in a middle school science classroom. In light of the focus on hands-on science instruction, the authors wanted to study this variable along with varying pedagogical instructional procedures. The study was conducted with 8 student participants of diverse abilities. The first half of the experiment took place in the science classroom in collaboration with the teacher. The second half of the study was conducted away from the classroom. The authors report three findings about computer simulations: (a) computer simulations can be used effectively for learning and concept development when teachers select pedagogical style based on learner needs versus student learning gains; (b) students learn more effectively when teachers directly teach students to build basic science knowledge and promote engagement; and (c) student learning is improved when teachers vary presentation style between direct instruction and student exploration. The authors conclude that in the area of computer simulation, hands-on experience is only one of several important variables in science learning.
Ronen, M., & Eliahu, M. (1999). Simulation as a home learning environment - students' views. Journal of Computer Assisted Learning, 15, 258-268.
The authors conducted a pilot study designed to research the possibility of integrating simulation-based activities into an existing homework structure during a two-month period in a ninth-grade setting. Students had weekly simulation homework consisting of a four- to six-task assignment. Student views were collected using a questionnaire, personal student interviews, teacher interviews, and a final exam related to the content of the course. According to the authors, most students favored using simulations as a home learning process. They reported that this work was more stimulating and that the procedures enabled them to be more self-regulated learners. Teachers reported being pleasantly surprised by the outcomes in student learning using the simulations, and realized that their physics instruction should be reorganized to make the most of the computer simulations. The authors conclude that computer simulations and similar tools should be explored further.
Rose, D., & Meyer, A. (2002). Teaching Every Student in the Digital Age: Universal Design for Learning, ASCD.
This book is the first comprehensive presentation of the principles and applications of Universal Design for Learning (UDL)--a practical, research-based framework for responding to individual learning differences and a blueprint for the modern redesign of education. As a teacher in a typical classroom, there are two things you know for sure: your students have widely divergent needs, skills, and interests; and you're responsible for helping every one of them attain the same high standards. This text lays the foundation of UDL, including neuroscience research on learner differences, the effective uses of new digital media in the classroom, and how insights about students who do not "fit the mold" can inform the creation of flexible curricula that help everyone learn more effectively. The second part of the book addresses practical applications of Universal Design for Learning and how UDL principles can help you.
Schwienhorst, K. (2002). Why virtual, why environments? Simulation and Gaming, 33 (2), 196-209.
This article was written to help clarify the definitions of Computer-Assisted Language Learning (CALL) and virtual reality and how each supports learning. The manuscript includes a review of theoretical perspectives regarding learner autonomy, including individual-cognitive views of learning, personal construct theory, and experiential and experimental approaches to learning. The author notes the instructional benefits of virtual reality environments as learning tools, which include greater self-awareness, support for interaction, and the enabling of real-time collaboration. Finally, the author calls for experimental research in this area to verify the theory.
Sherwood, R. D., & Hasselbring. T. (1985/86). A comparison of student achievement across three methods of presentation of a computer-based science simulation. Computers in the Schools, 2(4), 43-50.
The authors report on the results of a study that focused on presentation methods of computer-based simulations in science. Specifically, three presentation methods were analyzed: (a) computer simulation with pairs of students working on one computer, (b) computer simulation with an entire class, and (c) a game-type simulation without a computer. All conditions were studied in classrooms of sixth-grade students. Results indicate that there may be a small benefit to the large-group simulation experience, especially on immediate measures. These results imply that a computer for every student may not be necessary for students to benefit from computer "instruction" using simulations. The authors noted that student interest and some gender preferences might also influence performance in the simulation and affect measurement results.
Song, K., Han, B., & Yul Lee, W. (2000). A virtual reality application for middle school geometry class. Paper presented at the International Conference on Computers in Education/International Conference on Computer-Assisted Instruction, Taipei, Taiwan.
Stratford, S. J. (1997). A review of computer-based model research in precollege science classroom. Journal of Computers in Mathematics and Science Teaching, 16(1), 3-23.
The author conducted a 10-year review of the literature on computer-based models and simulations in precollege science. Three main areas of computer-based models were identified in the research: (a) preprogrammed simulations, (b) creating dynamic modeling environments, and (c) programming environments for simulations. Researchers noted that not enough empirical evidence was available to provide conclusive evidence about student performance. It was noted that anecdotal evidence supported high engagement in the computer-based models for most subjects. The author concluded by proposing a number of future research studies, as this line of research is still in its infancy.
Sykes, W., & Reid, R. (1990). Virtual reality in schools: The ultimate educational technology. THE Journal (Technological Horizons in Education), 27(7), 61.
The authors conducted a pilot study in elementary and high school classrooms to study the use of virtual reality technology as an enhancement to the traditional curriculum. The major finding was that the engagement factor when using virtual reality enabled students to take a more active learning role. The authors argue that although most virtual reality applications in education are in science and mathematics at this time, the technology fits all curricula, and they see great potential across content and grade applications. Additional research should be conducted to validate these initial findings.
Taylor, W. (1997). Student responses to their immersion in a virtual environment. Paper presented at the Annual Meeting of the Educational Research Association, Chicago, Illinois.
The purpose of this study was to characterize students' responses to immersion in a virtual reality environment and their perceptions of this environment. Two thousand eight hundred seventy-two elementary, middle school, and high school students attended a thirty-minute presentation on virtual reality and then visited an immersive virtual environment. Following this virtual reality immersion, students answered a questionnaire, rating different facets of the experience. Questionnaire results suggest that although nearly every student enjoyed the experience of navigating a virtual environment such as this one, for many of them this task was quite difficult, and for some fairly disorienting. Results also suggested that the ability to see a virtual environment and navigate through it influences the environment's perceived authenticity. The author suggests that future research be focused on technical improvements to virtual reality environments.
Vasu, E. S., & Tyler, D. K. (1997). A comparison of the critical thinking skills and spatial ability of fifth grade children using simulation software or Logo. Journal of Computing in Childhood Education, 8(4), 345-363.
The authors conducted a 3-group experimental study examining the effects of using Logo or problem-solving simulation software. The experimental groups were taught a 4-step problem-solving approach. No significant differences were found on spatial or critical thinking skills until controlling for Logo mastery. With this control, significant differences were found for spatial scores, but not for critical thinking. The authors conclude that findings in such research require significant student learning and practice time. Additionally, teachers need substantial training to implement the program with success. The authors recommend further research to investigate the power of simulation software.
Verzoni, K. A. (1995, October). Creating simulations: Expressing life-situated relationships in terms of algebraic equations. Paper presented at the Annual Meeting of the Northeastern Educational Research Association, Ellenville, NY.
Verzoni investigated the development of students' abilities to see connections between mathematical equations and lifelike problem-solving environments. Students were required to express cause-and-effect relationships using computer simulation software. Forty-nine eighth-grade students participated in a quasi-experimental treatment/control study with a posttest-only measure. The reported results suggest that simulation activities developed students' abilities to make essential connections between algebraic expressions and real-life relationships. The intervention occurred over 9 class periods. The author worked to capitalize on the concept of providing a purpose for algebraic work by having students create lifelike simulations, appealing to the learners' own interests and background knowledge.
Willing, K. R. (1988). Computer simulations: Activating content reading. Journal of Reading, 31(5) 400-409.
The author capitalizes on the notion of student motivation and engagement in this descriptive study. Students ranging from elementary to high school age and across a range of abilities, from students with identified disabilities to students noted as able and gifted (N=222), participated in this study. Willing focused on reading instruction while using computer simulation software in 9 classrooms for a three-week period. Teachers introduced and taught a unit using a simulation software program. Students worked in groups of 2 to 6 as independent learning groups. Observations focused on type of reading (silent, choral, aloud, sub-vocal, and in turns), group discussions about the content, vocabulary development (use of terms and language specific to the various simulations), and the outcome of the simulation (could the group help the simulation survive). The author concludes that these preliminary indicators favor the use of simulations to stimulate learner interest and cooperation in reading and understanding the content of the lifelike computer simulation.
Woodward, J., Carnine, D., & Gersten, R. (1988). Teaching problem solving through computer simulations. American Educational Research Journal, 25(1), 72-86.
The authors' purpose in this research was to study the effectiveness of computer simulations in content area instruction, in this case health, with 30 secondary students with high-incidence disabilities. Participants were randomly assigned to one of two instructional groups: (a) teacher instruction with traditional practice/application activities, and (b) teacher instruction plus computer simulation. Health content was common across the groups for the 12 days of intervention. At the conclusion of the intervention, participants were tested on content facts, concepts, and health-related problem-solving issues. Results indicated a significant difference favoring the simulation group, with the greatest difference in the problem-solving skills area. The authors recommend combining effective teaching and strategic instructional processes with computer simulations to increase students' factual knowledge and higher-order thinking skills.
Yair, Y., Mintz, R., & Litvak, S. (2001). 3-D virtual reality in science education: An implication for astronomy teaching. Journal of Computers in Mathematics and Science Education, 20(3), 293-301.
This study introduces the reader to the virtual environment. This report summarizes the use of this technology to reinforce the hypothesis that experience in three-dimensional space will increase learning and understanding of the solar system. With this technology, students are able to observe and manipulate inaccessible objects, variables, and processes in real time. The ability to make what is abstract and intangible concrete and manipulable enables the learner to study natural phenomena and abstract concepts. Thus, according to the authors, bridging the gap between the concrete world of nature and the abstract world of concepts and models can be accomplished with the virtual environment. Virtual environments allow for powerful learning experiences that overcome the previously uni-dimensional view of the earth and space provided in texts and maps.
Yildiz, R., & Atkins, M. (1996). The cognitive impact of multimedia simulations on 14 year old students. British Journal of Educational Technology, 27(2), 106-115.
The authors of this research designed a study to evaluate the effectiveness of three types of multimedia simulations (physical, procedural, and process) for teaching the scientific concept of energy to high school students. The researchers attempted to employ good experimental design, with six cells of students and a pre-/post-test design. The authors report that greater and more varied patterns of interaction were found for the procedural and process simulations than for the physical group. They conclude that variations in student characteristics and simulation type affect outcomes. However, the physical simulation was found to have produced greater cognitive gain than the other simulations. The authors also emphasize the need for further control and experimentation in this area.
Zietsman, A.I., & Hewson, P.W. (1986). Effect of instruction using microcomputer simulations and conceptual change strategies on science learning. Journal of Research in Science Teaching, 23, 27-39.
The focus of this research was to determine the effects of instruction using microcomputer simulations and conceptual change strategies for 74 students in high school and the freshman year of college. The computer simulation program was designed based on the conceptual change model of learning. The authors report finding significant differences from pre- to post-measures for students receiving the simulations; these students had more accurate conceptions of the construct of velocity. They conclude that science instruction that employs conceptual change strategies is effective, especially when provided by computer simulation.
This content was developed pursuant to cooperative agreement #H324H990004 under CFDA 84.324H between CAST and the Office of Special Education Programs, U.S. Department of Education. However, the opinions expressed herein do not necessarily reflect the position or policy of the U.S. Department of Education or the Office of Special Education Programs and no endorsement by that office should be inferred.
Cite this paper as follows:
Strangman, N., & Hall, T. (2003). Virtual reality/simulations. Wakefield, MA: National Center on Accessing the General Curriculum. Retrieved [insert date] from http://aim.cast.org/learn/historyarchive/backgroundpapers/virtual_simula... | http://aim.cast.org/learn/historyarchive/backgroundpapers/virtual_simulations | 13 |
15 | Chapter 3 Functions
3.1 Function calls
In the context of programming, a function is a named sequence of statements that performs a computation. When you define a function, you specify the name and the sequence of statements. Later, you can “call” the function by name. We have already seen one example of a function call:
>>> type(32)
<type 'int'>
The name of the function is type. The expression in parentheses is called the argument of the function. The result, for this function, is the type of the argument.
It is common to say that a function “takes” an argument and “returns” a result. The result is called the return value.
3.2 Type conversion functions
Python provides built-in functions that convert values from one type to another. The int function takes any value and converts it to an integer, if it can, or complains otherwise:
>>> int('32')
32
>>> int('Hello')
ValueError: invalid literal for int(): Hello
int can convert floating-point values to integers, but it doesn’t round off; it chops off the fraction part:
>>> int(3.99999)
3
>>> int(-2.3)
-2
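If you want conventional rounding rather than truncation, one option (my sketch, not from the text) is to combine int with the built-in round function:

>>> int(round(3.99999))
4
>>> int(round(-2.3))
-2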
float converts integers and strings to floating-point numbers:
>>> float(32)
32.0
>>> float('3.14159')
3.14159
Finally, str converts its argument to a string:
>>> str(32)
'32'
>>> str(3.14159)
'3.14159'
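These conversion functions can also be combined. A small sketch of my own: going through float first lets you turn a numeric string that contains a decimal point into an integer, which int alone would reject:

>>> int(float('3.99999'))
3
>>> str(int('32') + 10)
'42'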
3.3 Math functions
Python has a math module that provides most of the familiar mathematical functions. A module is a file that contains a collection of related functions.
Before we can use the module, we have to import it:
>>> import math
This statement creates a module object named math. If you print the module object, you get some information about it:
>>> print math
<module 'math' from '/usr/lib/python2.5/lib-dynload/math.so'>
The module object contains the functions and variables defined in the module. To access one of the functions, you have to specify the name of the module and the name of the function, separated by a dot (also known as a period). This format is called dot notation.
>>> ratio = signal_power / noise_power
>>> decibels = 10 * math.log10(ratio)

>>> radians = 0.7
>>> height = math.sin(radians)
The first example uses math.log10 to compute a signal-to-noise ratio in decibels (assuming that signal_power and noise_power are defined).
The second example finds the sine of radians. The name of the variable is a hint that sin and the other trigonometric functions (cos, tan, etc.) take arguments in radians. To convert from degrees to radians, divide by 360 and multiply by 2 π:
>>> degrees = 45
>>> radians = degrees / 360.0 * 2 * math.pi
>>> math.sin(radians)
0.707106781187
The expression math.pi gets the variable pi from the math module. The value of this variable is an approximation of π, accurate to about 15 digits.
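You can inspect that approximation directly; the output below is shown as Python 2 would display it (the interactive prompt shows more digits than print does):

>>> math.pi
3.1415926535897931
>>> print math.pi
3.14159265359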
If you know your trigonometry, you can check the previous result by comparing it to the square root of two divided by two:
>>> math.sqrt(2) / 2.0
0.707106781187
3.4 Composition

So far, we have looked at the elements of a program—variables, expressions, and statements—in isolation, without talking about how to combine them.
One of the most useful features of programming languages is their ability to take small building blocks and compose them. For example, the argument of a function can be any kind of expression, including arithmetic operators:
x = math.sin(degrees / 360.0 * 2 * math.pi)
And even function calls:
x = math.exp(math.log(x+1))
Almost anywhere you can put a value, you can put an arbitrary expression, with one exception: the left side of an assignment statement has to be a variable name. Any other expression on the left side is a syntax error.
>>> minutes = hours * 60     # right
>>> hours * 60 = minutes     # wrong!
SyntaxError: can't assign to operator
3.5 Adding new functions
So far, we have only been using the functions that come with Python, but it is also possible to add new functions. A function definition specifies the name of a new function and the sequence of statements that execute when the function is called.
Here is an example:
def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."
def is a keyword that indicates that this is a function
definition. The name of the function is print_lyrics.
The empty parentheses after the name indicate that this function doesn’t take any arguments.
The first line of the function definition is called the header; the rest is called the body. The header has to end with a colon and the body has to be indented. By convention, the indentation is always four spaces (see Section 3.13). The body can contain any number of statements.
The strings in the print statements are enclosed in double quotes. Single quotes and double quotes do the same thing; most people use single quotes except in cases like this where a single quote (which is also an apostrophe) appears in the string.
If you type a function definition in interactive mode, the interpreter prints ellipses (...) to let you know that the definition isn’t complete:
>>> def print_lyrics():
...     print "I'm a lumberjack, and I'm okay."
...     print "I sleep all night and I work all day."
...
To end the function, you have to enter an empty line (this is not necessary in a script).
Defining a function creates a variable with the same name.
>>> print print_lyrics
<function print_lyrics at 0xb7e99e9c>
>>> print type(print_lyrics)
<type 'function'>
The value of print_lyrics is a function object, which has type 'function'.
The syntax for calling the new function is the same as for built-in functions:
>>> print_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
Once you have defined a function, you can use it inside another
function. For example, to repeat the previous refrain, we could write
a function called repeat_lyrics:
def repeat_lyrics():
    print_lyrics()
    print_lyrics()
And then call repeat_lyrics:
>>> repeat_lyrics()
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
I'm a lumberjack, and I'm okay.
I sleep all night and I work all day.
But that’s not really how the song goes.
3.6 Definitions and uses
Pulling together the code fragments from the previous section, the whole program looks like this:
def print_lyrics():
    print "I'm a lumberjack, and I'm okay."
    print "I sleep all night and I work all day."

def repeat_lyrics():
    print_lyrics()
    print_lyrics()

repeat_lyrics()
This program contains two function definitions: print_lyrics and repeat_lyrics.
As you might expect, you have to create a function before you can execute it. In other words, the function definition has to be executed before the first time it is called.
Exercise 1 Move the last line of this program to the top, so the function call appears before the definitions. Run the program and see what error message you get.
Exercise 2 Move the function call back to the bottom and move the definition of print_lyrics after the definition of repeat_lyrics. What happens when you run this program?
3.7 Flow of execution
In order to ensure that a function is defined before its first use, you have to know the order in which statements are executed, which is called the flow of execution.
Execution always begins at the first statement of the program. Statements are executed one at a time, in order from top to bottom.
Function definitions do not alter the flow of execution of the program, but remember that statements inside the function are not executed until the function is called.
A function call is like a detour in the flow of execution. Instead of going to the next statement, the flow jumps to the body of the function, executes all the statements there, and then comes back to pick up where it left off.
That sounds simple enough, until you remember that one function can call another. While in the middle of one function, the program might have to execute the statements in another function. But while executing that new function, the program might have to execute yet another function!
Fortunately, Python is good at keeping track of where it is, so each time a function completes, the program picks up where it left off in the function that called it. When it gets to the end of the program, it terminates.
What’s the moral of this sordid tale? When you read a program, you don’t always want to read from top to bottom. Sometimes it makes more sense if you follow the flow of execution.
3.8 Parameters and arguments
Some of the built-in functions we have seen require arguments. For example, when you call math.sin you pass a number as an argument. Some functions take more than one argument: math.pow takes two, the base and the exponent.
Inside the function, the arguments are assigned to variables called parameters. Here is an example of a user-defined function that takes an argument:
def print_twice(bruce):
    print bruce
    print bruce
This function assigns the argument to a parameter named bruce. When the function is called, it prints the value of the parameter (whatever it is) twice.
This function works with any value that can be printed.
>>> print_twice('Spam')
Spam
Spam
>>> print_twice(17)
17
17
>>> print_twice(math.pi)
3.14159265359
3.14159265359
The same rules of composition that apply to built-in functions also
apply to user-defined functions, so we can use any kind of expression
as an argument for print_twice:
>>> print_twice('Spam '*4)
Spam Spam Spam Spam
Spam Spam Spam Spam
>>> print_twice(math.cos(math.pi))
-1.0
-1.0
The argument is evaluated before the function is called, so
in the examples the expressions 'Spam '*4 and math.cos(math.pi) are only evaluated once.
You can also use a variable as an argument:
>>> michael = 'Eric, the half a bee.'
>>> print_twice(michael)
Eric, the half a bee.
Eric, the half a bee.
The name of the variable we pass as an argument (michael) has
nothing to do with the name of the parameter (bruce). It
doesn’t matter what the value was called back home (in the caller); here in print_twice, we call everybody bruce.
3.9 Variables and parameters are local
When you create a variable inside a function, it is local, which means that it only exists inside the function. For example:
def cat_twice(part1, part2):
    cat = part1 + part2
    print_twice(cat)
This function takes two arguments, concatenates them, and prints the result twice. Here is an example that uses it:
>>> line1 = 'Bing tiddle '
>>> line2 = 'tiddle bang.'
>>> cat_twice(line1, line2)
Bing tiddle tiddle bang.
Bing tiddle tiddle bang.
When cat_twice terminates, the variable cat is destroyed. If we try to print it, we get an error:

>>> print cat
NameError: name 'cat' is not defined
Parameters are also local.
For example, outside print_twice, there is no such thing as bruce.
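A quick demonstration, assuming the print_twice definition from above; the parameter bruce exists only while print_twice is running:

>>> print_twice('Spam')
Spam
Spam
>>> print bruce
NameError: name 'bruce' is not defined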
3.10 Stack diagrams
To keep track of which variables can be used where, it is sometimes useful to draw a stack diagram. Like state diagrams, stack diagrams show the value of each variable, but they also show the function each variable belongs to.
Each function is represented by a frame. A frame is a box with the name of a function beside it and the parameters and variables of the function inside it. The stack diagram for the previous example looks like this:
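The printed book shows this diagram as a figure; a rough text rendition of it (my sketch) looks like this, with each frame labeled on the left and its variables pointing to their values:

__main__      line1 --> 'Bing tiddle '
              line2 --> 'tiddle bang.'

cat_twice     part1 --> 'Bing tiddle '
              part2 --> 'tiddle bang.'
              cat   --> 'Bing tiddle tiddle bang.'

print_twice   bruce --> 'Bing tiddle tiddle bang.'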
The frames are arranged in a stack that indicates which function
called which, and so on. In this example, print_twice was called by cat_twice, and cat_twice was called by __main__, which is a special name for the topmost frame. When you create a variable outside of any function, it belongs to __main__.
Each parameter refers to the same value as its corresponding argument. So, part1 has the same value as line1, part2 has the same value as line2, and bruce has the same value as cat.
If an error occurs during a function call, Python prints the
name of the function, and the name of the function that called
it, and the name of the function that called that, all the
way back to __main__.

For example, if you try to access cat from within print_twice, you get a NameError:
Traceback (innermost last):
  File "test.py", line 13, in __main__
    cat_twice(line1, line2)
  File "test.py", line 5, in cat_twice
    print_twice(cat)
  File "test.py", line 9, in print_twice
    print cat
NameError: name 'cat' is not defined
This list of functions is called a traceback. It tells you what program file the error occurred in, and what line, and what functions were executing at the time. It also shows the line of code that caused the error.
The order of the functions in the traceback is the same as the order of the frames in the stack diagram. The function that is currently running is at the bottom.
3.11 Fruitful functions and void functions
Some of the functions we are using, such as the math functions, yield
results; for lack of a better name, I call them fruitful
functions. Other functions, like print_twice, perform an action but don't return a value. They are called void functions.
When you call a fruitful function, you almost always want to do something with the result; for example, you might assign it to a variable or use it as part of an expression:
x = math.cos(radians)
golden = (math.sqrt(5) + 1) / 2
When you call a function in interactive mode, Python displays the result:
>>> math.sqrt(5)
2.2360679774997898
But in a script, if you call a fruitful function all by itself, the return value is lost forever!
math.sqrt(5)

This script computes the square root of 5, but since it doesn’t store or display the result, it is not very useful.
Void functions might display something on the screen or have some other effect, but they don’t have a return value. If you try to assign the result to a variable, you get a special value called None.
>>> result = print_twice('Bing')
Bing
Bing
>>> print result
None
The value None is not the same as the string 'None'. It is a special value that has its own type:
>>> print type(None)
<type 'NoneType'>
The functions we have written so far are all void. We will start writing fruitful functions in a few chapters.
3.12 Why functions?
It may not be clear why it is worth the trouble to divide a program into functions. There are several reasons:
- Creating a new function gives you an opportunity to name a group of statements, which makes your program easier to read and debug.
- Functions can make a program smaller by eliminating repetitive code. Later, if you make a change, you only have to make it in one place.
- Dividing a long program into functions allows you to debug the parts one at a time and then assemble them into a working whole.
- Well-designed functions are often useful for many programs. Once you write and debug one, you can reuse it.
3.13 Debugging

If you are using a text editor to write your scripts, you might run into problems with spaces and tabs. The best way to avoid these problems is to use spaces exclusively (no tabs). Most text editors that know about Python do this by default, but some don’t.
Tabs and spaces are usually invisible, which makes them hard to debug, so try to find an editor that manages indentation for you.
Also, don’t forget to save your program before you run it. Some development environments do this automatically, but some don’t. In that case the program you are looking at in the text editor is not the same as the program you are running.
Debugging can take a long time if you keep running the same, incorrect, program over and over!
Make sure that the code you are looking at is the code you are running.
If you’re not sure, put something like print 'hello' at the beginning of the program and run it again. If you don’t see hello, you’re not running the right program!
Exercise 3 Python provides a built-in function called len that returns the length of a string, so the value of len('allen') is 5.
Write a function named right_justify that takes a string named s as a parameter and prints the string with enough leading spaces so that the last letter of the string is in column 70 of the display.

>>> right_justify('allen')
                                                                 allen
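If you get stuck, one possible approach (a sketch of mine, not necessarily the book's solution) combines len with the string repetition operator:

def right_justify(s):
    # pad with spaces so the string ends in column 70
    print ' ' * (70 - len(s)) + s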
Exercise 4 A function object is a value you can assign to a variable or pass as an argument. For example, do_twice is a function that takes a function object as an argument and calls it twice:
def do_twice(f):
    f()
    f()
Here’s an example that uses do_twice to call a function named print_spam twice:
def print_spam():
    print 'spam'

do_twice(print_spam)
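The linked do_four.py goes a step further by passing a value along with the function. A sketch of where that leads (the two-argument do_twice and the helper print_once below are my assumptions, not the book's code):

def do_twice(f, value):
    f(value)
    f(value)

def do_four(f, value):
    do_twice(f, value)
    do_twice(f, value)

def print_once(value):
    print value

do_four(print_once, 'spam')   # prints 'spam' four times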
You can see my solution at thinkpython.com/code/do_four.py.
Exercise 5 This exercise can be done using only the statements and other features we have learned so far. Write a function that draws a grid like the following:

+ - - - - + - - - - +
|         |         |
|         |         |
|         |         |
|         |         |
+ - - - - + - - - - +
|         |         |
|         |         |
|         |         |
|         |         |
+ - - - - + - - - - +
You can see my solution at thinkpython.com/code/grid.py.
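For a flavor of what a solution can look like, here is one possible sketch (mine, not the official grid.py), using only def, function calls, and print:

def print_beam():
    # a horizontal boundary line
    print '+ - - - - + - - - - +'

def print_row():
    # one line of the interior of a row of cells
    print '|         |         |'

def print_rows():
    print_row()
    print_row()
    print_row()
    print_row()

def print_grid():
    print_beam()
    print_rows()
    print_beam()
    print_rows()
    print_beam()

print_grid()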
| http://www.greenteapress.com/thinkpython/html/book004.html | 13
19 | This lesson addresses the issue of credit, focusing on the importance of wise credit decisions, the risks lenders face, the role of interest or finance charges, and the credit user's responsibility to repay. Students read Mr. Popper's Penguins
and discuss the use of credit in the story. They play a game that demonstrates the importance of responsible use of credit.
Personal Finance Concepts: opportunity cost, credit, credit limit, interest or finance charge
Related Subject Areas: Language arts, geography, science, math
Instructional Objectives: Students will be able to:
- Define opportunity cost as the next best alternative given up when a choice is made.
- Define credit as an agreement to receive goods or services now and pay for them at a later date.
- Analyze the role of risk in making credit decisions.
- Identify the responsibilities of borrowers.
Students will need a week or so to read the book, Mr. Popper's Penguins. The lesson, which follows reading the book, will require two class periods of approximately 40 minutes each.
- Mr. Popper's Penguins, one book per student
- transparencies of Visual 1 and Activities 2 and 3
- one die for each group of 4-5 students
- copy of Activities 1, 3, and 5 for each student
- copy of Activities 2 and 4 for each group of 4-5 students
- game pieces (paper clips, pennies, lima beans, macaroni, small chips, etc.)
Procedure: Day One
- After students have read the book, Mr. Popper's Penguins, point out Antarctica on a world map. Explain that most of the world's penguins live at the South Pole. Discuss the following:
- What is it like at the South Pole? (extremely cold, windy, snow covered)
- Why did Mr. Popper have to remodel his home to accommodate Captain Cook? (It would be too warm for the penguins in an ordinary house.)
- Ask students to think about the time that Mr. Popper hired a serviceman to drill air holes in the refrigerator door and what Mr. Popper thought. (Mr. Popper was sad to pay $5 for the service man because he thought of "how many beans it would have bought for Mrs. Popper and the children.")
- Explain that Mr. Popper had to make a choice - spend $5 to change the refrigerator or buy beans for his family.
- Define opportunity cost as the next best alternative that is given up when a choice is made. Give some examples of opportunity cost from the story. (People hired Mr. Popper to either paint or wallpaper. If they chose to paint their kitchens, the opportunity cost would be wallpaper in the kitchen. Mr. Popper could either read a book or listen to the radio for entertainment. If he read a book, his opportunity cost would be listening to the radio. At the end of the book, Mr. Popper had to choose between allowing the penguins to appear in a series of movies or allowing them to go to the North Pole. When he decided to allow them to go to the North Pole, the opportunity cost was that they could not appear in the movies.)
- Discuss the following:
- What were the two ways that Mr. Popper could have spent his $5? (He could have paid the serviceman or bought beans for the family.)
- Which did Mr. Popper choose? (pay the service man)
- What was his opportunity cost? (buying beans for the family)
- Explain that Mr. Popper made the choice that he thought was best. He spent $5 to hire a serviceman to fix the refrigerator so Captain Cook would be comfortable. But, when the baby penguins were born, more room was needed for them to be comfortably cool. Discuss the following:
- What did Mr. Popper decide to do so that all the penguins would be cool? (He had a freezing plant installed in the basement.)
- Did you think the price would be higher or lower than the $5 the serviceman charged to change the refrigerator? Why? (Higher, because it was a bigger job.)
- Point out that the author says that "Mr. Popper had practically no money. However, he promised to pay as soon as he could, and the man let him have everything on credit."
- Explain that credit is an agreement to receive goods or services now and to pay for them at a later date. Discuss the following:
- What did Mr. Popper receive? (the freezer plant in the basement)
- Why did the man agree to let Mr. Popper have everything on credit? (Mr. Popper promised to pay as soon as he could.)
- Explain that the engineer trusted Mr. Popper to pay him back, but the engineer was taking a risk.
- Ask the students to identify the risk the engineer was taking. (Mr. Popper might not have paid him back.)
- Explain that the engineer trusted Mr. Popper for a number of reasons. He had a good job, owned his own home, had lived there for a long time, and was well known in the community. When people use credit to buy goods or services, they must prove that they are trustworthy. Often credit companies check on information about an applicant's employment and whether he or she owns a house or rents. This information helps the credit company by reducing the risk that a borrower might default on a credit agreement (that is, fail to repay the loan).
- Discuss the following:
- How do we know that Mr. Popper repaid his creditors? (The author says that when Mr. Popper earned income from the penguins' performances, the first thing he did "was to pay off the man who had installed the freezing plant in the basement. . . . Next he sent a check to the company who had been shipping the fresh fish all the way from the coast.")
- Why was Mr. Popper a trustworthy person? (He repaid his debt as soon as he was able.)
- Remind students that Mr. Popper used credit wisely. He was able to use credit for goods and services even though he did not have enough money to pay for them at the time. He made an agreement with the engineer to pay him when he got the money. Then Mr. Popper fulfilled his agreement by paying the man as soon as he got the first paycheck for the Performing Penguins.
- Ask students if they think that the engineer would give Mr. Popper credit again and why. (Yes, because Mr. Popper paid his debt on time.)
- Explain that when people buy things on credit, they benefit. They can use the thing they buy right away, even though they don't pay for it until some time in the future. Mr. Popper was able to have his basement remodeled immediately; he did not pay for the engineer's services until he earned income from the penguins' performances.
- Explain that when people use credit they usually must pay for the convenience. The fee that they pay for the credit is called interest or finance charge.
- Display Visual 1 and point out that if the students bought the bike with cash, it would cost $129. However, if they used credit to purchase the bike, it would cost $144 ($12 per month for 12 months). The finance charge or interest is $15 ($144 - $129 = $15); see the short arithmetic sketch after this list.
- Discuss the following:
- Why do store owners charge interest for using credit? (They are taking a risk, and they are giving consumers the opportunity to use products before they are paid for. Also, the store is lending its money and would like a return.)
- What risks do store owners that offer credit take? (The consumer might not repay the loan.)
- What do store owners give up when they offer credit (that is, what is their opportunity cost)? (When store owners give credit, they actually lend money to customers to make their purchases. When they give credit to one customer, they give up the opportunity to use the money they've lent in another way. For example, they could use it to purchase more goods to sell; they could place the money in savings; they could use it to offer credit to another customer.)
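The Visual 1 arithmetic can be summarised in a few lines of Python (a quick sketch using the figures given above; the variable names are illustrative only):

cash_price = 129            # price of the bike paid in cash
monthly_payment = 12        # credit terms: $12 per month
months = 12                 # for 12 months

credit_price = monthly_payment * months     # 144
finance_charge = credit_price - cash_price  # 15
print finance_charge                        # prints 15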
Procedure: Day Two
- Remind students that they have learned some basics about credit from the book, Mr. Popper's Penguins.
- Explain that students will play a game requiring them to make wise decisions about using credit. Display a visual of Activity 2 and distribute a copy of Activity 1 to each student. Read the directions together.
- Point out that the students have a credit limit of $130. A credit limit is the maximum amount that a borrower may borrow from a specific lender.
- Distribute a copy of Activity 3 to each student. Display a visual of Activity 3 and demonstrate how it is completed. For example, if a player's first three rolls of the die were 3, 6, and 1, here is how the tally sheet would look.
- Divide the class into groups of 4-5 students and distribute a copy of Activity 2, game pieces, a pair of scissors, a die, and a copy of Activity 4 to each group. Tell students to cut out the Credit Check Cards, shuffle them, and place them face down in the appropriate rectangle on the game board.
- Allow students time to play the game, checking to see whether they are completing the tally sheets accurately. After each team has declared a winner, discuss the following:
- What was your credit limit in the game? ($130)
- Why do stores usually set a credit limit? (to reduce their risk)
- What are the game consequences for going over your credit limit? ($10 fee)
- How much was the interest or finance charge on the cap? ($5) the jeans? ($10) the socks? ($2) the shoes? ($15) the shirt? ($7)
- Why do lenders charge interest for credit? (Lenders could choose to save the money and earn interest on it; therefore, they expect to earn a similar return on their money when they lend it.)
- What is the benefit in the game of buying on credit? (You can own what you want right away. You don't have to wait until you have the money to afford it.)
- What is the opportunity cost in the game of buying an item with cash? (another item that you could buy in the future with the money plus the interest)
- Remind students that Mr. Popper was a responsible user of credit because he paid his debts as soon as he could. Point out that students were responsible users of credit in the game, paying for their clothes as soon as they had the money.
- Remind students that using credit has benefits - people can buy goods and services now instead of waiting. But there's an opportunity cost for using credit - the other thing that could be bought in the future with the money and the interest or finance charge.
- Finally, remind students that producers take a risk when they offer credit to consumers. The consumers might not pay for the goods or services. On the other hand, consumers must be responsible and pay for the use of credit.
- Distribute catalogs from a local toy store. Ask each student to cut out a picture of a toy with a price of approximately $50. Have students glue their pictures near the top of a sheet of notebook paper and write these opening sentences below the picture: "I'd like to have this toy, but I don't have enough money. I must buy it on credit. If I buy it on credit . . ."
Have students complete a paragraph that:
- identifies their opportunity cost
- describes the risk the store owner is taking
- describes the risk they are taking
- Assign groups of students to design posters that identify the responsibilities of borrowers. They should use colored markers to illustrate their posters.
Have students complete Activity 6, Assessment Activities.
Benefit: Jamal's mom will have the outfit to enjoy in time for her birthday.
Opportunity cost: Something else Jamal would like to spend the money for in the future when he must repay.
Risks: The store owner can't be sure that Jamal will repay.
Responsibilities: Jamal must pay the store the money he owes.
Benefit: Guadalupe will have the camera to use right away.
Opportunity cost: Something else Guadalupe would like to spend the money for.
Risks: The store owner can't be sure that Guadalupe will repay.
Responsibilities: Guadalupe must pay the store owner the money she owes.
Benefit: Seth will be able to enjoy CDs right now instead of waiting.
Opportunity cost: Something else Seth would like to spend the money for.
Risks: The store owner can't be sure that Seth will pay what he owes.
Responsibilities: Seth has to pay for the CD player on time.
Benefit: Dr. Porada's clinic will have more business with the new X-Ray machine.
Opportunity cost: something else that the clinic could buy with the money.
Risks: The X-Ray company can't be sure that the clinic will be able to repay on time.
Responsibilities: Dr. Porada and the clinic must pay what they owe on time.
Benefit: The students will benefit immediately from using the new equipment.
Opportunity cost: Some other school supplies that could be bought with the money.
Risks: The company can't be sure that the PTO will pay what is owed.
Responsibilities: The school PTO must pay what it owes before the deadline.
Benefit: The Nguyen family will enjoy a vacation now.
Opportunity cost: They must give up something else that they could be doing now.
Risks: The travel company doesn't know for sure if the family will pay on time.
Responsibilities: The family must pay for the trip when it returns.
- Ask students to check their local newspapers for advertisements that contain information about credit terms, such as ads for automobiles, furniture, electronic equipment, carpeting, appliances. Have them show their ads to the class, identifying the length of the credit agreement, the amount of each monthly payment (if given), and the total cost of the item, including all finance charges. After their presentations, display the ads on a bulletin board with the title: "How'd You Like to Buy Some of These?"
- Have students role-play buyers and sellers who are negotiating credit agreements. Make sure they talk about opportunity cost, risk, and responsibility. | http://www.moneymanagement.org/Budgeting-Tools/Credit-Lesson-Plans/Mr-Poppers-Penguins.aspx | 13 |
26 | - Division Between Front and Rear
- Impact of the War
- Monuments and Commemoration
- What Women Thought
- Consistency of Attitudes
Both at the time of the Great War and in its immediate aftermath it was generally considered that the war had brought about a massive change in gender relations. By showing that women could take over male roles it was thought to have done more to emancipate women than years of feminist campaigning had been able to achieve. However, recent historiography now offers a different orthodoxy, summed up by Christine Bard and Françoise Thébaud:
La guerre n’a pas émancipé les femmes. Dans les faits, elle a renforcé la hiérarchie entre les sexes, bouleversé les relations entre les hommes et femmes, brouillé les identités sexuelles, et ce d’autant plus que les uns et les autres ont vécu une chronologie différente du conflit.
These arguments rest on several grounds: that the increase in female participation in the workforce has been exaggerated, that the war made men more hostile to feminism and women’s rights, that the issue of depopulation hampered feminism, and that the war halted the momentum that women and the feminist movement had achieved.
It will be argued that in much of France concern over gender relations was peripheral during the war. While people commented on the various new roles taken on by women, they understood these modifications in traditional terms. Some new developments were believed to be temporary adjustments that would not continue long past the ceasefire, while others were downplayed as applying only to a small minority of women or just to Paris. In most cases, pre-war ideas of gender relations maintained their importance throughout the war, offering a framework within which the changes wrought by the war were understood.
The division between the home front and the front line has often been posited as a source of hostility between men and women. The men risking their lives could only compare the experiences of the home front unfavourably. As Mary Louise Roberts writes, “When soldiers returned from the front they saw their female kin, friends, and lovers assuming traditionally male jobs and family responsibilities … The war generation of men found themselves buried alive in the trenches of death, at the same time that they witnessed the women in their lives enjoying unprecedented economic opportunities.” One infantryman wrote that “l’émancipation de la femme et la dislocation des familles font des étapes aussi rapides que l’avancée des Boches en territoire italien.”1 Soldiers were also upset at civilians’ lack of awareness of their suffering, and the perceived gaiety and luxury of the home front. Roberts and Stéphane Audoin‑Rouzeau both produce ample examples of this. In the immensely popular novel Le Feu, which was lauded for its authenticity, Henri Barbusse famously wrote that the distinction between the front and the rear was “a difference far deeper than that of nations and with deeper trenches.” Not only was the rest of the population utterly incapable of comprehending the horror of trench warfare, which only those who had experienced it could truly understand, but also the propaganda and censorship that was considered necessary to maintain morale threw up another barrier. There are many examples of this divide, such as in this article that appeared in the trench newspaper, Le Crapouillot:
There was an announcement: “Views of the War”. Most of the civilians got up and left, grumbling, “The war again, what a bore”. While (on the screen) the soldiers mastered the dreadful “pig’s snout”, the audience doubled up with laughter. Perhaps they would not have found the exercise so funny if they had but once had to do it in feverish haste with bells ringing in the trenches to announce the arrival of the dreadful clouds of death … The final film unrolled before us: “The battle‑fields of the Marne”. The public seemed disappointed that such a terrible battle had left so little trace, and beside me a little old lady, bored with such tranquil scenery, declared with a gentle little pout: “That’s boring: there aren’t even any bodies.”
The question is whether the resentment and discontent felt by the soldiers led to any uncertainty in gender relations. Often the tirades launched against the home front were aimed more at the men perceived to be shirking, rather than women. In Un Tel de l’armée française, written in 1918, the soldier Franconi lambasted
Stratèges incohérents penchés sur des cartes dérisoires, généraux de plume et combien peu d’épée, maniant à la fois les sophismes les plus contradictoires et les armées, ancien insurgé déguisé en bon berger, tels furent nos amateurs de la guerre. Ils la firent dans les salles de rédaction, les salons académiques et les brasseries littéraires, alors que toute la jeunesse de France agonisait sur les nouveaux champs catalaniques.
The perceived luxury of the conditions in the rear, whether enjoyed by women or men was a recurring theme in the complaints made by frontline soldiers, and particularly when exploitation of the troops was involved. This song from the front attacks rich shirkers, emphasising the contrast with life in the trenches.
Pendant que les heureux, les riches et les grands
reposent dans la soie et dans les fines toiles,
nous autres les parias, nous autres les errants,
ici dans les tranchées l’on se bat et l’on crève.
In a letter Barbusse sent to La Dépêche he highlighted the worst abuses that his book had attacked: “la guerre suscite bien des égoïsmes et des cupidités. J’ai marqué quelques-uns de ces vices; j’ai parlé des embusqués, des profiteurs et des mercantis sans pitié.” The absence of any explicitly female role amongst the groups of abusers is striking. In his account of the Third Colonial Division during the war, General Puypéroux made little mention of women; when his troops rested behind the lines he mentioned only their costly living conditions, criticising those who sought to exploit the troops for material benefit. “[N]os braves troupiers s’extasient sur le bon marché de certains denrées, eux qui sont si exploités par les mercantis du front. Le résultat de cet étonnement ne se fait pas attendre longtemps… les prix augmentent de suite.” Cazals and Rousseau argue that the trench journals directed their vitriol primarily at “des embusqués, des profiteurs, des journalistes bourreurs de crânes.” In the songs of the trench journals the actions of the poilus were contrasted with those of the bourgeoisie rather than with women.
Noté en passant ce déplorable état d’esprit de l’artillerie, le plus mauvais sans doute de toutes les armées françaises, et d’autant plus extraordinaires que ces gens-là, surtout dans l’artillerie lourde, ont plutôt été des favorisés dans cette longue guerre.
Other differences also existed amongst those at the front, based upon class and status. In Nancy in 1919 there was a meeting of working-class mutilés who had left the Association des Mutilés et Anciens Combattants.2 Marchand, the secretary of the new group, declared “En revenant des tranchées où quoiqu’on ait dit, il n’existait aucune fraternité entre combattants bourgeois et ouvriers, les patrons ont repris leur mentalité d’avant-guerre et traitent les ouvriers en conséquence.” Indeed, several veterans’ organisations, including the Union de Poilus did not admit officers into their membership.
Some critiques of the home front did attack women, but rarely as a primary target. This article by Captain Léon Hudelle, entitled Le Poilu, was published in several trench newspapers, as well as some left wing civilian papers.
Le poilu, ce n’est pas un secrétaire d’Etat-Major et d’Intendance, ni un automobiliste, mais c’est celui que tous les automobilistes et les secrétaires d’Etat-Major et d’Intendance regardent avec dédain, avec morgue, avec insolence, presque avec mépris.
Le Poilu, c’est celui que tout le monde admire, mais dont on s’écarte lorsqu’on le voit monter dans un train, rentrer dans un café, un restaurant, dans un magasin, de peur que ses brodequins mâchent les bottines, que ses effets maculent les vestons à la dernière mode, que ses gestes effleurent les robes cloches, que ses paroles sentent trop cru …
The first part is aimed at male shirkers, the second part more at women, but it is significant that the criticism hardly accuses them of losing their femininity; in fact the reverse is implied. Louis Barthas made a similar argument in his journal:
On aurait bien voulu s’arrêter cantonner dans ces petites villes si tentantes avec leurs boutiques flambant neuf, leurs bistrots accueillants, leurs femmes avenantes et rieuses qui nous envoyaient des signes amicaux au passage mais ces lieux étaient trop beaux pour nous, on les réservait aux embusqués de toute catégorie qui pullulaient à l’arrière.
When women were criticised it was often for the fault, traditionally seen as female, of living above their station. A song recorded in the journal of Antoine Bosc similarly accuses women of not taking the war seriously: they “rigolent des communiqués” while living the high life, but once again their role is entirely traditional. The song is made more powerful by focusing its attack on the wives of the poilus, often exempted from more general criticism of the rear.
“Les petites femmes des mobilisés”
Les poilus s’en vont, le cafard au front,
Trottinant parmi les cervelles.
A l’arrière l’on voit la gaieté, la joie,
Et la guerre, nul ne s’en aperçoit.
Concerts, cinéma, casino,
Tout pleins de badaud
Qui ont la vie belle.
Nos femmes s’offrent du plaisir,
Elles peuvent s’offrir ce qui leur fait plaisir,
Elles rigolent des communiqués,
Les petites femmes des mobilisés.
This criticism echoes the criticism earlier in Le Crapouillot by emphasising not just the easy living standards of those at home compared to the trenches, but the added indecency of the privileged finding the war a source of amusement. The home front was expected to be suffering, and those who were not obviously doing so were harshly criticised.
This theme is also illustrated by Barthas’s criticism of the population of Châlons-sur-Marne.
Grande animation dans la rues, les embusqués avaient mis leurs képis les plus nerfs, leurs galons, leurs chevrons les plus étincelants. La plupart avaient à leur bras leur femme ou une femme avec des chapeaux fleuris, des corsages, des robes aux couleurs chatoyantes; tout ce beau monde se promenait, souriait, jasait, flirtait dans une inconscience, une quiétude parfaites.
The majority of the actions described are not objectionable in themselves, it is only in the context of the war that fine clothes and shallow pleasures are unacceptable. The last words are the most significant, what is most damning is the lack of awareness of the ordeal of the troops. The criticism of women in the trench journal La Marmite in 1916 followed a similar line, castigating the shallowness of women.
La femme a commis certaines fautes de légèreté, d’insouciance, et les jupes de 1916 ont un peu trop l’air de se ficher de tout. La femme n’a pas toujours élevé son âme jusqu’à la compréhension de l’héroisme et j’en ai connu en permission qui, avec un angélique sourire à gifler, me disaient en parlant des combats de nuit: ‘Comme ce doit être amusant!’ D’autres ne pouvaient souffrir le mot de ‘poilu’ et se pâmaient devant les mentons imberbes des Anglais. Tout cela déconcerte le soldat et il en conçoit une certaine pitié méprisante pour la femme. Les exceptions sont nombreuses, je me hâte de le dire; mais elles ne font que confirmer la règle.
The contemptuous pity the soldiers are claimed to feel for women appears to be based on women having fallen victim to the traditional vices of their sex rather than any challenging of gender roles.
This sort of criticism recurred repeatedly. When La Dépêche criticised the spring fashions in 1916 it argued that these fashions were not new and normally would be cause for amusement. Only because of the terrible circumstances that France found itself in did they become shocking and unacceptable. The contrôleur of the agricultural workforce in Anjou observed in November 1918, and again in 1919, that the workers leaving the countryside towards the town, in particular the women, were “attirée par un vie plus facile, la toilette et les plaisirs variés.” For Margaret Darrow, the example of feminine fashions is an instance where women’s activities could be read in differing ways. Was their wearing new and elegant clothing a signal of indifference to the sufferings of the front, or was it a proud statement that the natural grace of French women should not be destroyed? Certainly André Kahn responded positively at the front to news that Paris was returning to normal in December 1914. “C’est un honneur pour Poincaré et pour ses hommes du gouvernement que cette résurrection de la France en pleine guerre. Cela doit bigrement étonner les Boches …”
When the home front was portrayed as wholly feminine, women were often displayed in a sympathetic, traditional role. An article by Jean Longuet in Le Populaire depicted “Les couloirs du Palais de Justice retentissent sans cesse des cris déchirants, des hurlements, des malheureuses femmes dont les maris, les fils, ou les pères viennent d’être frappés de condamnations féroces.” It is not just wives, but mothers and daughters who are crying and wailing, but it appears that there are no fathers or sons there displaying their anger and grief.
La Bataille regularly castigated male profiteers in its cartoons but rarely women. Even in a rare example depicting a woman profiting from the war, a female also supplies justice: a proletarian woman strangling a bourgeois lady with her own expensive necklace. Le Populaire followed a similar line to the Bataille with a cartoon from April 1918 depicting a fat, middle-aged male employer criticising a young female worker for wanting to leave work at six in the evening. Georges Villard in the trench journal Plus que Toral in 1916 wrote a song that included the phrase:
“En pensant à la femme, en pensant aux enfants,
Qui vivent angoissés dans la maison muette,”
Much more gender specific was the issue of sexual infidelity. Mary Roberts argues convincingly that men at the front lived in fear of being betrayed by their spouses. Roberts goes further though by arguing that “Sexual infidelity signified the wartime reversal of gender roles because in this case, women were free and promiscuous, while men were “confined” to the army and trenches … female infidelity symbolized the isolation, alienation, and emasculation of the male combattant.” As has already been noted, there can be no contesting that men fighting in the trenches felt alienated and isolated from the rest of society. However the argument that the war and female infidelity resulted in a feeling of emasculation among soldiers is more problematic.
War has traditionally been portrayed as embodying the epitome of masculinity, and hence virility, and at the start of the Great War, it proved no exception. From August 1914 into 1915 the war was portrayed in Britain as reinvigorating a degenerate and effeminate pre-war culture amongst men, with women similarly refeminized. The French reaction was similar. In August 1914, René Bazin wrote in his cahiers intimes,
”J’entends le dialogue des officiers allemands rentrant dans leur positions d’où on les avait lancés en avant:
- - Vous n’avez pu tenir?
- - Non, un élan terrible, des troupes comme celles de Napoléon, des armées mieux maniées que les nôtres…
- - Et le désordre?
- - Pas
- - Et l’insubordination?
- - Finie
- - Et l’affaiblissement de la race?
- - Mensonge!
- - La France agonisante?
- - Allez-y voir!”
The war had given the lie to the idea of l’affaiblissement de la race. The reference to the Napoleonic army is also significant; these soldiers are just as glorious (and by implication the war is too).
The argument is that while previous wars had allowed men to display heroism through acts of personal bravery and virile attacks, the Great War was different: men were powerless against the shells and machine guns, and heroism was achieved purely through survival. Jean Norton Cru gives a striking account of this.
Entre deux groupements plus petits, comme entre deux individus, il n’y a plus de lutte, sauf dans des cas très exceptionnels: presque toujours l’un des deux frappe, l’autre ne peut que courber le dos et recevoir les coups.
The French infantry could only take cover against the German trench artillery, which was impotent against the French 75s, which were in turn powerless to retaliate against German heavy artillery.
Les soldats sont bourreaux ou victimes, chasseurs ou proie, et dans l’infanterie nous avons l’impression que nous jouâmes la plupart du temps le rôle de victime, de proie, de cible. Ce rôle ne tend guère à faire goûter la gloire des combats.
While the scale of this suffering was undoubtedly unprecedented in the First World War, the experience of war bringing death without the possibility of heroism was not entirely new. Dr Samuel Johnson had given a strikingly similar description of the experience of soldiers fighting nearly 150 years earlier.
The life of a modern soldier is ill-represented by heroick fiction. War has means of destruction more formidable than the cannon and the sword. Of the thousands and ten thousands, that perished in our late contests with France and Spain, a very small part ever felt the stroke of an enemy; the rest languished in tents and ships, amidst damps and putrefaction; pale, torpid, spiritless, and helpless; gasping and groaning, unpitied among men, made obdurate by long continuance of hopeless misery, and whelmed in pits, or heaved into the ocean, without notice and without rememberance. By incommodious encampments and unwholesome stations, where courage is useless, and enterprise unpracticable, fleets are silently dispeopled, and armies sluggishly melted away.
There are certainly plenty of examples of veterans lambasting the dehumanising quality of trench warfare. In Le Feu, the soldier Bertrand declares: “Honte à la gloire militaire, honte aux armées, honte au métier de soldat, qui change les hommes tour à tour en stupides victimes et en ignobles bourreaux.” Jacques Rivière, writing in 1921, asked: “Je demande à tous les combattants … s’ils n’ont pas la sensation d’avoir été amputés de toute une partie de leur sensibilité. Nous reviens mais nous ne sommes plus les mêmes.” According to Antoine Prost: “Le soldat est un homme que la guerre déshumanise.”
Yet all these references suggest not a loss of virility but of basic humanity. Furthermore, surviving the war seems to have been regarded as passing a test, as proof of worth. Antoine Prost’s major study of war veterans suggests that the men did not come out of the war feeling emasculated or needing to prove themselves. On the contrary, they felt that, terrible though their experiences had been, they had at least gained pride in the fact that they had not been found wanting. The rhetoric of anciens combattants throughout the interwar period is filled with examples where they assert that they had proven themselves worthy. Writers of such different persuasions as Montherlant and Drieu la Rochelle both expressed nostalgia for the “virile fraternity” of the front.
The post-war activity of veterans also contradicts the idea that they were desperate to forget the war entirely. The vast majority joined organisations of Anciens Combattants, for social activities as well as for campaigning. Holt’s study of sporting activity in France shows that the war resulted in an acceleration in the number of people participating in shooting. In 1870 there were 300,000 registered participants, which grew to 600,000 by 1914. In the 1920s there were more than one million participants, and by 1930 there were 1.8 million. After 1930, numbers levelled off. It is reasonable to assume that a significant proportion of these newcomers were veterans, who were not put off by any military associations.
The virility of the soldiers was also constantly eulogised by non-combatants. Stéphane Audoin‑Rouzeau’s study of children’s literature featured several “… histoires développent le thème du héros qui, par sa modestie et son héroisme, conquiert le coeur d’une femme logiquement inaccessible.” There was a series of postcards during the war entitled “Graine de Poilu”. One depicted an enfant bursting out of his shell, armed with rifle and bayonet and asking “Y en a-t-il encore des Boches?” Not all French children would be heroic, just the sons of the soldiers.
Furthermore, as Audoin-Rouzeau notes, the representation of combatants stressed defence of their soil, defence of their country, but most strongly of all defence of their women and children.
In his book Le Haras humain, published in 1918, C. Binet-Sanglé described his wish to regenerate the race. His ideal masculine type seemed closely based upon the popular image of the soldier: “hommes musclés, poilus, barbus, à gros testicules, à scrotum ferme, à sperme épais”. Women were expected to have a traditional feminine form, with broad hips and large breasts. Even those non-combatants who witnessed the suffering first hand were positive about the link between virility and frontline combat. The influential psychologist Dr Dide, who worked for some time at the front, wrote in 1916:
L’acte génital tend à assurer la perpétuation de la race et le guerrier, dans sa force abstraite, se surpasse, animé qu’il est des forces de la destinée: Il n’est plus un homme, il symbolise le droit au soleil d’un peuple, le besoin de vie d’un nation, il devient synthèse de la patrie elle-même qui veut persévérer dans son être.
Hélène Dequidt has noted that those men serving in frontline medical services found their masculinity called into question both by the soldiers and by themselves, and many sought to be transferred to frontline combat. Similarly, those attending to the wounded at the rear wished they were in the frontline of the battle against death. An indication of the views of the wounded themselves was given by M. Simon, chairing a meeting organised by the Journal des mutilés to form a federation of all associations of anciens combattants in November 1917. “Je salue ensuite nos chers camarades restés au front et qui continuent la tâche rude et sublime de protéger les foyers que nous ne pouvons plus défendre.” Not only did Simon laud the sublime nature of the task; he also placed it squarely within the traditional setting of the man defending the home.
If it is difficult to say that the war resulted directly in the symbolic emasculation of the male combatant, the argument that this was achieved indirectly – through female sexual infidelity – is stronger. There is no doubt that there was an increase in sex outside of marriage; illegitimate births rose significantly.3 For those whose wives and fiancées left them, this would clearly have been distressing, as would be the situation for those who stayed with their partners, knowing or suspecting that they had been unfaithful, perhaps unsure about the paternity of a child. In 1918, an article in Le Courier du Centre began: “Un drame passionnel – ils sont déjà nombreux depuis la guerre…” It described how a soldier, Yves Beauffenie, killed Jean Pestis, who up until recently had been in the same regiment, because Pestis was having an affair with his wife.
One of the letters from the front recorded by Jean Nicot identified three types of people who aroused resentment at the front.
… des industriels que la guerre enrichit, ensuite ce sont les vieillards, anciens combattants de 1870 qui n’ont personne au front et parlent patriotisme, enfin, en troisième lieu, ce sont des femmes que je ne veux pas qualifier et dont les maris sont au front et qui ont près d’elles des amants recrutés parmi les embusqués ou des jeunes gens imberbes.
These unfaithful wives were implicitly contrasted with women mentioned earlier in the letter – “Des femmes en grand deuil qu’on croise dans la rue pleurent en regardant les poilus du front”.
Female infidelity was not typically portrayed as a sign of assertiveness. In La joie by Maurice Genevoix, the novel Daniel Sherman analyses, Genevoix describes the feelings of Pierre, the hero, about the embusqué who had an affair with his girlfriend: “Pendant que je me battais, pendant que je grelottais en Bochie, ce monsieur s’installait chez vous, n’en bougeait plus”. Here the entire agency in the affair is assigned to the man, who wouldn’t leave, without any impression of a wartime reversal of gender roles. This is backed up by some of the trench journals studied by Audoin‑Rouzeau. In this extract it is assumed that if women are wearing jewellery then it must have been men who were responsible for buying it. In addition, the changes it notes are all of appearance, not of character:
At last he reaches the village … He meets some country women. Oh, but how they’ve changed! No more clogs, no more apron: smart polished boots, jewellery! As the poilu says to himself: “Are there still men at the rear, to pay for all these fine things”.
Even more explicit, the following extract seems to absolve women from all responsibility for initiating infidelity, putting all the blame on the men at the rear.
How cowardly they seem to me, those men who are comfortably settled at the rear and who try to profit from the current difficult circumstances by disturbing the noble and dignified solitude of women deprived of their loved ones and their support. I cannot think of any more base or vile crime than that! While others, out there are getting shot or lie bleeding in a hospital bed, those men whose privileged position should impose on them at least a polite reserve roam like wolves round homes where the head of the household is absent. Yes there are roaming wolves.
The difficulties involved in attributing division between the sexes to combat are also highlighted by a remark made by a railwayman: “women no longer want to obey … we talk about marriage between men and women as people talk of peace between the Boches and the French.” A man in a reserved occupation made this comment, not someone who had fought on the front line; his use of “we” suggests he knew others who shared his thinking.
Civilian testimony was more often inclined to assign blame to women. Emile Rethault wrote in 1970 on the consequences of the war in the commune where he would become mayor. He believed that the departure of the vast majority of adult males meant that “L’autorité interne tomba en quenouille…” Similarly, on the subject of extra-marital affairs, Gilles Depérière denounced “les mauvais exemples, trop humains, donnés par quelques mauvais esprits, surtout féminins.” Dr Vernédal in his doctoral thesis claimed that prostitution had gained many more adherents, “avides surtout de plaisir, mais plus souvent de luxe et de gain”, in the difficult financial times during the war.
One pitfall it is crucial to avoid is conflating the lifestyles of Parisian women and the responses that these lifestyles prompted with those of French women as a whole. Maurice Donnay noted this phenomenon during the war, claiming that foreigners had been prone to judge France by Paris, French women by Parisian women and Parisian women by “certains Parisiennes agitées”. Certain criticisms by the French of the moral conduct of women were Paris-specific. Louis Barthas, for instance, had direct criticism to make of some women in Paris:
Par exemple, je fus choqué de la tenue de certaines Parisiennes. Appartenaient-elles au grande monde? au monde? au demi-monde? Je l’ignorais. Décolletées, ‘démolletées’, bras nus, épaules nues, elles semblaient avoir le seul souci de plaire, de se faire remarquer, attirer le regard, aiguiser les désirs des passants et cela au moment où l’angoisse étreignait tant de cœurs, où tant d’yeux pleuraient, tant de sang coulait, où se jouait le destin de la France, de l’Europe… et même du monde!
The postcard mentioned earlier of a woman lifting up her skirt to the admiring glances of foreign soldiers was specifically described as an “Attraction Parisiennes”. Rosny and Mille were both quoted in the last chapter making a distinction between the relations of women with Americans in the big cities and those elsewhere. The presumed sexual behaviour of Parisian women also informed André Kahn’s dismissal of the strikes of 1917, as well as ideas of female irrationality.
Quant aux manifestations hystériques des ouvrières parisiennes, encore une fois, je les considère sans le moindre importance. Elles s’agitent parce que les printemps les énerve et qu’elles ne trouvent pas assez d’hommes pour le satisfaire.
The extent to which the mores of the capital, and particularly the Parisian elite, were seen to differ from those of people living in provincial cities is highlighted by an article in La Libre Parole in September 1914 on the changes wrought by the governmental move to Bordeaux.
Il parait que l’on ne s’ennuie pas à Bordeaux pendant que nos soldats défendent la France sur les champs de bataille, à deux pas de nous, au prix de leur sang. Tandis que la population parisienne, épurée de ses politiciens arrivistes et de ses jouisseurs névrosés, conserve dans sa calme vaillance une bonne humeur pleine de dignité, tous nos histrions, nos bateleurs, nos amuseurs et amuseuses, tous les habitués des restaurants de nuit se sont transportés à Bordeaux, où ils ont trouvé dans les coulisses gouvernmentales une clientèle toute disposée à se mettre à l’unisson. On y joue la comédie, on y sable le champagne en aimable compagnie, on cherche à se remonter artificiellement un moral qui avait été un peu ébranlé lors de l’exode… Souhaitons que les colonisateurs actuel de Bordeaux fassent enfin un retour sur eux-mêmes et songent un peu plus aux épreuves que traverse la patrie.
It wasn’t just that the lifestyle of the Parisians was distinguished from that of the Bordelais; it was that they were enjoying it while the men from the region, “nos soldats”, were sacrificing their blood for France.
It must also be remembered that the relationship between front and rear was far from being wholly antagonistic. There were close relationships between soldiers and their families and friends that were maintained by letters and postcards. Awareness of the suffering of their loved ones must have reminded soldiers that they were not unique. André Kahn demonstrated this in writing to his wife “Tu n’es pas la seule à en souffrir. J’imagine que toutes les femmes de France en sont au même point, ne rêvent qu’au même avenir…” When Paris was bombed, soldiers did not celebrate the jolt to the profiteers of the home front, but criticised the cruelty of the Germans as murderers of the innocent. “Que nous nous battions entre hommes, je trouve ce moyen assez légal. Mais d’aller tuer les vieillards, les femmes et les enfants, c’est ignoble.”
There were also the nurses and the marraines. In both these examples soldiers would have close contact with women in positive, traditional roles. The marraines, or godmothers, were women who offered both moral support and presents to soldiers at the front, particularly those without families of their own. The role of marraines offered women the chance to give support to men at the front in a traditionally feminine role and newspapers regularly encouraged more women to contribute.
La Bataille urged, “Encore toujours plus de marraines! Les vieilles femmes, les petites filles! Toutes, pour nous chers camarades solitaires et tristes, à qui nous devons un peu de joie et d’affection.”
Nearly 3 million soldiers were hospitalised during the war, more than half of them at least twice, in addition to those afflicted by illness. Female nurses would have attended all of these. One of the extracts from Gaspard by René Benjamin that was printed in La Petite Gironde painted a glowing picture of nurses. The same newspaper also printed an account by a soldier from the region, Leo Larguier, of being hospitalised for his wounds.
Un arrêt, et des quatre coins de la gare sur le quai désert, s’essaiment les dames et les demoiselles de la Croix-Rouge. Je ne rêve pas. Ce sont bien des anges qui apportent des corbeilles… Des souliers de velours sur les marchepieds, des mains fines, des sourires frais et des yeux qui rient, des voix de miracle et des blancheurs de paradis; tout cela pour de vieux poilus brisés qui n’ont fait que leur devoir. C’est trop, nous sommes confus, et nul n’aurait osé imaginer cet accueil, et nous nous estimons payés au centuple [...] Sur le fond sanglant de la guerre, pour les bons poilus meurtris, elles se détachent en voiles blancs et elles demeurent de petites figures françaises, avec leur grâce légère et leur goût charmant.
Larguier doesn’t just appreciate the care given by the nurses of the Red Cross; it is their very femininity that is stressed – their soft hands, their fresh smiles, their grace and charm – as salving the pain of the bloody war.
Jean Hugo spoke of “une très jeune infirmière d’un beauté céleste, accompagnée par son grand-père, un vieux gentilhomme à moustache blanche: elle nous servit gravement du café, en silence et sans sourire.” General Puypéroux paid homage to a nurse who worked on the front with his division. He claimed she would be remembered by all the soldiers as “la personnification de la bravoure féminine et du dévouement désintéressé.” Pierre Mille described how nurses were initially reluctant to treat German prisoners who had been trying to kill their husbands or brothers, but when it was pointed out to them what might happen if German nurses took the same approach they realised what was necessary. “Elles se sont dévouées corps et âme et bientôt, d’ailleurs, l’instinct de maternité et de pitié qui est au cœur de toutes les femmes a triomphé chez elles de tout autre sentiment.” While Mille was writing to praise the nurses, his description implicitly stresses the dominance of sentimentality and instinct above reason and rationality in the actions of these women.
While those who participated in nursing were widely praised, their role was not considered to warrant parity with the men at the front in terms of privileges. A circular from the war ministry stated “le bénéfice de la franchise postale militaire s’applique exclusivement aux militaires et marins mobilisés, et qu’en aucun cas, le personnel féminin employé dans les services et établissements militaires ne peut bénéficier de cette franchise.” Furthermore, the nurses’ service was valued so highly because of the reflected glory from those they treated. This is illustrated by the monument to war-time nurses at Berck-Plage, which features not a nurse but a wounded soldier on a stretcher.
There was also leave from the front. While some soldiers found civil society insensitive, others reacted positively. The memoirs of Marius Hourtal contain a long passage describing a leave in which the entire trip is recounted positively, except for a difficult meeting with the mother of a war victim. He gave several examples of consideration being shown towards him and his companions. They were granted free admission to various Parisian attractions as they were recognised as permissionaires who had come straight from the front line. On the trip to his village his train was full and he began to fall asleep in the corridor until an old lady gently took his arm and insisted on giving him her seat, despite his attempts to refuse. At the same time his comrades were lying down all along the corridor, but the conductor didn’t wake them, understanding they were exhausted. Later another conductor stamped their passes so as to give them an extra day’s leave. Finally he arrived at his home, where he was warmly welcomed by his family: “Puis ce fut la tournée des voisins et amis du village, car tout le monde voulait me voir.” A soldier told the Petit Parisien that he didn’t need to read patriotic exhortations from the rear, but that leave was welcome. “Let them double our wine, brandy, and also leaves and not brainwash us with that claptrap”. For Octave Clauson, his enjoyment was in seeing his family, and the suspension of leave was a major blow. But there was a downside, with people saying to him on every leave “Tu es déjà là!” and also the sense that life back home was moving on without him.
It appears that there was a close correlation between the morale of the troops and their reaction to civil society. In the winter of 1915-1916, the prefect of Anjou reported to the Interior Ministry that
Les visites des permissionnaires continuent à produire dans l’ensemble leur action bienfaisante. La très grande majorité [...] fait impression par leur bonne santé physique et morale, leur bonne humeur, leur courage, leur résolution, leur assurance dans le succès final qu’ils annoncent généralement comme prochain.
At roughly the same time, the sub-prefect of Cholet believed of soldiers that “leur confiance dans son issue [the end of the war] gagnent les plus indécis, les plus enclins au découragement.”
However in 1917 the situation was reversed; the soldiers were depressed and made no secret of it. According to the prefect “Ceux-ci apportent depuis quelque temps du front un état d’esprit extrêmement fâcheux et exercent autour d’eux une influence délétère. Les effets de cette influence se font ressentir partout et ont beaucoup contribué à la dépression qui s’est produite dans toute les milieux…” It may be no coincidence that Hourtal’s account was of a leave taken in 1916, while Clauson only arrived at the front in 1917.
The post-war rhetoric of the veterans’ organisations testifies to a more subtle distinction than simply a dichotomy of frontline service and home front fecklessness. Instead, distinctions were made in terms of perceived sacrifice. Thus, when the order of a cortege was decided at a meeting of the Union des Poilus in Toulon in 1919, it was headed by mutilés, then the war widows, and finally the poilus. Several organisations, such as the mutual society La Gallieni, had memberships made up of war widows and war wounded. Nor were the interests of widows considered to be necessarily less important. A meeting in Rennes in 1919, made up in equal parts of war wounded and widowed women, made as its first two complaints “Contre le licenciement des veuves de guerre employées dans l’Administrations publique et les Arsenaux.” and “Contre le non-emploi des veuves de guerre qui devenues chefs de famille du fait de la mort de leurs maris, ont acquis une priorité sacrée dans le droit au travail”. Only then did it move on to various complaints about the treatment of mutilés. When, in 1924, M. Felix of the Fédération ouvrière et paysanne des Mutilés organised a demonstration for the 16th of November, he disassociated it from the 11th of November celebrations because he believed that they were being run by the Bloc National which “n’étaient pas qualifiés”. However, he did believe that an association of war widows was sufficiently qualified to organise the demonstration with him. When there was a national congress of mutilés in 1919, it attempted to agree a “programme minimum des combattants”. This mainly consisted of ensuring the employment of the wounded. It was agreed that “tout ce qui a été des mutilés s’appliquera également aux veuves [de guerre]”. Similarly, at a meeting of veterans in the Hautes-Pyrénées the president, M. Maumus, complained of certain industries “dont les meilleures places ont été pris par ceux qui sont restés à l’arrière.” His next complaint was about the dismissal of war widows from their place of employment. The conference of the Union nationale des mutilés in April 1919 called for “le droit de vote et l’éligibilité à tous les degrés pour les veuves de guerre”.
Thus, when these groups campaigned, their opposition was not to women taking the jobs of men, but more specifically to those who had not suffered during the conflict denying employment to those who had sacrificed a limb or a husband. At a meeting of La Gallieni in May 1919, a M. Richard criticised the 15th arrondissement, which “a renvoyé tout récemment 4 démobilisés qu’elle occupait, et conserve dans ses bureaux une vingtaine de jeunes filles qui n’ont rien perdu à la guerre, sont dans leur familles, et ne travaillent, selon leurs dires, que pour la voilette et leurs gants”. Richard argued it was necessary to signal such abuses to the public.4 During a meeting of the Association amicale des mutilés, reformés et anciens combattants in 1920 a man called Davillers neatly encapsulated several of the veteran movement’s grievances in demanding that “… dans les diverses administrations, les emplois sédentaires soient réservés aux veuves, aux mutilés, et non à femmes paraissent de mœurs légères, comme il s’en trouve au Ministère des pensions.” Morality and sacrifice were linked as inextricably as immorality and exploitation of the war.
Indeed, the resentment of those who had fought for France may have been fuelled further by their treatment after the war. A poster entitled “Ceux qu’on Oublie!” drew attention to the adulation heaped on the veterans in 1918, and their subsequent neglect.
1918 “C’est la Gloire! la Victoire! l’Enthousiasme des foules! l’Elan vers les Héros!… Ce sont des promesses, l’assurance qu’elles seront tenues et que pas un seul de tous nos droits ne sera méconnu…
1922 “Quatre ans d’indifférence! les couronnes de lauriers devenues couronnes d’épines,”
Further down, the poster asserted that “Malgré la bonne volonté du Ministre des Pensions, l’Administration continue sa lutte contre les Mutilés…”
In 1919, the 14th of July celebrations in Bordeaux saw the places reserved for the victims of the war occupied by a mass of people, leaving the war victims unable to join in the celebrations. The Association des mutilés et anciens combattants de Montpellier demanded that “les mutilés ne soient pas relégués à la fin du cortège comme les années précedentes, car ils estiment que leur place est en tête de cortège” for the 11th of November procession in 1919. Again though, as Monique Luirard notes, the anger of the former soldiers in the post-war period was largely directed at the male exploiters of the war: politicians, profiteers and shirkers.
Christine Bard argues that the war halted the momentum of feminist campaigning: “Nombre de changements dans la vie des femmes trop hâtivement attribués à la guerre se sont en réalité produits à la Belle Epoque. Dans la littérature apparaît, alors une “femme nouvelle”, libre, indépendante, revolt, en un mot, féministe.” Bard is correct to say that the “femme nouvelle” was a popular image in the Belle Epoque, as indeed it has been in several contemporary epochs. However, as Roberts has shown with her study of the post-war “femme moderne”, the existence of liberated, independent, feminist women was not in itself sufficient to create significant changes in the position of women as a whole. While there was some pre‑war emancipation – in 1907 married women gained the right to their own earnings, and in 1912 the right to bring paternity suits – this was little more impressive than post-war reforms. For Michelle Perrot though, it was the campaign for the right to vote that was derailed by the war.
In 1914 Le Journal ran a referendum on women’s suffrage and reported five hundred thousand votes in favor. The political Left, which previously held itself aloof, was converted to women’s suffrage: in 1914 Jean Jaurès openly favoured giving women the vote. But the war halted this momentum. The procrastination of the 1920s and 1930s and the Senate’s long resistance to proposals for female suffrage illustrate how women’s cause regressed during the interwar period.
This argument is not wholly convincing. Le Journal’s poll is hardly conclusive, and it carried out a similar vote, with similar results, after the war. The political Left often made statements in favour of female suffrage, without ever considering it an issue important enough to warrant doing much about. It is also difficult to see why the procrastination and the obstructionism of the Senate would have been any different prior to the war, those being the primary qualities the Senate brought to the Third Republic throughout its existence. Furthermore, it doesn’t chime with the international experience, where women were very rarely enfranchised without some tumultuous occurrence, such as a war or a switch to a different form of government. Perrot and Roberts also differ on the pre-war period, Perrot asserting, “The turn of the century was a time of prodigious invention and novelty which raised significant questions about the social organization of gender, but this questioning was soon silenced by the war.” Compare this to Roberts’ “They [legislators, novelists, social reformers, journalists, and feminists of all political stripes] demonstrated a strong urge to return to a pre-war era of security, a world without violent change.” In this quote Roberts is clearly talking of a perception of a lack of change, but there is still a considerable gap between the two.5 The most likely explanation is that both are overestimating the impact of the war. As Thérèse Pottecher concluded in La Grande Revue in 1910, “feminism has gained sincere ground in public opinion. Yet this success is little in the face of the conquests that still need to be made over the spirit of our nation.”
Perrot’s argument on turbulent gender relations before the war is supported by Margaret Darrow’s claims that
According to a host of commentators at the end of the nineteenth century, the French family, society and nation were all in desperate straits because women were refusing to be feminine and men were not being sufficiently masculine. ‘Female emancipation’ was the leading culprit.
Almost all the fears that appeared in the post-war period over the damage done to society by women not acting in accordance with the roles nature had prescribed them are echoed before 1914. In his influential book, The Sexual Question, August Forel argued, “The modern tendency of women to become pleasure-seekers and to take a dislike to maternity leads to the degeneration of society. This is a grave social evil.” In 1913, Paul Leroy-Beaulieu wrote, “The masculinisation of women is, from all points of view, one of the grave dangers facing contemporary civilisation.” In the same year, Theodore Joran received a prize from the Académie des sciences morales et politiques for his work “Le Suffrage des Femmes”, in which he asserted that the feminist argument “is only a tissue of errors, ravings and sophisms.” According to H. Thulié, writing in 1898, degenerate prostitutes were destined “to be delivered over to deplorable excesses, to undergo the most abominable miseries, and to fall into the most shameful and abasing degradations whose torments are marked by the perpetual pursuit of new pleasures and the incessant satisfaction of their erotic frenzy.” Robert Nye points out that Thulié, like most observers, “saw worsening degeneracy affecting women by miring them ever more deeply in ‘female’ crimes like prostitution.”
Annie Stora-Lamarre has argued that the immediate pre-war period saw the peak of a panic about pornography and erotic literature.
Elle (the woman) se trouve à l’intersection de la complaisance et de la violence qui est une constante de l’érotisme morbide et sanglant des années 1900. Sur le thème des ravages de la passion, la femme sème le plaisir, la luxure et la mort.
Alain Corbin agrees, arguing that the activities of “those who were engaged in the struggle against pornography and licentiousness intensified.” Supervision of prostitution became more severe. Pornography prosecutions peaked between 1910 and 1914, as pornography was believed to be feminising the nation while war loomed. The years leading up to the war also saw several novels extolling the positive, regenerative effects of war on a society that clearly needed such regeneration.
The ill-effects on the health of women working in the professions had been noted as early as 1900 by the doctor Vaucaire, who observed of these young women that “Les petits prodiges ont les yeux cernés, les lèvres blanches; ils sont pâles, chétifs; leurs mouvements deviennent langoureux, les muscles n’ont plus aucune souplesse, les poumons ne savent pas respirer, l’estomac ne digère pas, la peau fonctionne mal.”
The pre-war debate on hysteria was also framed in the context of social dislocation. In 1883, Henri Legrand de Saulle published Les Hystériques, which argued that, due to hereditary and social factors, women of the lower classes were greatly affected by this illness. Upper-class women and, to a lesser extent, those from the middle classes were also affected. Those suffering from hysteria saw their character deteriorate; they became “égoïstes, capricieuses, irritables, désireuses d’attirer l’attention”. The consequences of this illness were not always negative; sometimes they could lead to acts of great self-abnegation. Thus he described a woman who saved several children from a burning house, with no thought for her own safety, as acting under the influence of hysteria. For Charles Richet, it was social changes that were responsible for hysteria: “la réalité inférieure au rêve; c’est une maladie commune aux déclassés, aux jeunes filles de la classe inférieure qui reçoivent une éducation supérieure à leur état.” Grasset remained attached to a traditional explanation: “Sans vouloir manquer ici de galanterie, je ferai remarquer que la plupart des traits de caractère des hystériques ne sont que l’exagération du caractère féminin.”
A few months before the outbreak of war, the Petit Marseillais noted the progress of the “fille moderne”
Dès la fin du xixe siècle, la jeune fille moderne a pressenti ses destinées: elle a constitué, dans le sein des vieilles nations lasses, comme une sorte de grand peuple neuf. C’est elle, sans appui et sans guide, qui a mené son évolution.
Although the author generally approved of the changes achieved by the modern girl, he noted that “[e]lle a été extrêmement maligné”.6
If it seems clear that the early years of the twentieth century were marked by significant anxiety over gender relations, is Perrot correct to suggest that only the war prevented this debate from leading to significant changes in women’s position? It seems difficult to believe that contemporaries’ assumptions of an improvement in conditions were wholly without foundation. In an article on “La femme et la guerre” that appeared in La Petite Gironde in 1916, the author commented approvingly that before the war, when women had claimed legal and economic equality, men had responded with ironic disdain or brutal contempt. It had taken the cataclysm of war to alter this situation. By rendering women indispensable, the war had allowed women to take the rights that previously they had only been able to ask for, as well as helping to save France. The bourgeois wife had become a nurse to the wounded, the refugees and the unfortunate; the wives of industrialists and shopkeepers had taken over their husbands’ tasks. Everywhere, the article argued, women had replaced men.
Progress for women in society was often taken for granted. Those who defended the new fashions in clothes and hair argued that they were suitable for the newly emancipated woman. A spokesman for the Institut des coiffeurs des dames de France suggested that short hair could be a sign of feminism and equality. In 1927 the designer Jacques Worth wrote, “The war changed women’s lives, forcing them into an active life, and, in many cases, paid work.” The Carrières féminines intellectuelles, published in 1923, stated that “The war has emancipated women, and the majority of professions that, up until now have been closed to them, are now opening.” The 1920s saw a huge increase in the number of women in higher education. As Thébaud admits, “The war broke down age‑old barriers and opened many prestigious positions to women.” It must be acknowledged, though, that these are references to a minority of educated middle-class women. While the significance of their progress should not be underestimated, their experience was different to that of the vast majority of women at the time. There were changes for working women as well: the number of women in unions rose from 30,900 in 1900 to 89,300 in 1914, and took off to reach 239,000 in 1920. The comparable male figures were 588,000, 1,026,000 and 1,355,000. Working women also left the home as a place of work; domestic service and textile piecework both declined. This may or may not be considered a good thing, but it does show that women were not being confined to the hearth.
There was also more personal freedom in dress and hairstyle. Although the bob was controversial, it became more and more popular. It is important to remember how tight the constraints on women were before the war. For example, Hubertine Auclert was refused accommodation in a hotel because, as a single woman traveller, she was seen as immoral. Such things were much less likely to happen after a war in which women had been forced, or free, to travel around on their own.
Marriage may also have been more pleasant for women. In the wake of the war men tended to marry older women, this being one of the ways to get round the gender imbalance caused by the war. This may have given women more equality within the marriage than there would have been with a greater age difference. If the marriage did not work, divorce was more available. In 1900 there were just over 7,000 divorces; in 1913 there were 15,450. The peak came in 1920 and 1921, with 29,156 and 32,557 respectively; after that the figure settled down to around 20,000 a year. Sex may also have been less traumatic for some women, as there were more official sources of information than previously. “Although sexual education for women remained a taboo subject before the war, in the post-war years, well‑known doctors, sociologists, educators, and government officials debated it openly.” From 1925 the government funded lectures on the subject by the Comité d’éducation féminine. Though most women still learned from relations and friends, those who for some reason would not or could not do so now had alternative sources of information.
These changes are important and may have had a significant impact on the day-to-day lives of Frenchwomen in the 1920s. They do not, though, suggest either that the war brought about a radical evolution in gender attitudes, or indeed that the opposite had occurred and traditional interpretations were bolstered by the conflict. The book Mariage Moderne by Resclauze de Bermon highlights both the perception of social dislocation that was present during the war and the restricted nature of radical behaviour. It was serialised in La Petite Gironde, which claimed in its advertisements that “L’auteur a analysé avec une sûreté et une franchise saisissantes l’âme de la jeune fille, de la jeune femme d’aujourd’hui”. The book is written from the viewpoint of Yvonne, a young woman from a very respectable family. She is beautiful and feminine and does not work; as the book opens, her primary concern is her dowry. Her nature as a modern woman only becomes clear when she asserts
Or, j’ai la prétention d’être de mon temps, c’est-à-dire pratique, avec tout ce que le bon goût actuel autorise de sentimentalité. Je veux que mon mari me plaise, qu’une sympathie susceptible de devenir quelque chose de beaucoup plus tendre m’attire vers lui, que son âge soit en harmonie avec le mien et aussi que par sa fortune ou par son travail, il puisse m’assurer la vie large que j’aime.
Soon afterwards she makes it clear that her husband’s primary duty will be to aid her life of pleasure: “Ce qu’il me faut, c’est un mari qui soit capable non seulement de me comprendre, mais de me suivre.” For this reason she rejects the mentality of Gaston, a prospective suitor who wishes to remain loyal to his roots and farm like his ancestors. She finds the prospect of marrying a gentleman farmer dull; instead she wants to live life fully.
Instead of marrying the safe Gaston, she meets a stranger called Roger and is swept off her feet by him. She agrees to marry him. The marriage goes badly, in a very traditional manner; Roger gambles unsuccessfully, and then is caught having an affair. Yvonne tries nonetheless to maintain the relationship. Roger continues to spend her money. Eventually he becomes so indebted that he kills himself.
While the book seeks to portray Yvonne as representative of a new type of emancipated woman, a product of the modern age, what is most noticeable is how much her behaviour remains within traditional female norms. She goes against the wishes of her parents, who want her to marry Gaston. However, she does not ignore their wishes entirely; she tries to gain their approval and waits until it is eventually granted. Roger is a perfectly good match socially, and it is he who is in full control of their courtship. Despite the disastrous nature of the marriage, Yvonne does not seek recourse to adultery or divorce but remains loyal to Roger and allows him to spend her dowry.
One of the major reasons why it is considered that the developments that occurred during the war years were not continued, or were reversed, is the issue of denatalité. The war had cost the lives of a vast number of young men, while at the same time demonstrating graphically that early twentieth-century warfare required very large armies. Clemenceau intended no exaggeration in his comment on the treaty of Versailles that
the treaty does not specify that France should commit herself to bearing many children, but that is the first thing that should have been written there. This is because if France renounces la famille nombreuse, you can put whatever fine clause in the treaty you want, you can take away all the armaments in Germany, you can do whatever you want. France will be lost because there won’t be any more French people.
The pronatalist organisations reflect this: L’alliance nationale pour l’accroissement de la population française received a considerable boost from the war, and Pour la Vie was created in 1916. However, while the events of the war had heightened concerns over depopulation, the issue had been considered important for many years – in July 1914 the Petit Marseillais could claim of the question of depopulation, “puisqu’il n’en est pas de plus grave à l’heure présente…” The debate over depopulation and low levels of natality was so well rehearsed that when the Comité Consultatif d’action économique of the Toulouse region asked its sub-committees to comment on the issue, it noted that “il n’a certainement pas eu la pensée de provoquer des joutes oratoires sur la décadence des pays de ‘célibataires et des filles uniques’.”
The consequences of attempts to increase the birthrate could impact on every area of a woman’s life. The obvious example is the legislation that outlawed contraceptive propaganda and toughened the anti‑abortion laws, but the campaign had many other aspects. Those who opposed female suffrage argued (somewhat tendentiously) that countries that had adopted it had seen their birthrate fall. Others believed that working women were less likely to have children, and campaigned for their return to the hearth. Some conservatives even saw the figure of the “new woman”, with her lack of breasts and hips, as a rejection of nourishment and motherhood. The campaign for motherhood and the birth rate also helped justify closing many nursing and day care facilities after the war.
Fears over the French population also affected French attitudes towards foreigners. Even in an admiring article on soldiers from Britain and her colonies, Pierre Mille could not escape the spectre of how marriages between foreign troops and French women might result in those women going overseas. In the Dépêche, General Z. argued that because of depopulation there would be no French people left by the year 2112. The loss of so many men in the war only exacerbated this situation, potentially halving the time until French extinction. His despairing conclusion was that “En 2112 il n’y a pas un Français dans notre pays. Tous seraient remplacés par des étrangers.” The Comité Consultatif d’action économique de la 17ème région also fretted about whether immigration was a reliable way to maintain France’s population. “Nous ne nous maintenions avant la guerre aux environs de 39.000 d’habitants que grâce à l’appoint inquiétant de l’immigration.” L’Œuvre was more resigned to the need for immigration, but hoped it could be simply a stop-gap. It contended that France’s slow population growth compared to Germany and Austria-Hungary meant there was a need to repopulate France. Naturally all possible measures needed to be taken to encourage births, but such measures would not bear fruit for 25 years and thus immigration was necessary to cover the intervening period.
It has been argued that the concern for the size of the population can be exaggerated, and that it was used as a tool to gain support for other political issues, including the removal of women from the workplace. After all, there was very little actual legislative action taken beyond the 1920 law forbidding antinatalist propaganda. Roberts argues that even the aims of the 1920 law were not strictly demographic. Instead “… it sought specifically to bring women’s sexual practices under legislation by attacking abortion and female forms of contraception.” Roberts offers three reasons in support of this hypothesis: firstly, that the law did not deal with male forms of birth control (prophylactics); secondly, that the opposition of the respected expert Adolphe Pinard was disregarded; and thirdly, that the deputies themselves had few children.
The significance of Pinard’s opposition should not be overstated, as it was countered by several other experts speaking in favour. The last argument is also unconvincing, as it is quite possible that the deputies thought an increase in population was necessary for France but found it to their own taste or advantage to limit the size of their own families. Similarly, La Bataille mocked L’Œuvre for having claimed in 1916 that “Après la guerre, madame, vous ne serez pas une ‘honnête femme’, si vous n’avez pas au moins trois enfants.” The reason for La Bataille‘s derision was not that it disagreed with the statement itself but that Gustave Téry himself had only two children. In addition, other than a shortage of men to enlist in the armies, the greatest fear that a falling population posed at the time was rural depopulation. As one of the most noticeable features of the French legislature’s makeup was the scarcity of peasants among its members, it was less essential that the deputies themselves reproduced.
The exemption of male forms of contraception is significant; the legislation was clearly attempting to create a position in which men had a choice over procreation and women did not. There certainly was an element of attempting to increase social control over women, but the legislation could also be seen as presenting women as the reason for denatalité. Furthermore, the concern over syphilis and other sexually transmitted diseases would certainly have played a part in the decision to retain the legality of prophylactics. It should also be noted that conservative pronatalists delighted in the election results of 1919, proclaiming them a great improvement on those of 1914, so some increase in pronatalist activity might have been expected in any circumstances.
The legislative action also fitted firmly into the wartime rhetoric on the subject. Before the Congrès de l’Association Nationale d’Expansion Économique, M. Souchon delivered a speech on the needs of agriculture. When he came to natalité, the audience received his remarks warmly. He asserted, “la question de la natalité n’est pas une question légale, la question de la natalité est une question morale”. Once again the problem was seen as propaganda: “il est certain qu’au cours de ces dernières années, des propagandes criminelles ont été faites contre la famille française, hélas! par des Français!” While in general he opposed state interference, it was necessary for the law to counter this. Another speaker at the same conference noted that the depopulation of the countryside was threatening to compromise national prosperity. His first recommendation was for severe measures to be taken against the “odieuse propagande contre la race, trop fréquente dans les campagnes comme dans les villes.”
Souchon gives one reason why there was a limit to the legislative action taken on the issue: the reluctance of many influential Frenchmen to grant the state the power to interfere in their actions, whether personal or professional. An equally pressing reason was economics. The war had done a great deal of damage to France’s financial capability, and it is unsurprising that successive governments, committed in principle to encouraging les familles nombreuses, felt they were financially unable to give fiduciary incentives or tax breaks to large families. Where cheap expedients existed, they were utilised. Thus, when colonial troops were needed to make up the shortfall in French soldiers after the war, Echenberg argues, conscription was made into a systematic peacetime institution in French West Africa, because this was cheaper than voluntary recruitment, which required higher pay.
It is possible that the war was part of a shift in the emphasis of the campaign to increase the birthrate, a shift from attempting to persuade the male head of the household to give his wife more children, to persuading the wife herself of the need. Pedersen’s account of the long history of the 1920 pronatalist legislation illustrates this. In 1910 Senator Lannelongue introduced a proposal aimed at increasing the birthrate by offering inducements to fathers. By 1913, this had been revised by Strauss and Besnard, who switched the focus “almost exclusively on to women’s interaction with the medical profession.” It was also more repressive and offered fewer inducements. The legislation remained stalled until 1920 when Ignace extracted a few of the repressive articles on abortion, anti-natalist propaganda and female contraception and put them forward on their own. Both houses passed them easily.
Zola’s pre-war natalist tract Fécondité glorified woman as a mother and a housekeeper, not as a factory worker. However, its main argument was to glorify fertile peasant life in contrast to the urban bourgeois, with their child limitation strategies and individualist morality. In it, Dr. Boutain warns the hero Mathieu Froment about the perils of contraception.
One cannot deceive an organ with impunity. Imagine a stomach which one continually tantalized with an indigestible lure whose presence unceasingly called forth the blood while offering nothing to digest. Every function that is not exercised according to the normal order becomes a permanent source of danger. You stimulate a woman, contenting her only with the spasm, and you have only satisfied her desire, which is simply the enticing stimulant; you have not acceded to fertilization, which is the goal, the necessary and indispensable act. And you are surprised when this betrayed and abused organism, diverted from its proper use, reveals itself to be the seat of terrible disorders, disgraces and perversions!
Cole argues that this declaration of Dr. Boutain implies that female contraception is being used, but the whole passage seems to grant all agency to the man, and with it the choice of whether to use contraception.
The declaration of Doctor Boutain can be contrasted with this post-war claim
Quel est le grand devoir de la femme? Enfanter, encore enfanter, toujours enfanter. Que la femme se refuse à la maternité, qu’elle la limite, qu’elle la supprime et la femme ne mérite plus ses droits; la femme n’est plus rien.
Not only does this make women’s role in society quite clear, it also implies (“la femme se refuse”) that it is the woman who is responsible for the refusal. This is the same argument as that made by Clément Vautel in Madame ne veut pas d’enfant. Vautel’s work also contrasted the fertility of the working class with the sterility of the bourgeoisie.
However, a report on depopulation in 1917 by the Comité Consultatif d’action économique of the Toulouse region made it clear that its authors believed the problem lay with male behaviour.
Le célibataire, surtout le célibataire fils de famille, tient en France le haut de pavé. Il occupe les hauts emplois, réussit dans la politique, échappe aux plus lourdes de nos charges, débauche nos filles, détourne nos femmes, affiche ses maîtresses, donne les plus pernicieux exemples … et est considéré.
It was necessary for the bachelor to be seen as a bad citizen, to tax him heavily, and to exclude him from certain functions, occupations and offices. The report also argued that part of the opprobrium should be levelled at households without children, or with fewer than three, but the bachelor remained the main target. Henry Spont, in his book La Femme et la guerre, followed a similarly traditional line, arguing that women were still defined by their motherhood, and that those without children were condemned to that unhappy fate by their rejection by men.
Aux mères françaises!
Heureuses ou non, (the married woman) elles ont justifié les espoirs de leur famille, elles ont atteint le but proposé, elles sont désormais en règle avec la nature, avec la société. [...] elles peuvent marcher la tête haute, sortir seules, promener les enfants, qui consacrent la noblesse et l’utilité de leur rôle.
Voilà des créatures dignes d’estimes, qui remplissent bien leur mission.
Mais les autres, celles que l’homme a dédaignées! Quelle tristesse, quelle humiliation de se sentir un être indésirable, encombrant qui va traîner sa vie en marge de la grande route et disparaître sans laisser des traces, après avoir trahi les plus légitimes espérances!
Spont argued that men rejected women primarily because they did not provide enough in the way of dowries. He suggested a variety of unhappy life paths that might be taken by the rejected women. Some would just go on living with their parents, or live on their own in solitary misery; others would go into employment; and some would revolt. These would be the ones who ended up in unions libres, where they would inevitably be betrayed. Spont indignantly denied that these women were to blame: “Est-ce leur faute? Non! Toutes ont souhaité se marier, être mères.” It was the fault of the man, too demanding and scornful of his responsibilities.
L’Eclair du Midi came out in favour of a financial solution, in this case assistance to the parents of large families, and reported that it had received a large amount of positive feedback for this idea from its readership. Once again the problem was considered to be practical rather than the result of a crisis in female behaviour. Likewise, Galéot in his book L’Avenir de la race ascribed the problem to material difficulties, focussing explicitly on paternity: “Dans l’état actuel de notre organisation sociale et de nos mœurs, la paternité est pour presque universalité des citoyens un très lourd sacrifice matériel.”
Clément Chaussé, in his book on pregnant women working in munitions factories, suggested that the key way to increase natality was financial incentives. He made no mention of female morality: “La grossesse restera un accident tant que la vie normale n’aura pas repris son cours et tant que l’enfant sera une trop lourde charge pour ses parents.” Pierre Mille also believed that the solution was to offer financial inducements for large families, and the Lyon branch of the Ligue populaire des pères et mères de familles nombreuses launched its periodical by calling for economic and political advantages for large families.
Pronatalists also worked to convince women of the desirability of having babies. Paul Haury argued that maternity was the essence of female psychology; Fernand Boverat claimed that the infant satisfied “le plus profond des instincts qui existe chez la femme”. Cartoons in the natalist journal “La femme et l’enfant” showed children bringing happiness to miserable relationships. Other natalists emphasised it more as a duty than a pleasure, Jacques Bertillon claiming, “Between the violent causes of devastation and Malthusianism there is one difference, the latter calamity, even as it slowly destroys the country, makes none of its inhabitants suffer. How true it is that the interests of individuals can be entirely opposed to those of the collectivity.” Alfred Krug came to a similar conclusion in his 1918 pamphlet, Pour la repopulation. Auguste Isaac argued that society was arranged to the advantage of those without children and asked, “Qui sont donc ces nigauds qui veulent avoir tant d’enfants?” Sébastien Marc, in his book Contre la Dépopulation, also suggested that the problem was one of mentality, but his proposed solution was a reform of the Civil Code system of inheritance.
The debate over France’s slow population growth was primarily characterised by the diversity of opinions as to causes and remedies. This is illustrated by the Congrès National de la Natalité, held in Nancy in September 1919. Alexandre Dreux, President of Nancy’s Chamber of Commerce, ascribed the failing birthrate to egotism and lack of civic spirit, though he did not specify which sex he thought was primarily responsible. Paul Bureau demanded “purification sociale”, the family vote, subsidies for large families from the state and higher wages for large families from their employers, as well as a solution to the problem of bad housing, in order to rectify the situation. Auguste Isaac focussed on the ideal of a mother “qui ne soit pas obligée de travailler en usine et qui puisse s’occuper de ses enfants.”7 Isaac did mention the practical impact of the war, but he did not claim that the war had produced any changes in morality, except that the allocation had accustomed people to accepting subsidies from the state.
While observers generally agreed that the work done by women during the war had had a negative impact on the birthrate, they differed on whether this was to be a permanent change. A report by Dr Lesage for the Comité du travail féminin argued that
Certains esprits, forts et sceptiques, disent, en semant le pessimisme, que l’ouvrière ne veut plus d’enfants et que ce n’est pas la peine de créer des chambres d’allaitement. Mille fois non..! Ayons confiance en elle. Quand, en ce moment, on la voit en pleine valeur, en plein travail, en pleine fièvre patriotique, forger l’airain sacré, on est saisi d’une angoisse reconnaissante. Comme le poilu à la tranchée, l’ouvrière a sauvé le pays.
Non, Mesdames et Messieurs, quand le cyclone sera calmé, nous verrons l’ouvrière reprendre la vie commune et se consacrer à ses devoirs de maternité, consciente de sa force, consciente de sa valeur, consciente de sa dignité.
Despite this diversity of views, the idea that women could actually contribute to finding a solution was rare. As he himself acknowledged, the deputy Charles Chaumet was unusual in arguing in favour of female suffrage in order to gain female input into the legislative changes needed to boost the birthrate.
Nous ne pouvons en fixer les dispositions pratiques sans la collaboration de la femme. Elle doit avoir voix au chapitre dans ces questions délicates aussi bien que lorsqu’il s’agit de l’éducation des jeunes filles et des conditions de travail. Et c’est pourquoi, au grand scandale de certains de mes plus chers amis, je suis partisan du suffrage des femmes.
The Ligue populaire des pères et mères de familles nombreuses certainly did not share Chaumet’s position, arguing that granting the vote to women would do nothing to change France’s population situation, and that only the introduction of the family vote could be effective. Advocating the vote for widows with children was the closest it came to supporting female suffrage.
Françoise Thébaud notes wryly that when attempts were made to persuade the French to procreate, men were offered money while women were offered medals. While it is difficult to argue that this was not based on a condescending view of women, it may also have been an attempt to link female medals for fertility with male military decorations: both sexes had performed the duty that nature had bestowed upon them.
This is not to say that the behaviour of women was never held to be responsible for declining birth rates. For Paul Bureau, female work in industry and commerce was the major obstacle to an increase in the birthrate, with celibacy, concubinage and the selfishness of young married couples as secondary reasons. George d’Esparbes also blamed women working, claiming that it resulted in either “l’ignorance ou l’égoïsme”, which then reduced natality. This ignorance also increased infant mortality, as women were no longer learning how to look after babies. D’Esparbes suggested the best solution was the “renvoi aux foyers des mères de famille.” However, he did not believe that the war was to blame for women being forced to work, claiming instead that it was too late to “détruire un système de travail établi depuis des années.” Adolphe Pinard noted that infant mortality had risen in 1916 compared to 1915. He believed that pregnant women had been seduced by high wages, and were not taking advantage of the protection available to them: “Nous le savons et avec une certitude absolue: le travail de l’usine et son gain a été pour les pauvres femmes en état de gestation, pour les mères nécessiteuses, un véritable miroir à alouettes.” So he proposed forbidding entrance into a factory “à toute femme en état de gestation, à toute femme allaitant son enfant, à toute femme accouchée depuis moins de six mois.” M. Héron, in a report for L’Union du Sud-Ouest des Syndicats d’Encouragement de Motoculture, argued that depopulation was partly caused by the attractions of the towns, with “le prix disproportionné des salaires offerts par l’industrie, le goût de la toilette chez la femme.” In the Petit Marseillais, Durandy blamed women for wanting to look pretty rather than having babies.
For all the natalist legislation and rhetoric, the level of the French population remained static. “Malgré les lois, les Français étaient de plus en plus malthusiens.” Restraint, contraception and abortion were all used to control family size. In 1898, a member of the clergy wrote a letter to L’Ami du clergé on the subject of questioning penitents in confession about contraception: “La réponse invariable sera celle-ci: ‘S’il faut avoir un enfant tous les neuf mois pour se confesser, je ne consentirai jamais’.”
Not only did people limit their own families, they were also reluctant to condemn others for doing so. The move to switch from trial by jury to trial by judge for abortion suggests that many people thought abortion was understandable under certain circumstances. Public discourse on the evils of abortion was not matched by popular action against it. Similarly, as Offen points out, “In no industrializing country had women constituted so great a percentage of the labor force … yet in no country did (male) prescriptive rhetoric insist so strongly on the necessity of achieving the ideal of a sexual division of labor”. A report by the Comité Consultatif d’action économique for the Toulouse region in 1917 on the subject lamented
Le mal sur lequel nous sommes appelés à délibérer est connu. Il a été dénoncé par les rapporteurs des statistiques de dénombrement, par de courageux écrivains et conférenciers; les Chambres de commerce, l’Académie des sciences morales et politiques ont poussé le cri d’alarme; des commissions parlementaires ont délibéré sur la danger et ses palliatifs; l’intention de modérer “la course à l’abîme” a inspiré quelques timides mesures législatives…
Il ne paraît cependant pas que la conscience nationale ait la notion aiguë du péril. L’amour et la fécondité sont restés en France matières à propos légers, et aucun puissant mouvement d’opinion tendant à l’avenir de la race n’a impressionné les pouvoirs publics.
These instances show the problems of assuming that the population at large accepted the rhetoric of the powerful. A further example is the issue of infant mortality. This had been a major concern for republicans like Paul Strauss before the war, and there was much debate on the issue. Rachel Fuchs, drawing on Roberts’ work, argues that after World War One there was a significant change in emphasis, from trying to reduce infant mortality to pushing motherhood as the desired role for women. Yet infant mortality began to fall at this moment, while wet nursing declined sharply from 1916 as part of a considerable improvement in infant care.
The aftermath of the war saw the erection of memorials to the war dead throughout France. Practically every commune built an individual monument to its dead. While these monuments were sincerely intended to honour the heroes of France, they could also carry other meanings. Many communes were, of course, restricted by cost to simple steles, but other monuments featured more intricate sculpture. This often led to a great deal of controversy, particularly along the religious divide. The fights for religious or secular commemoration were often bitter. The conflict was further complicated by the 1905 Separation law, which forbade commemorative monuments to carry religious emblems, other than those in cemeteries. The ministry of the interior sent out a circular in April 1919 confirming that the law of 1905 remained in force. This was changed in 1924, but by then most monuments had been built.
The positioning of the monuments testifies to this. In Brittany, the vast majority were in cemeteries; elsewhere placement varied between the churchyard and the town hall, symbols of clericalism and secularism respectively. Occasionally the school or a public park provided a more neutral setting, though even here the école laïque carried ideological baggage.
Beyond this, Annette Becker argues that the sculptures also intended another meaning, one that crossed the religious divide:
Les femmes y sont vierges comme des saintes, hautaines dans leur chagrin de veuves, figées dans leur sens du devoir. On sent combien ces oeuvres sont une reconstruction idéologique. Les sculpteurs ont réussi ce qu’on leur demandait: ressusciter l’Union Sacrée et l’union des familles, par-delà le drame.
Daniel Sherman goes further, arguing that commemoration not only reinscribed gender codes which had been disrupted in the war but also “that commemoration itself played out, in gendered terms, a pervasive cultural unease in which nothing less than the masculine cast of politics and national citizenship was at stake.”
During the nineteenth century, republicans built on the Revolutionary ideas of republican citizenship and the citizen army to create “an inherent association of citizenship, masculinity, and military service.” This is a very important point. Those on the left who opposed the law returning to three years’ military service in 1914 often did so within the context of defending the nation in arms. Vincent Auriol, writing in the Midi Socialiste, wanted to “défendre l’armée nationale contre les criminelles entreprises des fournisseurs intéressés et des professionnels du militarisme.” His slogan was “Vive la Nation armée! mais à bas les trois ans! à bas l’Empire! Vive la Paix Internationale!” Even when there was a distinctly pacifist tone to the message, the citizen army was not denounced. According to a report by the prefect of the Haute Garonne, the syndicalist Marty-Rolland believed that the people “ne veulent plus la guerre, mais la paix, plus d’armée permanente, mais les milices nationales.” It is noticeable that even those campaigning for a reduction in the length of military service rarely opposed its existence. Only on the extreme left was there outright opposition to military service. A pamphlet produced in May 1913 by the Fédération Communiste Anarchiste, “Contre les armements, Contre la loi de 3 ans, Contre tout militarisme”, concluded by advocating desertion, urging conscripts: “ne sois pas un fratricide, ne sois pas un assassin, sois un homme”. The assertion that not fighting showed you were a man strongly implies that the reverse was commonly held to be true.
A common reason given for denying female suffrage was that women had not earned it through military service, and it was one which feminists felt they needed to combat, childbirth often being posited as the female impôt de sang. An example of how close the association was is given by M. Vignols, a syndicalist speaking in St-Malo in May 1913 against the three years law. He argued that the suffering women endured during gestation was significant, and that it would be more profitable for everyone if the fruit of their labours were to work rather than be cannon fodder. He thus proposed that women launch a grève du ventre. When women sought to present themselves for registration on the electoral list in January 1914, the clerk in the Eleventh Arrondissement responded ironically by asking for their certificates proving military service. The war, which provided a genuine example of the nation under arms, should surely have reinforced this nexus. For Bard and Thébaud, “La guerre a réactivé la définition de la citoyenneté qui associe droits politiques et devoir de défendre sa patrie.” However, Sherman argues that the reverse was true.
Sherman also argues that it was felt necessary to combat the threat posed by the all‑male world created in the trenches. His argument is based on the assertion that the mutinies of 1917 constituted a threat to the patriarchal social order. Thus it was necessary for commemoration to reinforce the traditional family order. He returns to this point by suggesting that war memorials involved a subliminal choice of poses, atypical of the war, which represented masculinity as aggressive and heroic and “repressed any lingering memories of homosocial friendship.” However, Sherman also argues for the essential similarity of the various texts, commemorative speeches, novels, memoirs and advice literature, which “framed the construction of monuments and that, reciprocally, monuments helped to legitimate.” Wartime and commemorative texts, however, are all insistent on the existence of homosocial friendship, and the growth of veterans’ associations was just the most obvious manifestation of a desire to continue it. Thus, if there was an attempt to construct commemoration wholly in familial terms, there was also a strong ideological current seeking to retain the image of the brotherhood of the trenches.
Sherman describes how Pierre Andrianne, the hero of Maurice Genevoix’s la Joie, is disgusted at the “petty bickering that marked the construction of the monument”. This is an example of how monuments were a contested site, a point Sherman makes eloquently himself in another article, where he describes the various factions that sought to control the commemoration process and their underlying motivations. An examination of the inauguration of the monument to the dead of the Savoie gives a clearer idea of the preoccupations of those in charge of constructing a memorial.
One of the first things to be noted is that the committee appointed to deal with the issue was very large – 35 people. The size and the composition of the committee suggest that the desire was to achieve inclusivity in the design and creation of the memorial, rather than to privilege an ideological position. That all 35 were male suggests the limits of this inclusivity. In fact the affair was almost wholly male, with all the speeches delivered by men. The only exception was a song, “Hymne Aux Savoyards, morts pour la France”, sung at the inauguration of the monument, which was written by a woman, Marie-Rose Michaud Lapeyre.8
The speeches by Borrel, deputy and president of the committee, and Juilland, the mayor of Chambéry, both spent only a short time on the war itself, Juilland claiming, “les événements sont encore trop proches pour qu’il soit besoin de les rappeler.” Instead they utilised the glorious dead as supporters for their political cause. Both claimed to be speaking on behalf of the dead. In Borrel’s view they had fought for a Republican France, for humanity, progress and justice, while for Juilland: “Ils voulaient, nos morts, que la France pût développer librement, sans crainte, à l’abri des agressions, son clair et généreux génie dans les œuvres de paix, de civilisation et de progrès.”
The next speech, by Gustave Pillet, Vice-President of the Savoie veterans’ association, was very different, being almost entirely devoted to the war. While it strongly focused on the sacrifices made by soldiers, it also offered a sympathetic view of women and the supporting presence that they represented. He described those lost at the front, fathers and brothers, but also “du Mari, qui n’ignorait pas la vaillance de l’épouse, mais redoutait pour elle les brutalités de la vie;” and “du Fiancé qui apportait espoir et joyeux rêve, et auquel allait incomber l’une des plus belles tâches de la vie: fonder un foyer.” Later he said, “Hommes de tous âges, qui avez vécu la guerre à l’abri, – femmes qui n’en avez pas connu toutes les privations”. It is also traditionalist, speaking of the “task” of founding a family home. It does not suggest that the war shook his view of gender relations; his references to women seem to be motivated by a desire for all sections of the commune to be recognised rather than being a focus of the speech.
The final speaker was the guest of honour, President Poincaré, who devoted a long portion of his speech to individual descriptions of the battalions, before continuing on to justify his current foreign policy as based on the lessons of the war.
On the evidence of this, the preoccupations of those in charge of the monument were fairly narrowly focused on their position in the masculine political agenda of the Third Republic, with women excluded by default, rather than by intent. The fact that it took until 1928 for the inauguration to take place also suggests there was no desperate need for this particular form of commemoration to combat a threat to that masculine agenda arising from the war.
At the ceremony at Beauvais, the chair of the ceremony, M. Desgroux, lauded the sacrifices made by the soldiers but also by their families, arguing that the city had given France “huit cents de ses meilleurs enfants qui l’ont sauvée par leur mort héroïque, pendant que leurs mères, leurs épouses, leurs bambins supportaient avec stoïcisme les assauts impuissants de la rage et de la barbarie teutonnes.” M. Largilliere, the Vice-President of the Beauvais veterans’ association, also sought inclusiveness amongst those who had suffered.
Ils [anciens combattants] savent que vous n’oublierez pas les camarades morts et l’enseignement qu’ils ont fourni, ils savent aussi que vous êtes toujours pleins de bonté pour les mutilés, et que votre charité pour les veuves et les orphelins est sans borne.
The next speaker, M. Noel, a Senator from the Oise, like the politicians who spoke at Chambéry, emphasised the current political situation, and how France needed to be vigilant to combat the eternal Prussian threat.
An interesting, if unusual, monument is the one in Equeurdreville.
Made by Emilie Rodez, one of the few female sculptors commissioned to construct a monument, it depicts a mother with two young children, looking exhausted and dispirited. Its inscription, “Que Maudite Soit La Guerre. Aux enfants d’Equeurdreville morts pendant la guerre 1914-1918”, makes it one of the very few overtly pacifist monuments in France. While the men responsible for the erection of the monument must have accepted the design, it is an indication that women might have had a different conception of the war than that of the male establishment.
This study, like the vast majority of studies on gender and the First World War, has focused on male views of women. Nevertheless, the patriarchal structures of Republican France seem to have enjoyed at least the tacit support of most women, and several studies have discussed the lack of feminist activity against the oppression of women. Of course this cannot be taken to imply that women unquestioningly accepted male discourse on society; there were several other factors that hindered the development of feminist activity on the Anglo-American model, not least that the Interior Ministry ordered the police to refuse almost all applications for feminist street demonstrations throughout the existence of the Third Republic. There thus remains the question of how women saw society at the time, and how the war modified their understanding of it.
Susan Kent’s study of the effects of the war on British feminism suggests that the war had a dramatic impact on feminist perceptions of gender.
With the onset of the Great War, many feminists began to modify their understandings of masculinity and femininity. Their insistence upon equality with men, and their acknowledgement of the model of sex war that accompanied that demand, gradually gave way to an ideology that emphasized women’s special sphere – a separate sphere, in fact – and carried with it an urgent belief in the relationship between the sexes as one of complementarity. Pre-war feminists had vigorously attacked the notion of separate spheres and the medical and scientific discourses upon which those spheres rested. Many feminists after the First World War, by contrast, pursued a program that championed rather than challenged the pervading ideas about masculinity and femininity…
This change was brought about primarily by the shock felt by British women at the dreadful realities of the war. Simultaneously admiring the heroism of the soldiers and horrified at the brutality needed to fight the war, women saw in it a graphic illustration that men and women were better suited to different tasks. A flavour of this attitude is displayed in the views of Millicent Fawcett in October 1914
While the necessary, inevitable work of men as combatants is to spread death, destruction, red ruin, desolation and sorrow untold, the work of women is the exact opposite. It is … to help, to assuage, to preserve, to build up the desolate home, to bind up the broken lives, to serve the State by saving life rather than by destroying it.
As well as the resounding call for traditional roles, this statement has powerful overtones of repugnance at the conduct of war, firstly by the extended list of the evils of war, then by the final comparison of roles. Of course she acknowledges that this destruction of life is necessary and inevitable, but it is hardly a resounding endorsement of war, and her depiction of the female role as oppositional to men’s, rather than complementary, further distances women from it.
The early date of this quote shows how soon this change in attitude happened. During the first phase of the war, the conflict was portrayed in a very traditional manner. Much of the atrocity propaganda that circulated in Britain focused on outrages against women, emphasising the heroic role being undertaken by British soldiers in protecting their womenfolk from the misfortunes that had befallen the women of Belgium and Northern France. The suffrage movement swung almost as one behind the line that a woman’s task was to take up her traditional roles.
For Kent, this change in view was in many ways based on flawed perceptions of the suffering that trench warfare was inflicting on the men involved in it. Censorship and propaganda prevented women from making an accurate judgement of the situation.
Those women who were able to hold on to pre-war understandings about gender … were those who had experienced the war directly, at the front. Most ‘new’ feminists, by contrast saw the war from afar, from home.
It also has to be noted that the war work done by women, and the privations they suffered as a consequence of the fighting, gave a boost to feminism. Nina Boyle of the Women’s Freedom League argued, “women’s place by universal consensus of opinion is no longer the Home. It is the battlefield, the farm, the factory, the shop.” Rebecca West believed that: “Surely, never before in modern history can women have lived a life so completely parallel to that of the regular Army.”
Nevertheless, for Kent the ultimate legacy of the war in Britain was a shift from women believing that masculinity and femininity were “the products of laws, attitudes and institutions” to a belief in a “biologically determined, innate male and female sexuality.” She quotes Christabel Pankhurst, writing in 1924
Some of us hoped [for] more from woman suffrage than is ever going to be accomplished. My own large anticipations were based upon ignorance (which the late war dispelled) of the magnitude of the task which we women reformers so confidently wished to undertake when the vote should be ours.
Of course it cannot be assumed that a similar process occurred in France. Indeed, French feminism had been characterised before the war by the very features that Kent notes in post-war British feminism. Yet there are echoes of it in this report made during the war by the French section of the CIFPP: “Le devoir impérieux des femmes, aujourd’hui plus que jamais est de dénoncer et de combattre cet universel mensonge [that war is "une des formes possibles ou même nécessaires de l'activité humaine"], par lequel le meurtre des hommes se fait accepter à la pensée des hommes.” Jacobzone notes that “Les militaires sont à la fois admirés, convoités et redoutés”.
Jo Burr Margadant’s study of the institutrices of the Third Republic gives an interesting insight into women who were unusual amongst the middle class in having sought paid employment before the war. This independent choice might not necessarily mark them out as feminists, but it does suggest that they were women with their own opinions on society. In lieu of the usual dignitaries who came to address departing pupils, during the war these institutrices had to take over the role of speaking at graduation ceremonies. In 1915 they delivered a resoundingly traditional message. Marie Lépine argued, “To nurse, to dress wounds, to cure, to conserve, such is the role of women in peacetime.” This was all the more vital in wartime. The directrice of the college at Dax stressed how the female war effort paled in comparison with that of the men at the front: “boys’ lycées and collèges piously celebrate the glory of teachers and students who are suffering and dying for the nation … we feel profoundly the humbleness of our contributions.” Looking to post-war France, Cécilia Térrène Lafleur declared, “Your brothers will be the poets, the scholars, the artists of the new France. You will be the guardians of the foyer…”9
However, Margadant argues, by the equivalent ceremonies of 1916 things had changed dramatically. No longer did the women make apologies for delivering the speeches themselves, and the messages contained in them were less traditional. Alice Bolleau, the directrice of the lycée in Niort, argued, “Single women do not have the right to be useless and inactive. The liberal professions, the administration, commerce, even industry need their intelligence and their activity.” Similarly Mlle. Thomas, a headmistress in St Etienne, believed that
The type of girl, occupied by useless tasks, for whom a bit of embroidery, an hour of piano sufficed, once her studies ended, was already on the wane before the war. All the more reason for her disappearance today.
Their students were urged out to work by Mlle. Bousquet, who took “the beautiful motto of our heroic defenders who cry out to one another: ‘We shall go on because we must.’ Let us say, instead, ‘We shall work because we must.’” Margaret Lépine in Agen admitted that it went against the taste of her students to cultivate the soil, and that the “duty does not seem to befit you, but I tell you that it does because it is going to be the task of everyone.” Louise Thuillat Manuel admonished the graduates of Limoges that “as good French women, you will not have satisfied all your obligations just because you have fulfilled your family duties.”
Yet there are clear limits to this change in attitude, graphically illustrated by all the caveats included in the statements. Bolleau restricted her exhortations to find paid employment to single women. Bousquet said that women had to work, not that it was necessarily desirable. Thuillat Manuel believed that family duties naturally came before anything else. Most of all, for the intelligent, ambitious, middle class graduates of these schools, being encouraged to till the soil or work in “administration, commerce or even industry” could hardly have been seen as emancipatory.
For these institutrices traditional roles were still seen as the ideal, but in wartime they were not sufficient. Nevertheless, they appear entirely confident in the ability of their charges to take on all the extra tasks that war made necessary. In Anjou, Mathilde Allanic believed that the war had allowed a re-evaluation of French women.
“Française! Pour la majorité des habitants de notre planète, ce vocable équivalait à un synonyme de frivolité, d’étourderie, de coquetterie, d’inconscience presque anormale.” However, the war had shown that the French woman had “ses qualités de ferme et lucide raison, d’intelligence initiative, de persévérance courageuse.”
Nevertheless she still saw a traditional role as the ideal. “C’est dans une vie familiale, tendre et simple, que s’épanouit l’âme gracieuse des colombes de France.”
When French women spoke of feminism, their ideas were traditional. Jeannine, while arguing in favour of feminism, dismissed the American and English suffragettes as “des viragos moustachues, au regard farouche et décidé, qui, tout en copiant le disgracieux costume masculin, déclaraient une guerre implacable à l’homme.” She forcefully rejected “scientific proof” of women’s inferiority in favour of an argument of equilibrium in difference, with the woman being “la force plus reposée, moins brutale, elle est la Vie enfermée dans la courbe adorable des lignes; elle est le rythme et la musique, la grâce indicible, l’épanouissement, le triomphe de la chair baignée d’ombre et de lumière.” A few months earlier she had argued that it was necessary for women to retain their sweetness, and had mocked “les féministes qui exagèrent” who consider men the enemy and wear masculine clothing. Her tone suggests she is amused rather than threatened by “Ces chères combattantes!”
An exam for secondary school girls in June 1916 posed the following question on the nature of feminism.
Un écrivain moderne a dit ‘il y a deux sortes de féministes: les vrai amis des femmes, ceux qui veulent leur donner le beau rôle dans l’humanité, c’est-à-dire la prépondérance par l’intelligence et le cœur; les faux amis qui désirent en faire des avocats et des députés’. Vous expliquerez cette réflexion et vous donnerez votre avis personnel sur cette question.
The 20 students who took the test all agreed with the writer. They argued that the role of women was determined by nature and it was to be mothers, to raise children and to support and encourage their husbands.
Traditionalist ideas also permeated any discussions of the role of women in society. When the Petit Marseillais wrote an article advocating giving women the vote, it printed two letters from female readers in response. The first claimed that the vote would be an unwelcome distraction from domestic duty. The second disagreed, claiming that the uniquely feminine characteristics of women would mean that their vote would have a beneficial effect on the country. Laura Downs points out that “the unpleasant discovery of the male production worker’s relative and arbitrary privilege rarely produced a movement for equal pay.” Instead women merely sought to work harder. When women did demand extra pay it was as substitutes for their husbands, not as a fair reward for their own labour: “As our husbands are all at the front, we have a right to the same wages as the men.” Jeannine called on her female readers to become marraines
Pour la marraine, c’est la joie d’accomplir une belle mission, c’est comprendre son rôle de femme pendant la guerre. N’est-ce pas très femme de dorloter les peines, d’encourager, de choyer, d’être maternelle, de sourire, de se faire gaie afin de chasser les tristes ombres qui dansent leur ronde autour du poilu.
Marcelle Capy, writing in the self-proclaimed feminist journal La Vague on the subject of soldiers returning after the war, pointed out the difficulties for women who lacked a returning husband, but also unquestioningly accepted that those whose husbands did return would themselves return to the household.
Voici l’hiver. La famille ouvrière a besoin de charbon, de vêtements, de lumière, de nourriture. Si le mari était rentré de la guerre, il pourrait aller à l’usine et la femme reste à la maison. Mais le mari est toujours soldat, et la femme est toujours obligée de conduire la barque.
Similarly, Jeannine quoted a correspondent who argued that women who became municipal councillors or entered careers lost their grace and feminine charm. Instead they should stay at home, seeking to make it comfortable and attractive. She agreed with him in theory, believing that, but for a few exceptions, the true vocation of a woman was to occupy the family home. However, the modern world often made this impossible. Again, feminism seems to have entailed simply a recognition that the ideal role for women could not always be achieved.
The prominent feminist Cécile Brunschwig, speaking at a meeting of the Ligue des Droits de l’Homme, argued that the soul of women was incarnated through the family. “N’est-ce pas autour d’elle que dans chaque civilisation se groupe la famille? N’est-ce pas elle dont la longue patience a défendu, au cours des siècles, l’intimité du foyer, la fragilité de l’enfance, la moralité de la jeunesse?” She also joined feminist colleagues Julie Siegfried and Margueritte de Witt-Schlumberger in helping to found the natalist group Pour la vie. When women did act independently, they were at pains to point out their adherence to traditional norms of feminine modesty and virtue. The Association française pour la Recherche des Disparus claimed in an advertising flier that it “poursuit sans ostentation depuis février 1915 son œuvre de secours moral.”
Jeannine encapsulated the views of much of French feminism in her article on Le rôle social des Femmes. She argued that women had shown their worth during the war, and could have a very beneficial part to play in public life. “Devenues pour un temps – ou pour toujours, hélas! – chefs de famille [...] leur faculté d’adaptation leur a permis d’entrer hardiment dans toutes les voies pour lesquelles il semblait qu’elles ne fussent point faites”. Their contribution to public life would be rooted in their intrinsically feminine qualities.
Son action peut apporter d’immenses bienfaits dans nos sociétés imparfaites, car la femme est l’ennemie de la guerre inique, de toutes les abominations et des injustices qu’elle sent profondément.
Elle lutterait pour la disparition de l’alcoolisme et de la tuberculose; elle voudrait des lois plus équitables et serait une force dans les organisations qui combattent pour les causes vraiment humaines.
Working-class women also rarely proved to be radically more feminist than their male counterparts. At a meeting of the metal workers union in Bourges in April 1918, the star speakers were Hélène Brion and Madeleine Vernet. According to the police report, Brion’s speech, which focused on feminism and on improving the position of women in society, “fut très écouté et souvent applaudi.” However, the audience of roughly 500 people was a largely male one. “Malgré le pressent appel aux ouvrières pour assister en grand nombre à cette réunion une quarantaine ne femmes à peine se trouvaient dans la salle.”
When female journalists wrote for newspapers, they rarely took radical positions, and whatever the newspapers thought of the potential changes that might be wrought in women’s place in society during the war, they made little concession to them in their articles directed at women. In the Petite Gironde, the only feature directed at women was an occasional column, entitled the Carnet de la Femme, that concentrated on fashion, perfume, beauty and similarly feminine concerns. A typical beginning to a Carnet article referred to the “multiples travaux que nous exécutons pour nos combats: ceintures tricotées, chaussettes, chandails, mitaines, gants, plastrons, etc.” One article began “On nous reproche parfois, peut-être – (tout arrive…) – de nos occuper de toilettes et de beauté à des heures douloureuses.” They rejected this of course, but devoted that article primarily to boosting marraines organisations. This was the only diversion towards any new developments that might have affected its readership. La Semaine Féminine in the Petit Parisien followed a similar line.
In an article in La Bataille entitled Pour les Petiots, Jeannine sought to distance her paper from that of the bourgeoisie, claiming that the Bataille was not a journal of fashion and that neither she nor her readers cared about what the aristocracy were wearing. However she thought that mothers might make an exception for their babies. “Vous avez toutes, j’en suis sûre, la coquetterie de vos enfants, et comme vous avez raison.” Moreover, La Bataille also ran occasional articles by Jehanne la Chaperonnière, which focused on clothes, and these were the only articles, other than the ones seeking marraines, which addressed themselves directly to women.
The only exception to this tendency of articles aimed at women to address them simply as domesticated housewives was the conservative L’Ouest-Éclair, in which Marthe Dupuy offered several articles calling for greater rights for women in recognition of what they had achieved during the war. However, these articles stopped appearing after 1916, and the newspaper brought in a regular column entitled “Pour les Ménagères” which represented its only content written either by, or explicitly for, women.
From the material given, it would be easy to think that the debate over the behaviour of women and their place in society was omnipresent in French discourse. Instead, it was largely peripheral. Newspapers only occasionally dealt with women; the reports by the regional committees assigned to oversee the administration of the war were virtually silent on the issue. Paul Cambon barely mentioned women in his wartime correspondence, except for an occasional comment about the malign influence of the Tsar’s wife upon her husband, which was, he sadly noted, a far from unprecedented state of affairs. As Thébaud notes, “[a]fter the arms fell silent, tens of thousands of books were written in the hope of understanding the extraordinary events just past. [...] In all this post-war writing, however, there was little discussion of women.” When journalists did address women, it was often as light relief from the serious issues of the war. In La Petite Gironde, Berthelot noted that proposals to grant female suffrage included restricting voting to women aged over thirty. “Mais, candidates parlementaires, il n’y a pas une jeune femme qui, pour la plaisir d’aller faire queue à la porte de la mairie pendant une heure, consentira à reconnaître qu’elle a passé la trentaine!” For Jules Véran in L’Eclair du Midi, women were a regular source of lighthearted material. When it was announced that no dress would measure more than four and a half metres, he commented
Ne vous effrayez pas, mesdames, on ne songe pas à vous imposer un uniforme national. Vous pourrez continuer, l’hiver prochain, à vous habiller comme vous voudrez et ce sera toujours charmant, j’en suis sûr. Mais… oh! ne tremblez pas! …
When the clocks changed, he observed: “Nous connaissons déjà le mois où les femmes parlaient le moins, qui est le mois de février, parce qu’il n’a que 28 jours. Nous saurons maintenant quel est le jour où elles auront le plus parlé…”
Zette, writing in Hors d’œuvre, commented on a judge who was faced with 55 cases of defamation or slander, all between women. The judge inquired if the Union Sacrée applied to women. “Et faut-il que les femmes se gourment entre elles pendant que leurs maris unissent leurs forces contre l’ennemi commun?” Zette argued that the reason for this was “La femme a des nerfs: en temps de paix, elle passait ses nerfs sur son mari, et la chose allait rarement devant les tribunaux de répression. Depuis la guerre, la femme, plus nerveuse et plus justement nerveuse, est obligée de passer ses nerfs sur ses amies et voisines.” The way to deal with it was to employ the methods of schoolmasters as if the women were children. The ambulancier, Germain Balard, explained moral dislocation arising from the war as due to women being “portées naturellement par leurs instincts”.
Even serious issues like the strikes of the midinettes and then the couturiers in 1917 were treated as amusing diversions by L’Eclair du Midi. L’Ouest-Éclair headlined news stories on the strikes alongside little cartoons, presumably to indicate the essential frivolousness of the topic. The next occasion on which similar cartoons were featured was the first instalment of “Pour les Ménagères”, a very traditionalist column offering practical advice to housewives.
The most noticeable constant in these comments is the traditional idea of women that informs them. The apparent resistance to significant modification of pre-war attitudes to gender relations appears repeatedly. In the French army, a regimental order of the day in August 1916 commanded soldiers not to use language that might offend the sensibilities of women employed by the regiment, while Véran joked about General Pershing’s admonition to the American troops never to tell anything confidential to a woman. L’Eclair du Midi reprinted advice given by the Turin section of the General Union of Professors on how women should behave during war, claiming it was very wise. This advice was wholly consistent with an eternal idea of womanhood: do not gossip, do not be swayed by alarmists, do not overspend, think about loved ones, do not complain, make yourself useful, admire the soldiers, be patient, and suffer stoically.
The irrationality and sentimentality of women continued to be primary themes when the actions of females were examined. According to La Dépêche, the women who had been granted the vote in the US prior to the 1916 election were expected to vote for Wilson because “les femmes inclinant à approuver celui-ci d’avoir évité la guerre aux Etats-Unis, même au prix d’humiliations nationales.” Ferri-Pisani wrote in his book on the United States how Wilson had “seduced” women into voting for him by “le romanesque de sa vie privée et son sobriquet de ‘great lover’”. The discussion in L’Ouest-Éclair on how the female voters of Illinois would vote in the 1916 election was predicated on a belief in the innate pacifism and sentimentality of women. “Logiquement, les quatre millions de suffrages féminins devraient aller au candidat socialiste [because he was the pacifist candidate]. Mais il faut compter – et largement – avec le sentiment.” As mentioned earlier, the emotional behaviour of the female US deputy who voted against the war was remarked upon negatively and indicatively.
In La Dépêche, Pierre Mille argued
Cette manière de raisonner portait surtout sur les femmes – on avait institué des parlottes des femmes. Quand les femmes se mêlent de généraliser, certaines le font avec une angélique et terrible esprit de simplification. Celles-là sont mal capable d’analyser les mots et voir ce qu’il y a dessous. Elles les prennent en bloc et logomachisent avec sentiment.
In his letters home, André Kahn pondered on the possible reasons for the conversion to Catholicism of a woman called Marcelle. “Est-ce le fruit d’un amour malheureux? Une crise de mysticisme hystérique?” That her decision could have been made through any sort of rational thinking does not appear to occur to him. Eventually he finds a satisfactory answer in the malign influence of a priest.
The conservative L’Eclair du Midi waged a consistent battle against the popularity of the cinema during the war, arguing that it could have a corrupting influence on weak minds. Jules Véran found a perfect illustration of this in a murder carried out by two young women, aged 16 and 20. Questioned by a psychiatrist, “ses bourreaux en jupons – la langue française, trop galante n’a pas de féminin pour ce mot…” revealed that they were “toquées sur lesquelles avaient agi fâcheusement les films policiers donnés par les cinémas.”
In La Petite Gironde, Berthelot responded to a paper offered to l’Académie des sciences by the prominent work scientist Jules Amar.
La conclusion de M. Amar est logique, les désordres physiologiques et moraux dont la société portera un jour le poids viendront de l’utilisation défecteuse des aptitudes chez la femme. Il est donc nécessaire de classer les femmes d’après leurs aptitudes physiologiques et psychologiques, et d’ecarter de leur travail toutes circonstances où l’effort et l’émotion ont chance d’être fréquents.
Berthelot pointed out the difficulty of removing women from all areas where emotion and effort are required. Amar himself argued that
Il n’y a pas, entre l’homme et la femme, une difference de degré intellectuel, de puissance cérébrale, de quantité d’énergie psychique; c’est tout simplement une question de qualité: les modalités du travail cérébral ne sont pas identiques. Ici, pour la femme, l’ordre sensitif l’emporte; il s’est imposé par l’habitude et l’hérédité. Là – pour l’homme – c’est, au contraire, l’ordre abstrait de la raison et de la pensée; en vertu de cette abstraction même, il s’établit une indépendance relative des fonctions motrices à l’égard des actions extérieures, et c’est ce que traduit le mot volonté. [...] Pour en revenir au cerveau humain, il semble difficile de tirer un enseignement quelconque de son poids, de ses replis, de son architectonique. L’examen de cet organe n’a permis de rien conclure, non plus, quant à la race; il a le même poids moyen chez les Australiens, Indiens, Chinois, Japonais et Malais que chez les Européens. Celui des nègres est, toutefois, moins massif et moins dense. Mais aucun rapport réel entre la quantité et la qualité, entre les facteurs mécaniques et les facteurs psychiques. Les races, comme les individus, comme les deux sexes, ne présentent aucun indice cérébral visible de leur inégalité intellectuelle.
Amar’s argument is very significant. Not only is he restating the conventional idea that men have the capacity for abstract thought while women rely on instinct, but he is giving it the weight of scientific fact. Moreover, by arguing that the difference between men’s and women’s intellect cannot be measured by the quantity of psychic energy, or by any visible indicator from the brain, Amar effectively renders his judgement unfalsifiable. If only outward manifestations of intellect can illuminate questions of intellectual equality or inequality, then it becomes still harder to combat preconceptions. Amar’s argument also provides another example of the frequent comparison of relations between the sexes to relations between the races.
Paul Cambon, talking about bombing raids in London, praised French women for being calmer in a crisis than their English equivalents, but nevertheless noted that there was a tendency amongst women to become emotional for no good reason. The usual calmness of English women by contrast was dismissed as intellectual inertia.
Il est curieux de constater combien les femmes anglaises sont inférieures aux nôtres au point de vue de la résistance morale et du sang-froid. Chez nous on crie pour des riens, mais on se calme quand la situation devient grave. Ici la silence et la tranquilité ordinaires des femmes ne sont qu’une signe d’inertie intellectuelle et aussitôt le danger déclaré elles perdent la tête. Nous avons un lot de dactylographes françaises qui conservent leur bonne humeur pendant la canonnade, et ne s’embarrassent pas de rentrer chez elles pendant la raid.
J.H. Rosny offered an equally backhanded compliment when advocating that women should be allowed to sit on juries and to become magistrates. Rosny looked at Balzac and other such writers and agreed that women were indeed capricious and irresponsible, but argued that men were not doing much of a job of providing impartial justice either.
Indeed, praise for women took as conservative a form as did criticism or humour. In July 1916, Raymond Poincaré rendered homage to the women of France:
A vous surtout, Mesdames, j’adresse les remerciements émus et respectueux du pays. Vous avez montré ce qu’il y a chez la femme française de flamme intérieure et d’élévation morale; vous avez prouvé une fois de plus qu’elle demeure à jamais la sûre gardienne de nos traditions et l’inspiratrice des grandes vertus populaires.
For Maurice Donnay “Si les hommes sont partis pour combattre l’envahisseur, aussitôt les femmes se mobilisent pour combattre la souffrance, la misère et la douleur.” Despite his jokes about women voting, Paul Berthelot actually advocated female suffrage because
La femme sera la bonne marraine qui apportera dans la lutte avec la misère, avec l’alcoolisme, avec la tuberculose, le sentiment très précis des réalités, la connaissance profonde des divers cas et espèces, et aussi cette délicatesse de doigté qui donne la confiance au malheureux et au malade, et en lui rendant l’espoir assure le succès. [...] La grande famille française, tout comme notre foyer, a besoin de gardiennes ferventes, agiles et clairvoyantes.
Elles sauront faire face à leurs responsabilités nouvelles. Le bulletin de vote, dans leurs petites mains, ne sera pas un joujou, mais comme dit la bon Coppée, ‘Un outil de travail, une arme de combat’ pour le bon combat contre toutes les déchéances, pour le mieux être sinon pour le bonheur de tous.
Camille Mauclair wrote in La Dépêche
Depuis trois ans et demi. La femme isolée a singulièrement progressé et mûri. Elle s’est grandie dans l’estime nationale par la façon energique et intelligente dont elle a accepté et rempli les tâches de l’homme absent. Cette conduit a plus fait pour la cause du féminisme que vingt ans de revendications théoriques. L’expérience est là. Il sembler juste, naturel et utile que la femme conquière l’électorat et bien d’autres privilèges sociaux. Mais s’il y faut applaudir, il n’en faut pas moins craindre que la force d’une telle évolution détourne de plus en plus la femme de son ancien rôle, redevenu primordial: l’amour et la fécondité au foyer.
Jeannine argued that women should be given the vote because “… l’activité qu’elles sauraient déployer dans les œuvres sociales et dans la confection de lois justes a profitable à tous.”
In this poster seeking subscribers for national bonds, the men are handing in large wads of notes, while the woman is scrabbling around for some change in her bag.
One excellent source for ideas about female abilities is the collection of depositions given to an extraparliamentary commission on the organisation of secondary education for girls that reported in 1918. Nearly 50 oral depositions were given from a wide variety of sources.
The commission began from a starting point of rejecting the “radical” idea of giving girls the same syllabus and same diploma as boys. It argued that such a “complete assimilation” would be contrary to the law of 1880. Not only that, but it would display a failure to recognise the real aptitudes of women. Not only would such a move go against the true interests of society, it was contrary to nature itself. To push all young women towards the baccalauréat would be to create a female intellectual proletariat.
Though most of those invited to give their opinions agreed with this, there were occasional dissenting voices. Mme. Cruppi, the President of the section lettres-sciences du Conseil national des femmes françaises, argued that schools for girls should offer the same qualifications as those for boys. M. Brunot, a professor from the Sorbonne, also argued strongly in favour of equal education for both sexes. He argued that it was necessary to open to women every career to which they might “legitimately aspire.” Unlike the vast majority of participants, Brunot mentioned the war as having played a part in his thinking, arguing that “ce qui n’était qu’un devoir avant la guerre pourra devenir une obligation.” However, he was not arguing that the war had opened his eyes to the qualities of women, but was taking the common line that women’s efforts during the conflict demanded a reward.
M. Bernès, a member of the Conseil supérieur de l’instruction publique, was slightly more equivocal, arguing in favour of offering girls a programme of studies in secondary schools equivalent to that of boys, but claiming that he did not envisage many girls taking the baccalauréat.
The majority of the responses emphasised traditional feminine roles though. The Inspector-General of l’instruction publique, M. Cahen, argued that the aim of the school was to create good republican mothers, evoking the motto of the École de Sèvres: Virgines futuras virorum matres respublica docet. He argued that a similar educational workload to boys would be too demanding, intellectually and physically for women, and that space needed to be reserved for teaching them women’s work.
On n’en doit pas conclure que l’éducation des jeunes filles doive ressembler tout à fait à celle des garçons. La préparation simultanée de deux ou trois examens, diplôme, brevet supérieur, baccalauréat, place les jeunes filles dans des conditions très fâcheuses au point de vue de l’hygiène intellectuelle et physique. [...] Une place doit être réservée aux travaux féminins, en particulier à la couture et à tous les arts qui s’y rattachent.
M. Darlu, an inspector general of public education, also claimed that the two sexes had different needs in education, more technical for boys and more general for girls. Mlle. Milliard, a member of the Conseil supérieur de l’instruction publique, argued that, from the ages of 7 to 11, girls should be taught to develop their moral and social side, and learn about hygiene and good housekeeping: “c’est à l’âge où la jeune fille est la plus malléable qu’il faut accentuer le caractère féminin de l’enseignement qu’elle reçoit.” From 11 onwards, different programmes should be available for girls who wished to take the baccalauréat in order to enter certain careers, for others who wanted to work in commerce or industry, while a third programme should be available for girls who wanted to look after children, the sick and so on.
One of the members of the commission, Senator Lintilhac, wondered what was wrong with allowing any girl who was interested to take the baccalauréat as “il y voit l’avantage de diminuer certains paresses et d’éveiller certains curiosités.” Milliard responded by arguing that many young girls are badly informed about the possibilities offered to them by possessing the diploma and “elle craint le snobisme et l’inutilité social d’efforts qui pourraient être mieux employés à l’acquisition d’une culture différente et tout aussi bonne que celle du baccalauréat.”
Mme. Suran-Mabire, a lycée professeur in Marseille and also a vice-President of the Fédération nationale des professeurs de lycée et du personnel de l’enseignement secondaire féminin, suggested a regional dimension by noting that in the small towns girls were happy with the diploma and fearful of studying Latin and the baccalauréat, while in Paris and the big cities they demanded the baccalauréat.
Various women working as lycée professeurs were invited to give their opinions, and they offered a wide range of views. Mlle. Couvreur argued that the majority of the female population in lycées should not be educated “dans le sens de l’éducation masculine,” while Mlle. Dugard argued that “Il est indispensable de mettre les jeunes filles en état d’assurer leur existence et, le cas échéant, celle de leur famille; il faut les préparer aux fonctions que les hommes ne peuvent ou ne veulent plus remplir.”
Mlle. Amieux, a lycée directrice, suggested that boys should be taught maths and physics while “les futures mères aurant plutôt besoin des sciences naturelles”, but Mlle. Picot, a professeur, disagreed, claiming that she had encountered plenty of female pupils who had a taste for maths, which they could do as well as their brothers and which made them no less charming as wives or excellent as mothers. Despite the clear differences, though, the underlying theme remains that the primary aim of education for girls should be to prepare them for matrimony and maternity, and that if they were forced to make a living it would only be in work for which men were no longer available. They also echo the ideas transmitted in the speeches of the institutrices studied by Margadant, as discussed above.
M. Goy, a senator on the commission, offered the most traditional response, arguing that the exigencies of war had driven women to take the place of men in many situations: “n’y aura t-elle pas contracté des idées d’indépendance qui relâcheront les liens de la famille qui lui feront oublier le devoir le plus sacré, celui de la maternité”. This tendency for women to forget their primary function needed to be halted, or else natality would drop. Additionally, Goy argued that women were not of the required intelligence. They lacked intellectual power, and it was essential not to encumber France’s faculties with “étudiants médiocres, incapables d’augmenter le patrimoine intellectuel de la nation.”
Despite attitudes like this amongst members of the commission, a proposal to forbid girls from taking the baccalauréat was generally opposed. “On opposa des raisons de droit constitutionnel, on invoqua le libéralisme traditionnel de l’Université et l’utilité de laisser le choix aux jeunes filles entre un examen conçu pour elles et le baccalauréat des garçons;” Instead, the commission decided that the education given to girls by the current secondary system was “bonne dans l’ensemble”, and that they should seek to retain it “dans ses grandes lignes”.
The views of the commission as to educational priorities were shared elsewhere. The subcommittee of economic action in the Corrèze also argued that it was necessary to urgently organise “enseignement ménager” in the region.
Ayons moins de femmes savants dont les connaissances sont souvent superflues et efforcons nous de possèder dans nos jeunes filles des legions de bonnes ménagères n’ayant pas peur de la besogne, aptes à tirer profit pour le mieux de tout ce dont elles pourront disposer et possédant déjà au sortir de l’école beaucoup de notions sur leurs futures devoirs d’épouses et de mères.
In their Précis de physique, designed for the secondary education of girls, Michel Chassagny and G. Labarre wrote of how they had sought to exclude abstract reasoning and purely mathematical developments. Instead, “[n]ous avons constamment donné pour base aux différentes théories des expériences aussi simples que démonstratives…”
These educational ideas were not restricted to France’s metropole. Guidelines set out for the teaching of girls in French Indochina advised that they should have the same programmes of teaching as the boys, but only half as much time should be allocated to it, with the other half devoted to domestic education, “l’enseignement ménager”. In addition, it was advised that their regular lessons be taught so as to adapt to their domestic education. Young girls needed to be taught through practical demonstration and application what could be taught to boys more theoretically.
It was concluded that it was necessary to teach young girls
l’hygiène, la morale et la politesse en insistant suivant l’âge des élèves sur les devoirs de la fille, de la sœur aînée, de la femme, de la mère, de la maîtresse de maison et en ne négligeant aucune occasion de redresser les notions grossières (préjugés ou superstitions) qui obscurcissent si souvent les cerveaux féminins;
It was noted that the principles of morality were not really very different between young Indochinese girls and their French equivalents. The ultimate aim of the education of these girls was to prepare them to become homemakers, and it was claimed that their aim was to ensure each one of their students became “une maîtresse de maison.”
Advertisements remained particularly traditional, with women depicted as housewives, nurses, or consumers of medical products, or else as potential customers for the latest fashion. Men appeared as tradesmen, professionals or soldiers. One of many possible examples is a Globéol advert from 1917 where a male patient was depicted being attended to by a solicitous female nurse, who was supervised by two male doctors. An advert for Pilules Pink asserted that women were “êtres faibles” and often were hiding suffering behind their smile. Their blood was poor and they risked losing their “charme naturel”. Thus they needed to take the Pilules Pink. In an advert for Malt Kneipp, an alternative to coffee, a couple were receiving counselling for their relationship. The husband was “coléreux, jaloux”, the wife “nerveuse, emportée”. The counsellor advised them their problems were due to the coffee, but the characteristics of the spouses were entirely consistent with traditional stereotypes. An advert for a medical product claimed that, far from living the high life, “Trop souvent les mères et épouses commetent une erreur en se sacrifiant continuellement pour les autres.”
It was not just commercial imagery: official posters followed a similar line. A poster advertising a “Journée du Poilu” in the Val d’Oise in December 1915 depicted a young boy in military garb and his slightly older sister dressed as a nurse requesting money so their father could come home on leave.
In its first issue after the armistice, La Vague featured a large cartoon on its front page. Bright sunshine signified the new dawn, as did the dove with an olive branch. In the foreground a man destroyed a cannon with his hammer. Behind him a soldier comforted his wife and daughter. In the background a man ploughed the earth and factories belched smoke. It was an utterly traditional picture, in which gender roles were wholly unproblematic. Men worked and protected their families, women looked after their children. That the avowedly feminist Vague saw this as an ideal post-war scenario demonstrates how limited an impact the revolutionary behaviour of women during the war actually had.
An article entitled La libération de la femme by Arthur Lauba offers an excellent illustration and bears quoting at some length. He argued that women had been oppressed ignominiously, but that when the war called on them to replace men, they “ont remplacé parfaitement les hommes”.
Donc, assez d’égoïsme a fait souffrir la femme, aujourd’hui sa libération totale est devenue inéluctable; plus de femmes-servantes, des femmes-soeurs. Si nous voulons vraiment empêcher le retour des abominations actuelles, libérons la glorieuse femme, reine de la maternité, des tutelles odieuses qui trop longtemps l’asservirent au détriment des intérêts supérieures de la société.
However, the liberation of women did not involve their having a post-war role similar to men, despite their ability to replace men.
Aux hommes les labeurs de forces musculaires; à la femme, le labeur sacré de la féconde maternité, de la vie sacrée qui perpétue le genre humain. Si la femme n’est pas à nos côtés avec égalité de droits et de devoirs, la lutte contre l’alcoolisme, contre le militarisme et toutes les hideuses plaies sociales, sera vaine. Elle seule, mère sublime, pourra régénérer les hommes, car c’est elle qui peut déposer dans la coeur de l’enfant les premiers ferments de liberté et d’amour. Aujourd’hui, les tergiversations ne sont plus de mise, l’heure est aux actes qui, seuls, font poids, la femme doit être rendue à la liberté. Autant pour elle que pour nous, la dépendance économique doit être supprimée. Et la femme, rendue à la plénitude de ses moyens, sera la mère respectée qui, dégagée de tous les préjugés imbéciles qui entravent l’essor du genre humain, nous donnera des fils auxquels elle apprendra l’amour et inculquera la haine de tous les tyrans et la révolte contre toutes les oppressions.
Si nous voulons tuer la guerre, libérons définitivement la femme.
For Lauba the war had not undermined his belief in the complementary nature of men and women; it had reinforced it. The liberation that women had earned was the freedom to exercise their maternal role on a national scale.
As Margaret Darrow has argued, “What women did in the war or what was done to them by the war was explained – and explained away – as minor adaptations of a traditional feminine destiny.” Any changes in women’s behaviour due to the war were believed to be only temporary, with the expectation that a return to normality in other aspects of life would see a return to traditional behaviour. For Henri Drouin this even applied to sexuality when, writing in the 1920s, he excused lesbianism as a natural response to the absence of men in the aftermath of the war. He explained it as a transient phenomenon though, and illustrated it with the tale of a young woman who had taken a female lover. Guilty and anxious, she had consulted a doctor, who claimed that it was a natural response to her circumstances, and one that would pass. His conclusion was immediately supported by the woman declaring her love for him.
In passing, it is also notable that Italian military weakness has become proverbial.
The group’s full name was La Fédération Ouvrière des Mutilés et Reformés de Guerre, Veuves et Orphelins. Once again, the distinction is made on the dividing line of those who were victims of the war.
It should be noted however that this was not setting a trend that was to follow after the war. Illegitimate births per 100 between 1866-1875 were 7.4; between 1896-1905, 8.8; and between 1926-1935, 7.9.
Interestingly, this was portrayed not as a sign of public indifference to veterans, but administrative indifference. It was believed that if such abuses were indicated to the public then they would support the veterans.
In a more recent book, Roberts has suggested that “While the issue of female identity remained at the forefront of postwar concerns, the failure of liberal beliefs to make sense of the war changed the focus of this preoccupation. The fin-de-siècle New Woman gave way to the postwar Modern Woman, who came to represent not so much a threat to (a relatively stable) liberal culture as the full-blown crisis of liberal culture itself.” Mary Louise Roberts, Disruptive Acts: The New Woman in Fin-de-Siècle France. Chicago: University of Chicago Press (2002). By contrast, this thesis argues that traditional beliefs were successful in making sense of the war (at least as regards gender relations) and ensured that the Modern Woman was no more able to transcend those beliefs than the pre-war New Woman.
For other evidence of disputed gender relations before the war, see also Annelise Maugue, L’Identité Masculine en crise: Au tournant du siècle, 1871-1914, Paris: Editions Rivages (1987), Christopher Thompson “Un troisième sexe? Les bourgeoisies et la bicyclette dans la France fin de siècle.” in Mouvement Social 192 (2000) pp. 9-39.
My emphasis. By acknowledging that women had been forced into undertaking work in the factories, Isaac is evidently reducing the blame placed on women.
This pattern of men making speeches, and women’s role being restricted to singing patriotic songs, is seen again in other inauguration ceremonies.
It is noticeable that the male roles are distinctly unwarlike.
It is not specified who Marcelle is but she is clearly well known to Kahn and his correspondent, and seems to be a relative. | http://tiens76.wordpress.com/2009/01/03/world-war-1-and-gender-relations/ | 13 |
15 | Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. Monte Carlo methods are often used when simulating physical and mathematical systems. Because of their reliance on repeated computation and random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm.
The term Monte Carlo method was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory.
There is no single Monte Carlo method; instead, the term describes a large and widely-used class of approaches. However, these approaches tend to follow a particular pattern:
1. Define a domain of possible inputs.
2. Generate inputs randomly from the domain.
3. Perform a deterministic computation using the inputs.
4. Aggregate the results of the individual computations into the final result.
For example, the value of π can be approximated using a Monte Carlo method. Draw a square of unit area on the ground, then inscribe a circle within it. Now, scatter some small objects (for example, grains of rice or sand) throughout the square. If the objects are scattered uniformly, then the proportion of objects within the circle vs objects within the square should be approximately π/4, which is the ratio of the circle's area to the square's area. Thus, if we count the number of objects in the circle, multiply by four, and divide by the total number of objects in the square (including those in the circle), we get an approximation to π.
Notice how the π approximation follows the general pattern of Monte Carlo algorithms. First, we define a domain of inputs: in this case, it's the square which circumscribes our circle. Next, we generate inputs randomly (scatter individual grains within the square), then perform a computation on each input (test whether it falls within the circle). At the end, we aggregate the results into our final result, the approximation of π. Note, also, two other common properties of Monte Carlo methods: the computation's reliance on good random numbers, and its slow convergence to a better approximation as more data points are sampled. If grains are purposefully dropped into only, for example, the center of the circle, they will not be uniformly distributed, and so our approximation will be poor. An approximation will also be poor if only a few grains are randomly dropped into the whole square. Thus, the approximation of π will become more accurate both as the grains are dropped more uniformly and as more are dropped.
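A minimal Python sketch of the grain-scattering experiment just described might look as follows; the function name and the number of points are arbitrary choices for illustration.

```python
import random

def estimate_pi(n_points=1_000_000):
    """Scatter points uniformly in the unit square and count how many land
    in the inscribed circle (centre (0.5, 0.5), radius 0.5)."""
    inside = 0
    for _ in range(n_points):
        x, y = random.random(), random.random()
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    # The fraction inside approximates pi/4, so multiply by four.
    return 4 * inside / n_points

print(estimate_pi())  # slowly approaches 3.14159... as n_points grows
```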
Random methods of computation and experimentation (generally considered forms of stochastic simulation) can be arguably traced back to the earliest pioneers of probability theory (see, e.g., Buffon's needle, and the work on small samples by William Gosset), but are more specifically traced to the pre-electronic computing era. The general difference usually described about a Monte Carlo form of simulation is that it systematically "inverts" the typical mode of simulation, treating deterministic problems by first finding a probabilistic analog (see Simulated annealing). Previous methods of simulation and statistical sampling generally did the opposite: using simulation to test a previously understood deterministic problem. Though examples of an "inverted" approach do exist historically, they were not considered a general method until the popularity of the Monte Carlo method spread.
Perhaps the most famous early use was by Enrico Fermi in the 1930s, when he used a random method to calculate the properties of the newly discovered neutron. Monte Carlo methods were central to the simulations required for the Manhattan Project, though they were severely limited by the computational tools at the time. Therefore, it was only after electronic computers were first built (from 1945 on) that Monte Carlo methods began to be studied in depth. In the 1950s they were used at Los Alamos for early work relating to the development of the hydrogen bomb, and became popularized in the fields of physics, physical chemistry, and operations research. The Rand Corporation and the U.S. Air Force were two of the major organizations responsible for funding and disseminating information on Monte Carlo methods during this time, and they began to find a wide application in many different fields.
Uses of Monte Carlo methods require large amounts of random numbers, and it was their use that spurred the development of pseudorandom number generators, which were far quicker to use than the tables of random numbers which had been previously used for statistical sampling.
Monte Carlo methods in finance are often used to calculate the value of companies, to evaluate investments in projects at corporate level or to evaluate financial derivatives. The Monte Carlo method is intended for financial analysts who want to construct stochastic or probabilistic financial models as opposed to the traditional static and deterministic models.
Monte Carlo methods are very important in computational physics, physical chemistry, and related applied fields, and have diverse applications from complicated quantum chromodynamics calculations to designing heat shields and aerodynamic forms.
Monte Carlo methods have also proven efficient in solving coupled integral differential equations of radiation fields and energy transport, and thus these methods have been used in global illumination computations which produce photorealistic images of virtual 3D models, with applications in video games, architecture, design, computer generated films, special effects in cinema, business, economics and other fields.
Monte Carlo methods are useful in many areas of computational mathematics, where a lucky choice can find the correct result. A classic example is Rabin's algorithm for primality testing: for any n which is not prime, a random x has at least a 75% chance of proving that n is not prime. Hence, if n is not prime, but x says that it might be, we have observed at most a 1-in-4 event. If 10 different random x say that "n is probably prime" when it is not, we have observed a one-in-a-million event. In general, a Monte Carlo algorithm of this kind produces one answer with a guarantee (n is composite, and x proves it so) and another answer without a guarantee ("n is probably prime"), together with a bound on how often the unguaranteed answer is wrong: in this case, at most 25% of the time for each random x. See also Las Vegas algorithm for a related, but different, idea.
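A sketch of such a test in Python is below; it follows the standard Miller-Rabin construction rather than any particular presentation of Rabin's algorithm, and the number of rounds is an arbitrary choice.

```python
import random

def probably_prime(n, rounds=10):
    """Monte Carlo primality test (Miller-Rabin style).
    Returns False only when a witness proves n composite; returns True
    meaning 'probably prime', wrong for a composite n with probability
    at most (1/4) ** rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is certainly composite
    return True

print(probably_prime(2**61 - 1))  # a known Mersenne prime
```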
The opposite of Monte Carlo simulation might be considered deterministic modelling using single-point estimates. Each uncertain variable within a model is assigned a “best guess” estimate. Various combinations of each input variable are manually chosen (such as best case, worst case, and most likely case), and the results recorded for each so-called “what if” scenario. [citation: David Vose: “Risk Analysis, A Quantitative Guide,” Second Edition, p. 13, John Wiley & Sons, 2000.]
By contrast, Monte Carlo simulation considers random sampling of probability distribution functions as model inputs to produce hundreds or thousands of possible outcomes instead of a few discrete scenarios. The results provide probabilities of different outcomes occurring. [citation: Ibid, p. 16] For example, a comparison of a spreadsheet cost construction model run using traditional “what if” scenarios, and then run again with Monte Carlo simulation and Triangular probability distributions shows that the Monte Carlo analysis has a narrower range than the “what if” analysis. This is because the “what if” analysis gives equal weight to all scenarios. [citation: Ibid, p. 17, showing graph]
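As a rough illustration of the contrast, the sketch below replaces three single-point "best guess" cost estimates with triangular distributions and simulates the total many times; all the figures are invented for the example.

```python
import random

def simulate_total_cost(n_trials=100_000):
    """Each cost component is drawn from a triangular(low, high, mode)
    distribution instead of being fixed at a single best-guess value."""
    totals = []
    for _ in range(n_trials):
        labour    = random.triangular(80, 150, 100)  # low, high, mode
        materials = random.triangular(40, 90, 60)
        overhead  = random.triangular(10, 30, 15)
        totals.append(labour + materials + overhead)
    totals.sort()
    mean = sum(totals) / n_trials
    p5   = totals[int(0.05 * n_trials)]
    p95  = totals[int(0.95 * n_trials)]
    return mean, p5, p95

mean, p5, p95 = simulate_total_cost()
print(f"expected cost {mean:.1f}, 90% interval ({p5:.1f}, {p95:.1f})")
```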
Deterministic numerical integration becomes impractical as the number of dimensions grows, because the number of evaluation points required (for example, on a regular grid) increases exponentially with the dimension. Monte Carlo methods provide a way out of this exponential time-increase. As long as the function in question is reasonably well-behaved, its integral can be estimated by randomly selecting points in, say, 100-dimensional space and taking some kind of average of the function values at these points. By the law of large numbers the estimate converges to the true value, and its error shrinks in proportion to 1/√N, so quadrupling the number of sampled points will halve the error, regardless of the number of dimensions.
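A minimal Python sketch of this, assuming integration over the unit hypercube; the test integrand and sample count are arbitrary, and the exact value of the example integral is 100/3.

```python
import random

def mc_integrate(f, dim, n_samples=20_000):
    """Estimate the integral of f over the unit hypercube [0, 1]**dim
    as the average of f at uniformly random points (the cube has volume 1)."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(dim)]
        total += f(x)
    return total / n_samples

# Example: the integral of sum(x_i ** 2) over [0, 1]**100 equals 100/3.
estimate = mc_integrate(lambda x: sum(xi * xi for xi in x), dim=100)
print(estimate)  # close to 33.33, despite the 100 dimensions
```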
A refinement of this method is to somehow make the points random, but more likely to come from regions of high contribution to the integral than from regions of low contribution. In other words, the points should be drawn from a distribution similar in form to the integrand. Understandably, doing this precisely is just as difficult as solving the integral in the first place, but there are approximate methods available: from simply making up an integrable function thought to be similar, to one of the adaptive routines discussed in the topics listed below.
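One concrete form of this idea is importance sampling. The sketch below estimates the small tail probability P(X > 3) for a standard normal X by drawing from a distribution shifted toward the tail and reweighting each point; the threshold and shift are arbitrary choices for the example.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def tail_probability(n=200_000, threshold=3.0, shift=4.0):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from N(shift, 1),
    a proposal that puts most points where the integrand matters, and
    correcting each point with an importance weight."""
    total = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        if x > threshold:
            total += normal_pdf(x) / normal_pdf(x, mu=shift)
    return total / n

print(tail_probability())  # roughly 0.00135; naive sampling would need far more points
```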
A similar approach involves using low-discrepancy sequences instead—the quasi-Monte Carlo method. Quasi-Monte Carlo methods can often be more efficient at numerical integration because the sequence "fills" the area better in a sense and samples more of the most important points that can make the simulation converge to the desired solution more quickly.
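For illustration, a two-dimensional Halton sequence (bases 2 and 3) can replace the pseudorandom points in the earlier pi example; this is only a sketch of the idea, not a full quasi-Monte Carlo treatment.

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    value, denom = 0.0, base
    while n > 0:
        n, digit = divmod(n, base)
        value += digit / denom
        denom *= base
    return value

def halton_pi(n_points=10_000):
    """Estimate pi using low-discrepancy Halton points, which fill the
    unit square more evenly than pseudorandom points."""
    inside = 0
    for i in range(1, n_points + 1):
        x, y = van_der_corput(i, 2), van_der_corput(i, 3)
        if (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25:
            inside += 1
    return 4 * inside / n_points

print(halton_pi())
```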
Most Monte Carlo optimization methods are based on random walks. Essentially, the program will move a marker around in multi-dimensional space, tending to move in directions which lead to a lower function value, but sometimes moving against the gradient.
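A simulated-annealing-flavoured sketch of such a random walk in one dimension follows; the cooling schedule, step size and test function are all arbitrary choices.

```python
import math
import random

def random_walk_minimise(f, x0, steps=50_000, step_size=0.5, t0=1.0):
    """Move a marker by random steps, always accepting downhill moves and
    occasionally accepting uphill ones, with the tolerance for uphill moves
    shrinking as the 'temperature' t decreases."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # simple linear cooling schedule
        cand = x + random.uniform(-step_size, step_size)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
    return best_x, best_f

# A bumpy test function with many local minima.
bumpy = lambda x: (x - 2.0) ** 2 + 2.0 * math.sin(5.0 * x)
print(random_walk_minimise(bumpy, x0=-5.0))
```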
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.).
When analyzing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyze and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available.
The best-known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution. For details, see Mosegaard and Tarantola (1995), or Tarantola (2005).
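The core accept/reject step of a plain random-walk Metropolis sampler can be sketched as follows; this is a toy one-dimensional version, not the generalised algorithm discussed by Mosegaard and Tarantola, and the example log-posterior is invented.

```python
import math
import random

def metropolis(log_post, x0, n_samples=50_000, step=0.5):
    """Draw samples whose long-run distribution follows the (unnormalised)
    density exp(log_post), using a symmetric Gaussian proposal."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_samples):
        cand = x + random.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, posterior ratio).
        if lp_cand >= lp or random.random() < math.exp(lp_cand - lp):
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Toy example: a Gaussian prior centred at 0 times a likelihood centred at 1.5.
log_post = lambda x: -0.5 * x * x - 0.5 * (x - 1.5) ** 2
draws = metropolis(log_post, x0=0.0)
print(sum(draws) / len(draws))  # posterior mean, roughly 0.75
```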
Monte Carlo methods depend on a supply of numbers that are "random enough" for the purpose at hand. What this means depends on the application, but typically the numbers should pass a series of statistical tests. Testing that the numbers are uniformly distributed, or follow another desired distribution, when a large enough number of elements of the sequence are considered is one of the simplest and most common such tests.
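One of the simplest such checks is a chi-square test of uniformity; the sketch below bins samples from Python's built-in generator and computes the statistic by hand. The bin count is arbitrary, and the quoted critical value (roughly 16.9 for 9 degrees of freedom at the 5% level) is taken from standard tables.

```python
import random

def chi_square_uniformity(samples, n_bins=10):
    """Chi-square statistic for the hypothesis that the samples are
    uniformly distributed on [0, 1)."""
    counts = [0] * n_bins
    for s in samples:
        counts[min(int(s * n_bins), n_bins - 1)] += 1
    expected = len(samples) / n_bins
    return sum((c - expected) ** 2 / expected for c in counts)

stat = chi_square_uniformity([random.random() for _ in range(100_000)])
# With 10 bins there are 9 degrees of freedom; values far above ~16.9
# (the 95th percentile) would cast doubt on the generator.
print(stat)
```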
| http://www.reference.com/browse/wiki/Monte_Carlo_method | 13
16 | A correlation is a measure or degree of relationship between two variables. A set of data can be positively correlated, negatively correlated or not correlated at all. If one set of values tends to increase as the other increases, the correlation is called a positive correlation.
If one set of values tends to decrease as the other increases, the correlation is called a negative correlation.
If the change in values of one set does not affect the values of the other, then the variables are said to have "no correlation" or "zero correlation".
A causal relation between two events exists if the occurrence of the first causes the other. The first event is called the cause and the second event is called the effect. A correlation between two variables does not imply causation. On the other hand, if there is a causal relationship between two variables, they must be correlated.
A study shows that there is a negative correlation between a student's anxiety before a test and the student's score on the test. But we cannot say that the anxiety causes a lower score on the test; there could be other reasons—the student may not have studied well, for example. So the correlation here does not imply causation.
However, consider the positive correlation between the number of hours you spend studying for a test and the grade you get on the test. Here, there is causation as well; if you spend more time studying, it results in a higher grade.
One of the most commonly used measures of correlation is the Pearson product-moment correlation, or Pearson's correlation coefficient. It is computed using the formula

r = Σ (x_i − x̄)(y_i − ȳ) / √[ Σ (x_i − x̄)² · Σ (y_i − ȳ)² ]

where x̄ and ȳ are the means of the two sets of values.
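A small Python sketch that computes this formula directly; the hours/scores numbers are made-up illustration data echoing the studying example above.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sy = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5, 6]
test_scores   = [52, 58, 65, 70, 80, 84]
print(pearson_r(hours_studied, test_scores))  # close to +1: a strong positive correlation
```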
The value of Pearson's correlation coefficient varies from –1 to +1, where –1 indicates a perfect negative linear correlation and +1 indicates a perfect positive linear correlation. | http://hotmath.com/hotmath_help/topics/correlation-and-causal-relation.html | 13
22 | Induction is a specific form of reasoning in which the premises of an argument support a conclusion, but do not ensure it. The topic of induction is important in analytic philosophy for several reasons and is discussed in several philosophical sub-fields, including logic, epistemology, and philosophy of science. However, the most important philosophical interest in induction lies in the problem of whether induction can be "justified." This problem is often called "the problem of induction" and was discovered by the Scottish philosopher David Hume (1711-1776).
Therefore, it would be worthwhile to define what philosophers mean by "induction" and to distinguish it from other forms of reasoning. It would also be helpful to present Hume’s problem of induction, Nelson Goodman’s (1906-1998) new riddle of induction, and statistical as well as probabilistic inference as potential solutions to these problems.
The sort of induction that philosophers are interested in is known as enumerative induction. Enumerative induction (or simply induction) comes in two types, "strong" induction and "weak" induction.
Strong induction has the following form:
A1 is a B1.
A2 is a B2.
...
An is a Bn.
Therefore, all As are Bs.
An example of strong induction is that all ravens are black because each raven that has ever been observed has been black.
But notice that one need not make such a strong inference with induction because there are two types, the other being weak induction. Weak induction has the following form:
A1 is a B1.
A2 is a B2.
...
An is a Bn.
Therefore, the next A will be a B.
An example of weak induction is that because every raven that has ever been observed has been black, the next observed raven will be black.
Enumerative induction should not be confused with mathematical induction. While enumerative induction concerns matters of empirical fact, mathematical induction concerns matters of mathematical fact. Specifically, mathematical induction is what mathematicians use to make claims about an infinite set of mathematical objects. Mathematical induction is different from enumerative induction because mathematical induction guarantees the truth of its conclusions since it rests on what is called an “inductive definition” (sometimes called a “recursive definition”).
Inductive definitions define sets (usually infinite sets) of mathematical objects. They consist of a base clause specifying the basic elements of the set, one or more inductive clauses specifying how additional elements are generated from existing elements, and a final clause stipulating that all of the elements in the set are either basic or in the set because of one or more applications of the inductive clause or clauses (Barwise and Etchemendy 2000, 567). For example, the set of natural numbers (N) can be inductively defined as follows:
1. 0 is an element in N.
2. For any element x, if x is an element in N, then (x + 1) is an element in N.
3. Nothing else is an element in N unless it satisfies condition (1) or (2).
Thus, in this example, (1) is the base clause, (2) is the inductive clause, and (3) is the final clause. Now inductive definitions are helpful because, as mentioned before, mathematical inductions are infallible precisely because they rest on inductive definitions. Consider the following mathematical induction that proves the sum of the numbers between 0 and a natural number n (Sn) is such that Sn = ½n(n + 1), a result famously associated with the mathematician Carl Friedrich Gauss (1777-1855):
First, we know that S0 = 0 = ½(0)(0 + 1), so the formula holds for 0. Now assume Sm = ½m(m + 1) for some natural number m. Then, since Sm + 1 (the sum up to m + 1) is Sm + (m + 1), it follows that Sm + 1 = ½m(m + 1) + (m + 1). Furthermore, ½m(m + 1) + (m + 1) = ½m² + (3/2)m + 1 = ½(m + 1)(m + 2) = ½(m + 1)((m + 1) + 1), so the formula holds for m + 1 as well. The first step shows that 0 is in the set of numbers satisfying Sn = ½n(n + 1), and the second step shows that for any number satisfying Sn = ½n(n + 1), its successor satisfies it too. By the inductive definition of N, then, every natural number satisfies Sn = ½n(n + 1). Thus, Sn = ½n(n + 1) holds for all natural numbers.
Notice that the above mathematical induction is infallible because it rests on the inductive definition of N. However, unlike mathematical inductions, enumerative inductions are not infallible because they do not rest on inductive definitions.
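As an aside, the formula can also be checked computationally for finitely many cases, but such a check is itself only an enumerative confirmation; the inductive proof above is what covers every natural number at once. A minimal Python check:

```python
def sum_to(n):
    """Sum of the natural numbers from 0 to n by direct enumeration."""
    return sum(range(n + 1))

# Checks the Gauss formula for the first thousand cases only; unlike the
# proof by mathematical induction, this establishes nothing beyond them.
assert all(sum_to(n) == n * (n + 1) // 2 for n in range(1000))
print("formula holds for n = 0..999")
```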
Induction contrasts with two other important forms of reasoning: Deduction and abduction.
Deduction is a form of reasoning whereby the premises of the argument guarantee the conclusion. Or, more precisely, in a deductive argument, if the premises are true, then the conclusion is true. There are several forms of deduction, but the most basic one is modus ponens, which has the following form:
If A, then B.
A.
Therefore, B.
Deductions are unique because they guarantee the truth of their conclusions if the premises are true. Consider the following example of a deductive argument:
Either Tim runs track or he plays tennis.
Tim does not play tennis.
Therefore, Tim runs track.
There is no way that the conclusion of this argument can be false if its premises are true. Now consider the following inductive argument:
Every raven that has ever been observed has been black.
Therefore, all ravens are black.
This argument is deductively invalid because its premises can be true while its conclusion is false. For instance, some ravens could be brown although no one has seen them yet. Thus a feature of inductive arguments is that they are deductively invalid.
Abduction is a form of reasoning whereby an antecedent is inferred from its consequent. The form of abduction is below:
If A, then B.
B.
Therefore, A.
Notice that abduction is deductively invalid as well because the truth of the premises in an abductive argument does not guarantee the truth of their conclusions. For example, even if all dogs have legs, seeing legs does not imply that they belong to a dog.
Abduction is also distinct from induction, although both forms of reasoning are used amply in everyday as well as scientific reasoning. While both forms of reasoning do not guarantee the truth of their conclusions, scientists since Isaac Newton (1643-1727) have believed that induction is a stronger form of reasoning than abduction.
The problem of induction
David Hume questioned whether induction was a strong form of reasoning in his classic text, A Treatise of Human Nature. In this text, Hume argues that induction is an unjustified form of reasoning for the following reason. One believes inductions are good because nature is uniform in some deep respect. For instance, one induces that all ravens are black from a small sample of black ravens because he believes that there is a regularity of blackness among ravens, which is a particular uniformity in nature. However, why suppose there is a regularity of blackness among ravens? What justifies this assumption? Hume claims that one knows that nature is uniform either deductively or inductively. However, one admittedly cannot deduce this assumption and an attempt to induce the assumption only makes a justification of induction circular. Thus, induction is an unjustifiable form of reasoning. This is Hume's problem of induction.
Instead of becoming a skeptic about induction, Hume sought to explain how people make inductions, and considered this explanation to be as good a justification of induction as could be made. Hume claimed that one makes inductions because of habit. In other words, habit explains why one induces that all ravens are black from seeing nothing but black ravens beforehand.
The new riddle of induction
Nelson Goodman (1955) questioned Hume's solution to the problem of induction in his classic text Fact, Fiction, and Forecast. Although Goodman thought Hume was an extraordinary philosopher, he believed that Hume made one crucial mistake in identifying habit as what explains induction. The mistake is that people readily develop habits to make some inductions but not others, even though the same observations would support both. Goodman develops the following grue example to demonstrate his point:
Suppose that all observed emeralds have been green. Then we would readily induce that the next observed emerald would be green. But why green? Suppose "grue" is a term that applies to all observed green things or unobserved blue things. Then all observed emeralds have been grue as well. Yet none of us would induce that the next observed emerald would be blue even though there would be equivalent evidence for this induction.
Goodman anticipates the objection that since "grue" is defined in terms of green and blue, green and blue are prior and more fundamental categories than grue. However, Goodman responds by pointing out that the latter is an illusion because green and blue can be defined in terms of grue and another term "bleen," where something is bleen just in case it is observed and blue or unobserved and green. Then "green" can be defined as something observed and grue or unobserved and bleen, while "blue" can be defined as something observed and bleen or unobserved and grue. Thus the new riddle of induction is not about what justifies induction but about why people make the inductions they do, given that they have equal evidence for several incompatible inductions.
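The interdefinability Goodman appeals to can be made concrete with a small sketch (Python, illustrative only; the predicate names and the observed flag are stand-ins for Goodman's reference point, not anything from his text). Green and blue define grue and bleen, and grue and bleen define green and blue, by exactly the same pattern, so neither pair is formally prior to the other.

```python
# Illustrative only: "observed" stands in for Goodman's reference time.

def grue(is_green: bool, is_blue: bool, observed: bool) -> bool:
    return (observed and is_green) or (not observed and is_blue)

def bleen(is_green: bool, is_blue: bool, observed: bool) -> bool:
    return (observed and is_blue) or (not observed and is_green)

# The familiar colors can be recovered from grue and bleen by the same pattern.
def green(is_grue: bool, is_bleen: bool, observed: bool) -> bool:
    return (observed and is_grue) or (not observed and is_bleen)

def blue(is_grue: bool, is_bleen: bool, observed: bool) -> bool:
    return (observed and is_bleen) or (not observed and is_grue)

# Every combination of color and observation status round-trips exactly.
for g, b, o in [(True, False, True), (True, False, False),
                (False, True, True), (False, True, False)]:
    assert green(grue(g, b, o), bleen(g, b, o), o) == g
    assert blue(grue(g, b, o), bleen(g, b, o), o) == b
```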
Goodman’s solution to the new riddle of induction is that people make inductions that involve familiar terms like "green," instead of ones that involve unfamiliar terms like "grue," because familiar terms are more entrenched than unfamiliar terms, which just means that familiar terms have been used in more inductions in the past. Thus statements that incorporate entrenched terms are “projectible” and appropriate for use in inductive arguments.
Notice that Goodman's solution is somewhat unsatisfying. While he is correct that some terms are more entrenched than others, he provides no explanation for why unbalanced entrenchment exists. In order to finish Goodman's project, the philosopher Willard Van Orman Quine (1908-2000) theorizes that entrenched terms correspond to natural kinds.
Quine (1969) demonstrates his point with the help of a familiar puzzle from the philosopher Carl Hempel (1905-1997), known as "the ravens paradox:"
Suppose that observing several black ravens is evidence for the induction that all ravens are black. Then since the contrapositive of "All ravens are black" is "All non-black things are non-ravens," observing non-black things such as green leaves, brown basketballs, and white baseballs is also evidence for the induction that all ravens are black. But how can this be?
Quine (1969) argues that observing non-black things is not evidence for the induction that all ravens are black because non-black things do not form a natural kind and projectible terms only refer to natural kinds (e.g. "ravens" refers to ravens). Thus terms are projectible (and become entrenched) because they refer to natural kinds.
Even though this extended solution to the new riddle of induction sounds plausible, several of the terms that we use in natural language do not correspond to natural kinds, yet we still use them in inductions. A typical example from the philosophy of language is the term "game," first used by Ludwig Wittgenstein (1889-1951) to demonstrate what he called “family resemblances.”
Look at how competent English speakers use the term "game." Examples of games are Monopoly, card games, the Olympic games, war games, tic-tac-toe, and so forth. Now, what do all of these games have in common? Wittgenstein would say, “nothing,” or if there is something they all have in common, that feature is not what makes them games. So games resemble each other although they do not form a kind. Of course, even though games are not natural kinds, people make inductions with the term, "game." For example, since most Olympic games have been in industrialized cities in the recent past, most Olympic games in the near future should occur in industrialized cities.
Given the difficulty of solving the new riddle of induction, many philosophers have teamed up with mathematicians to investigate mathematical methods for handling induction. A prime method for handling induction mathematically is statistical inference, which is based on probabilistic reasoning.
Instead of asking whether all ravens are black because all observed ravens have been black, statisticians ask what is the probability that ravens are black given that an appropriate sample of ravens have been black. Here is an example of statistical reasoning:
Suppose that the average stem length out of a sample of 13 soybean plants is 21.3 cm with a standard deviation of 1.22 cm. Then the probability that the interval (20.6, 22.1) contains the average stem length for all soybean plants is .95 according to Student’s t distribution (Samuels and Witmer 2003, 189).
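The interval in this example can be reproduced in a few lines. The sketch below is illustrative only (Python with SciPy; the endpoints it prints may differ slightly from the figures quoted above because of rounding) and computes a 95 percent confidence interval for a mean from the sample size, sample mean, and sample standard deviation.

```python
import math
from scipy.stats import t

n, mean, sd = 13, 21.3, 1.22        # sample size, sample mean, sample standard deviation (cm)
se = sd / math.sqrt(n)              # standard error of the mean
t_crit = t.ppf(0.975, df=n - 1)     # two-sided 95% critical value on 12 degrees of freedom
lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% confidence interval for the mean stem length: ({lower:.1f}, {upper:.1f}) cm")
```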
Despite the appeal of statistical inference, since it rests on probabilistic reasoning, it is only as valid as probability theory is at handling inductive reasoning.
Bayesianism is the most influential interpretation of probability theory and is an equally influential framework for handling induction. Given new evidence, "Bayes' theorem" is used to evaluate how much the strength of a belief in a hypothesis should change.
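As a concrete illustration, the sketch below (Python, with made-up numbers that are not drawn from any particular Bayesian text) applies Bayes' theorem repeatedly: the posterior probability of a hypothesis is the prior times the likelihood of the evidence under the hypothesis, divided by the total probability of the evidence.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H), and P(E | not-H) via Bayes' theorem."""
    numerator = prior * p_e_given_h
    evidence = numerator + (1 - prior) * p_e_given_not_h   # total probability of E
    return numerator / evidence

# Hypothetical example: H = "all ravens are black" starts with a prior of 0.5.
# Observing another black raven is certain under H and has probability 0.9 otherwise.
belief = 0.5
for _ in range(10):                  # ten black ravens observed in a row
    belief = bayes_update(belief, 1.0, 0.9)
print(f"strength of belief after ten observations: {belief:.3f}")
```

Each observation raises the posterior a little, which is the sense in which the strength of a belief should change as evidence accumulates.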
There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or "prior probabilities"; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy – a generalization of the principle of indifference – and "transformation groups" are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
"Cox's theorem," which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic. Nevertheless, how well probabilistic inference handles Hume’s original problem of induction as well as Goodman’s new riddle of induction is still a matter debated in contemporary philosophy and presumably will be for years to come.
- Barwise, Jon and John Etchemendy. 2000. Language, Proof and Logic. Stanford: CSLI Publications.
- Goodman, Nelson. 1955. Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
- Hume, David. 2002. A Treatise of Human Nature (David F. and Mary J. Norton, eds.). Oxford: Oxford University Press.
- Quine, W.V.O. 1969. Ontological Relativity and Other Essays. New York: Columbia University Press.
- Samuels, Myra and Jeffery A. Witmer. 2003. Statistics for the Life Sciences. Upper Saddle River: Pearson Education.
- Wittgenstein, Ludwig. 2001. Philosophical Investigations (G.E.M. Anscombe, trans.). Oxford: Blackwell.
- Inductive Logic, Stanford Encyclopedia of Philosophy. Retrieved February 7, 2008.
- Deductive and Inductive Arguments, The Internet Encyclopedia of Philosophy. Retrieved February 7, 2008.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license to both the New World Encyclopedia contributors and the volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Induction_(philosophy) | 13 |
101 | In logic, an argument is a set of one or more declarative sentences (or "propositions") known as the premises along with another declarative sentence (or "proposition") known as the conclusion. A deductive argument asserts that the truth of the conclusion is a logical consequence of the premises; an inductive argument asserts that the truth of the conclusion is supported by the premises.
Each premise and the conclusion are only either true or false, not ambiguous. The sentences composing an argument are referred to as being either true or false, not as being valid or invalid; arguments are referred to as being valid or invalid, not as being true or false. Some authors refer to the premises and conclusion using the terms declarative sentence, statement, proposition, sentence, or even indicative utterance. The reason for the variety is concern about the ontological significance of the terms, proposition in particular. Whichever term is used, each premise and the conclusion must be capable of being true or false and nothing else: they are truthbearers.
Informal arguments are studied in informal logic, are presented in ordinary language and are intended for everyday discourse. Conversely, formal arguments are studied in formal logic (historically called symbolic logic, more commonly referred to as mathematical logic today) and are expressed in a formal language. Informal logic may be said to emphasize the study of argumentation, whereas formal logic emphasizes implication and inference. Informal arguments are sometimes implicit. That is, the logical structure (the relationship of claims, premises, warrants, relations of implication, and conclusion) is not always spelled out and immediately visible and must sometimes be made explicit by analysis.
A deductive argument is one which, if valid, has a conclusion that is entailed by its premises. In other words, the truth of the conclusion is a logical consequence of the premises--if the premises are true, then the conclusion must be true. It would be self-contradictory to assert the premises and deny the conclusion, because the negation of the conclusion is contradictory to the truth of the premises.
Arguments may be either valid or invalid. If an argument is valid, and its premises are true, the conclusion must be true: a valid argument cannot have true premises and a false conclusion.
The validity of an argument depends, however, not on the actual truth or falsity of its premises and conclusions, but solely on whether or not the argument has a valid logical form. The validity of an argument is not a guarantee of the truth of its conclusion. A valid argument may have false premises and a false conclusion.
Logic seeks to discover the valid forms, the forms that make arguments valid arguments. An argument form is valid if and only if all arguments of that form are valid. Since the validity of an argument depends on its form, an argument can be shown to be invalid by showing that its form is invalid, and this can be done by giving another argument of the same form that has true premises but a false conclusion. In informal logic this is called a counter argument.
The form of argument can be shown by the use of symbols. For each argument form, there is a corresponding statement form, called a corresponding conditional, and an argument form is valid if and only if its corresponding conditional is a logical truth. A statement form which is logically true is also said to be a valid statement form. A statement form is a logical truth if it is true under all interpretations. A statement form can be shown to be a logical truth either (a) by showing that it is a tautology or (b) by means of a proof procedure.
The corresponding conditional of a valid argument is a necessary truth (true in all possible worlds), and so we might say that the conclusion necessarily follows from the premises, or follows of logical necessity. The conclusion of a valid argument is not necessarily true; whether it is true depends on whether the premises are true. The conclusion of a valid argument need not be a necessary truth: if it were so, it would be so independently of the premises.
For example: Some Greeks are logicians, therefore some logicians are Greeks: Valid argument; it would be self-contradictory to admit that some Greeks are logicians but deny that some (any) logicians are Greeks.
All Greeks are human and all humans are mortal, therefore all Greeks are mortal: Valid argument; if the premises are true, the conclusion must be true.
Some Greeks are logicians and some logicians are tiresome, therefore some Greeks are tiresome. Invalid argument: the tiresome logicians might all be Romans!
Either we are all doomed or we are all saved; we are not all saved, therefore we are all doomed. Valid argument; the premises entail the conclusion. (Remember, that does not mean the conclusion has to be true; it is guaranteed only if the premises are true, and perhaps they are not: perhaps some people are saved and some are doomed, and perhaps some are neither saved nor doomed!)
Arguments can be invalid for a variety of reasons. There are well-established patterns of reasoning that render arguments that follow them invalid; these patterns are known as logical fallacies.
A sound argument is a valid argument with true premises. A sound argument, being both valid and having true premises, must have a true conclusion. Some authors (especially in earlier literature) use the term sound as synonymous with valid.
Inductive logic is the process of reasoning in which the premises of an argument are believed to support the conclusion but do not entail it. Induction is a form of reasoning that makes generalizations based on individual instances.
Mathematical induction should not be misconstrued as a form of inductive reasoning, which is considered non-rigorous in mathematics. (See Problem of induction.) In spite of the name, mathematical induction is a form of deductive reasoning and is fully rigorous.
An argument is cogent if and only if the truth of the argument's premises would render the truth of the conclusion probable (i.e., the argument is strong), and the argument's premises are, in fact, true. Cogency can be considered inductive logic's analogue to deductive logic's "soundness."
A fallacy is an invalid argument that appears valid, or a valid argument with disguised assumptions. First, the premises and the conclusion must be statements, capable of being true or false. Secondly, it must be asserted that the conclusion follows from the premises. In English the words therefore, so, because and hence typically separate the premises from the conclusion of an argument, but this is not necessarily so. Thus: Socrates is a man, all men are mortal, therefore Socrates is mortal is clearly an argument (a valid one at that), because it is clearly asserted that Socrates is mortal follows from the preceding statements. However, I was thirsty and therefore I drank is NOT an argument, despite its appearance. It is not being claimed that I drank is logically entailed by I was thirsty. The therefore in this sentence indicates for that reason, not it follows that.
Often an argument is invalid because there is a missing premise whose supply would make it valid. Speakers and writers will often leave out a strictly necessary premise in their reasoning if it is widely accepted and the writer does not wish to state the blindingly obvious. Example: Iron is a metal, therefore it will expand when heated. (Missing premise: all metals expand when heated.) On the other hand, a seemingly valid argument may be found to lack a premise – a 'hidden assumption' – which, if highlighted, can show a fault in reasoning. Example: A witness reasoned: Nobody came out the front door except the milkman, therefore the murderer must have left by the back door. (Hidden assumption: the milkman was not the murderer.)
Whereas formal arguments are static, such as one might find in a textbook or research article, argumentative dialogue is dynamic. It serves as a published record of justification for an assertion. Arguments can also be interactive, with the proposer and the interlocutor having a symmetrical relationship. The premises are discussed, as well the validity of the intermediate inferences.
Dialectic is controversy, that is, the exchange of arguments and counter-arguments respectively advocating propositions. The outcome of the exercise might not simply be the refutation of one of the relevant points of view, but a synthesis or combination of the opposing assertions, or at least a qualitative transformation in the direction of the dialogue.
Argumentation theory, (or argumentation) embraces the arts and sciences of civil debate, dialogue, conversation, and persuasion. It studies rules of inference, logic, and procedural rules in both artificial and real world settings. Argumentation is concerned primarily with reaching conclusions through logical reasoning, that is, claims based on premises.
Statements are put forward as arguments in all disciplines and all walks of life. Logic is concerned with what constitutes an argument and with the forms of valid arguments in all interpretations and hence in all disciplines, the subject matter being irrelevant. There are not different valid forms of argument in different subjects.
Arguments as they appear in science and mathematics (and other subjects) do not usually follow strict proof procedures; typically they are elliptical arguments (q.v.) and the rules of inference are implicit rather than explicit. An argument can loosely be said to be valid if it can be shown that, with the supply of the missing premises, it has a valid argument form demonstrable by an accepted proof procedure.
The basis of mathematical truth has been the subject of long debate. Frege in particular sought to demonstrate (see Gottlob Frege, The Foundations of Arithmetic, 1884, and Logicism in Philosophy of mathematics) that arithmetical truths can be derived from purely logical axioms and therefore are, in the end, logical truths. The project was developed by Russell and Whitehead in their Principia Mathematica. If an argument can be cast in the form of sentences in Symbolic Logic, then it can be tested by the application of accepted proof procedures. This has been carried out for Arithmetic using Peano axioms. Be that as it may, an argument in Mathematics, as in any other discipline, can be considered valid just in case it can be shown to be of a form such that it cannot have true premises and a false conclusion.
Legal arguments (or oral arguments) are spoken presentations to a judge or appellate court by a lawyer (or parties when representing themselves) of the legal reasons why they should prevail. Oral argument at the appellate level accompanies written briefs, which also advance the argument of each party in the legal dispute. A closing argument (or summation) is the concluding statement of each party's counsel (often called an attorney in the United States) reiterating the important arguments for the trier of fact, often the jury, in a court case. A closing argument occurs after the presentation of evidence.
A political argument is an instance of a logical argument applied to politics. Political arguments are used by academics, media pundits, candidates for political office and government officials. Political arguments are also used by citizens in ordinary interactions to comment about and understand political events.
More on Arguments:
Wesley C Salmon, Logic, Prentice-Hall, New Jersey 1963 (Library of Congress Catalog Card no. 63-10528)
More on Logic:
Aristotle, Prior and Posterior Analytics, ed. and trans. John Warrington, Dent: London (Everyman Library), 1964
Benson Mates, Elementary Logic, OUP, New York 1972 (Library of Congress Catalog Card no. 74-166004)
Elliott Mendelson, Introduction to Mathematical Logic, Van Nostrand Reinhold Company, New York 1964
More on Logic and Maths:
Gottlob Frege, 1884. Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 2nd ed. Blackwell. Also: Gottlob Frege, The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 1884, trans. Jacquette, Pearson Longman, 2007 | http://www.reference.com/browse/put+argument | 13 |
17 | This section enables students to make links across the resources on this website and consider the future of the continent. Students are encouraged to consider values of global citizenship, environmental sustainability and resource management. Students are provided with opportunities to think about the future of Antarctica in terms of its Treaty and other legislation, and whether resource exploitation should be permitted or expanded.
Within this topic students will be able to:
This topic links directly with the climate section. The introduction provides a broad overview a about climate change and its impacts on the Antarctic environment. There are also opportunities to learn about the latest research developments on climate change at the British Antarctic Survey.
Students can play a video clip from the British Antarctic Survey to learn about some of the key impacts on Antarctica. Students are able to see footage of the changes in the Antarctic environment. While the clip is playing, students answer some simple questions, enabling them to work on their listening skills.
a) Which part of Antarctica has witnessed the largest changes in temperature? Antarctic Peninsula.
b) What temperature increase has been measured in this area of Antarctica? 3°C
c) What changes has this temperature increase caused to the environment? break up of ice shelves, changes in coastlines, increased snow melt, retreat of glaciers
Students are able to learn more specific details about different impacts on the Antarctic environment, and therefore the global environment, including ice, wildlife and sea level change. This encourages students to think of the issue of climate change on a variety of scales. Students are provided with a basic overview of the topics with opportunities to learn more by following the links. Students are able to retrieve information from a variety of resources including video, photographs and media.
This section is divided into 4 stand alone parts. These can be dipped in and out of, divided between groups or worked through in their entirety. They are designed to give students a good understanding of a variety of issues concerning the impacts of climate change on Antarctica and its impacts worldwide. Students can also finish by considering their impact on the environment. This work links back to the climate section and the eco-tourism activities on carbon emissions.
Students gain skills by interpreting and drawing conclusions from GIS data and consider the advantages of using this method of data presentation, brainstorming and thinking about the issue of climate change on a variety of scales. There are lots of links which help to stretch more able pupils.
This section allows students to consider mineral exploitation in Antarctica. Currently mineral exploitation is banned until 2041, when the ban comes up for review under the Antarctic Treaty. Students gain a general overview in the introductory information and consider the future of mineral exploitation on the continent should economic, environmental and political conditions change. Any future debate may well pit economic against environmental arguments. This is an important debate, as energy and mineral resources worldwide are diminishing and the environmental consequences of exploitation elsewhere have been well documented. This provides a discussion of potentially conflicting opinions on resource exploitation.
Students consider the range of mineral wealth in Antarctica and their uses for humans. Students gain an appreciation that minerals (and therefore what we depend on minerals for) are finite.
Students learn about the regulation concerning mineral exploitation in Antarctica under the Antarctic Treaty including the Convention on the Regulation of Antarctic Mineral Resource Activities (CRAMRA) and the Madrid Protocol on the Environment. This work can be linked with the Antarctic Treaty resources on this website. Students gain understanding about how Antarctica is governed internationally where all players have a say and how global citizenship and global governance have an important role in the future of Antarctica.
This role play activity is designed for students to consider the economic, environmental and political issues surrounding the exploitation of minerals on Antarctica. Students are able to engage with differing/conflicting opinions about resource exploitation and environmental protection/degradation. Issues of conservation, stewardship, global governance and the processes involved in decision making are considered.
Students are assigned roles from the downloadable debate role cards and are encouraged to research relevant information using the links thoroughly. Students may prefer to work in groups to prepare the roles and for less able students more direction with the relevant links may be required. If conducting a debate is inappropriate, students could prepare a speech in role instead.
This work uses skills of research and interpretation of relevant information, group work, and producing a one-sided viewpoint while remaining aware of other views and the wider context. The debate can encourage all students to participate if well chaired, and discussion questions are provided to help guide the debate.
When completed, a summary discussion may be useful to summarise relevant issues/debates and to answer the debate question. What do students believe personally about mining in Antarctica and why?
This section aims to provide a good overview of the exploitation of fish within the Southern Ocean with up to date information and links to gain a more in depth information. This links with the work of CCAMLR (Commission for the Conservation of Antarctic Marine Living Resources) under the Antarctic Treaty which was set up to preserve and monitor the resource exploitation of the Southern Ocean. Students also gain an understanding of the current challenges in protecting such a large expanse of water in terms of the incidence of illegal, unregulated and unreported (IUU) fishing.
Students are introduced to the historical context, conservation and the fishing of krill. Students are given opportunities to learn more by following the links. Students/teachers may also wish to link back to other sections of this website e.g. Antarctic Treaty, Conservation (case study on the albatross) and Tourism (alien species) to gain relevant context and broaden understanding of the issues.
Students can learn about the management approaches and conservation measures in preserving and monitoring the Southern Ocean within the Convention Area. These issues provide students with current environmental debates and show the interaction between humans and the environment, as well as the problems associated with trying to enforce regulations. Illegal fishing highlights the tension between economics and the environment.
This is designed to enable students to practice skills of data handling, analysis and manipulation. Students are able to take an enquiry approach by identifying an appropriate hypothesis and making decisions on appropriate data presentation.
1. Students open illegal fishing toothfish data for the period 1996-2008. This data is the basis for this short investigation.
2. Having considered the data, students identify a suitable hypothesis and null hypothesis to perform a statistical test.
A suggestion could be:
Hypothesis suggestion: There is a negative correlation between IUU caught toothfish quantities and time.
Null hypothesis suggestion: There is no relationship between IUU caught toothfish quantities and time.
3. Students then decide on an appropriate graph to display this data. Mid-range candidates may display a bar graph, which is acceptable; however, a line graph would be more appropriate for continuous data.
4. Students then conduct a statistical analysis of the data in order to support or reject the hypothesis. This analysis is broken into three parts: a scatter graph, a line of best fit and Pearson's correlation coefficient. This work is particularly appropriate to stretch higher candidates. It may also be relevant in the teaching of statistical analysis prior to engaging in fieldwork and to highlight the importance of ICT within fieldwork and Geography more widely. The test could also be done using Spearman's Rank Correlation Coefficient on paper. The help sheet is provided if needed.
Having computed Pearson's correlation coefficient using Excel, students should have produced an answer of -0.73. Checking this against a table of critical values indicates a strong negative correlation.
Tip: Correlation varies between -1 (perfect negative correlation) and 1 (perfect positive correlation), where 0 (or close to 0) indicates no correlation at all.
5-6. Students may be able to deduce that this negative correlation (i.e. that IUU catches of toothfish have declined over time) may be due to conservation measures. However, this cannot be ascertained for certain, as there may be other reasons for the decline such as global warming, changes in legislation elsewhere or environmental degradation.
7. Students are asked why it is effective to use this technique in order to highlight the value of modern ICT methods in Geography. This technique allows complicated maths calculations of standard deviation to be done very quickly. It also creates meaningful values that are easier to use.
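For teachers who prefer scripting to a spreadsheet, the same correlation can be computed in a few lines. The sketch below is illustrative only: it uses Python with SciPy and hypothetical catch figures, which should be replaced with the actual CCAMLR data from the downloadable file.

```python
from scipy.stats import pearsonr

years = list(range(1996, 2009))                        # 1996-2008 inclusive
# Hypothetical IUU toothfish catches (tonnes) - substitute the real CCAMLR figures.
catches = [32000, 29000, 25000, 21000, 15000, 12000, 11000,
           10000, 9000, 6000, 4000, 3500, 3000]

r, p_value = pearsonr(years, catches)
print(f"Pearson's r = {r:.2f}, p = {p_value:.4f}")     # a negative r indicates decline over time
```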
This last topic aims to consolidate and make links with many of the topics/issues/debates previously considered in this module. The materials here bring together issues already discussed including tourism, mineral exploitation and overfishing. The exploitation of Antarctica's fauna by bioprospecting is also introduced. The introduction sets the scene by asking the question whether the treaty and its linked resolutions/agreements/conventions should be upheld in their current form or modified in the light of all these conflicting demands.
This activity provides a starting point encouraging discussion on the future of Antarctica. Why does it matter if Antarctica was developed like other regions of the world? This raises issues of global citizenship and stewardship as well as moral judgements about the sustainability of exploitation of natural resources for future generations.
Students can recap (or learn for the first time) issues considered in other areas of the website including the impacts of tourists, mining and fishing. There are links provided for students to gain more depth if needed. Bioprospecting is also considered including the regulations and the Antarctic Treaty, why the continent is of interest and what current work is being done. At the end of this section students are encouraged to consider whether they think Antarctica should be exploited in the future or maintained as it is. This section is designed to bring together all the different activities that threaten the sustainability of Antarctica to enable a holistic discussion.
1. Students can listen to two conflicting opinions on the future of Antarctica and fill in the downloadable chart. This sheet helps to consolidate opposing views on the future of Antarctica. The views pull together many of the different conflicts and potential uses for the continent, considering the economy versus the environment, stewardship, conservation and global governance. This sheet provides students with an opportunity to formulate their own opinions about the future of Antarctica.
2. Students plan and write an answer to an essay-style question. Students may wish to use the downloadable plan to help them. This provides an opportunity to bring together many of the topics in this module including science, conservation, resource management, future exploitation and the Antarctic Treaty.
3. Students can peer mark the essay using a mark scheme. This provides an opportunity for students to become familiar with the way essays are marked by exam boards and to critically assess their peers in order to improve their own writing style. | http://www.discoveringantarctica.org.uk/alevel_teachers_sustainability.html | 13 |
54 | Deductive reasoning, also deductive logic or logical deduction or, informally, "top-down" logic, is the process of reasoning from one or more general statements (premises) to reach a logically certain conclusion.
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic) in the following way: In deductive reasoning, a conclusion is reached from general statements, but in inductive reasoning the conclusion is reached from specific examples. (Note, however, that the inductive reasoning mentioned here is not the same as induction used in mathematical proofs - mathematical induction is actually a form of deductive reasoning.)
Simple Example
An example of a deductive argument:
- All men are mortal.
- Aristotle is a man.
- Therefore, Aristotle is mortal.
The first premise states that all objects classified as "men" have the attribute "mortal". The second premise states that "Aristotle" is classified as a "man" – a member of the set "men". The conclusion then states that "Aristotle" must be "mortal" because he inherits this attribute from his classification as a "man".
Law of Detachment
The law of detachment (also known as affirming the antecedent and Modus ponens) is the first form of deductive reasoning. A single conditional statement is made, and a hypothesis (P) is stated. The conclusion (Q) is then deduced from the statement and the hypothesis. The most basic form is listed below:
- P→Q (conditional statement)
- P (hypothesis stated)
- Q (conclusion deduced)
In deductive reasoning, we can conclude Q from P by using the law of detachment. However, if the conclusion (Q) is given instead of the hypothesis (P) then there is no valid conclusion.
The following is an example of an argument using the law of detachment in the form of an if-then statement:
- If an angle A>90°, then A is an obtuse angle.
- The measurement of angle A is greater than 90°.
- A is an obtuse angle.
Since the measurement of angle A is greater than 90°, we can deduce that A is an obtuse angle.
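The sense in which the law of detachment is valid - no situation makes its premises true and its conclusion false - can be checked mechanically. The short sketch below (Python, illustrative only) enumerates every assignment of truth values to P and Q and looks for a counterexample to modus ponens.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: 'if p then q' is false only when p is true and q is false."""
    return (not p) or q

# Modus ponens: premises are (P -> Q) and P; the conclusion is Q.
counterexamples = [(p, q)
                   for p, q in product([True, False], repeat=2)
                   if implies(p, q) and p and not q]   # premises true, conclusion false
print("valid" if not counterexamples else f"invalid: {counterexamples}")
```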
Law of Syllogism
The law of syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. Here is the general form:
- P→Q
- Q→R
- Therefore, P→R.
The following is an example:
- If Larry is sick, then he will be absent from school.
- If Larry is absent, then he will miss his classwork.
- If Larry is sick, then he will miss his classwork.
We deduced the final statement by combining the hypothesis of the first statement with the conclusion of the second statement. We also allow that this could be a false statement. This is an example of the Transitive Property.
Law of Contrapositive
- P→Q
- ~Q
- Therefore, we can conclude ~P.
The following are examples:
- If it is raining, then there are clouds in the sky.
- There are no clouds in the sky.
- Thus, it is not raining.
Deductive Logic: Validity and Soundness
An argument is valid if it is impossible for its premises to be true while its conclusion is false. In other words, the conclusion must be true if the premises are true. An argument can be valid even though the premises are false.
An argument is sound if it is valid and the premises are true.
It is possible to have a deductive argument that is logically valid but not sound. Trick arguments are based on this.
The following is an example of an argument that is valid, but not sound:
- Everyone who eats carrots is a quarterback.
- John eats carrots.
- Therefore, John is a quarterback.
The example's first premise is false – there are people who eat carrots and are not quarterbacks – but the conclusion would necessarily be true if the premises were true (i.e. it is impossible for the premises to be true and the conclusion false). Therefore the argument is valid, but not sound. False generalizations, such as "everyone who eats carrots is a quarterback," are often used to make unsound arguments: not everyone who eats carrots is a quarterback, which is the flaw in such arguments.
In this example, the first statement uses categorical reasoning, saying that all carrot-eaters are definitely quarterbacks. This theory of deductive reasoning – also known as term logic – was developed by Aristotle, but was superseded by propositional (sentential) logic and predicate logic.
Deductive reasoning can be contrasted with inductive reasoning, in regards to validity and soundness. In cases of inductive reasoning, even though the premises are true and the argument is "valid", it is possible for the conclusion to be false (determined to be false with a counterexample or other means).
Hume's Skepticism
Philosopher David Hume presented grounds to doubt deduction by questioning induction. Hume's problem of induction starts by suggesting that the use of even the simplest forms of induction simply cannot be justified by inductive reasoning itself. Moreover, induction cannot be justified by deduction either. Therefore, induction cannot be justified rationally. Consequently, if induction is not yet justified, then deduction seems to be left to rationally justify itself – an objectionable conclusion to Hume.
Deductive reasoning and Education
Deductive reasoning is generally thought of as a skill that develops without any formal teaching or training. As a result of this belief, deductive reasoning skills are not taught in secondary schools, where students are expected to use reasoning more often and at a higher level. It is in high school, for example, that students have an abrupt introduction to mathematical proofs – which rely heavily on deductive reasoning.
See also
- Deduction & Induction, Research Methods Knowledge Base
- Sternberg, R. J. (2009). Cognitive Psychology. Belmont, CA: Wadsworth. p. 578. ISBN 978-0-495-50629-4.
- Guide to Logic
- Stylianides, G. J.; Stylianides, A. J. (2008). Mathematical Thinking and Learning 10 (2): 103–133. doi:10.1080/10986060701854425.
Further reading
- Vincent F. Hendricks, Thought 2 Talk: A Crash Course in Reflection and Expression, New York: Automatic Press / VIP, 2005, ISBN 87-991013-7-8
- Philip Johnson-Laird, Ruth M. J. Byrne, Deduction, Psychology Press 1991, ISBN 978-0-86377-149-1
- Zarefsky, David, Argumentation: The Study of Effective Reasoning Parts I and II, The Teaching Company 2002
- Deductive reasoning entry in the Internet Encyclopedia of Philosophy
- Epistematica:Knowledge as a Service | http://en.wikipedia.org/wiki/Deductive | 13 |
16 | Focus: Students use in-class and Internet-based simulations of the spread of an infectious disease through a population to discover the phenomenon of herd immunity.
Major Concepts: The re-emergence of some diseases can be explained by the failure to immunize enough individuals, which results in a greater proportion of susceptible individuals in a population and an increased reservoir of the infectious agent. Increases in the number of individuals with compromised immune systems (due to the stress of famine, war, crowding, or disease) also explain increases in the incidence of emerging and re-emerging infectious diseases.
Objectives: After completing this activity, students will
- be able to explain how immunizing a significant proportion of a population against a disease prevents epidemics of that disease (herd immunity),
- be able to list factors that affect the proportion of a population that must be immunized to prevent epidemics, and
- understand how large-scale vaccination programs help control infectious diseases.
Prerequisite Knowledge: Students should be familiar with how immunization protects individuals from infectious diseases.
Basic Science-Public Health Connection: This activity introduces students to modeling as a scientific exercise. Students learn how models based on observations of disease transmission can be used to predict the likelihood of epidemics and to help public health officers recommend policies to protect the public from infectious diseases.
Global vaccination strategies are a cost-effective means of controlling many infectious diseases. Because immunized people do not develop diseases that must be treated with antimicrobial drugs, opportunities for pathogens to evolve and disseminate drug resistance genes are reduced. Thus, mass immunization reduces the need to develop newer and more expensive drugs.
As long as a disease remains endemic in some parts of the world, however, vaccination programs must be maintained everywhere, because an infected individual can travel anywhere in the world within 24 hours. Once global vaccination programs eliminate the infectious agent (as in the case of the smallpox virus), vaccination is no longer necessary and the expense of those programs is also eliminated. It is estimated that the United States has saved $17 billion so far as a result of the eradication of smallpox (which cost, according to the World Health Organization, $313 million across a 10-year period).
Lapses in vaccination programs explain the re-emergence of some infectious diseases. For example, the diphtheria outbreak in Russia in the early 1990s may have been due to lapses in vaccination programs associated with the breakup of the Soviet Union. Inadequate vaccines and failure to obtain required "booster shots" also explain some disease re-emergence. The dramatic increase in measles cases in the United States during 1989-1991 was likely caused by failure to give a second dose of the vaccine to school-age children. The American Academy of Pediatrics now recommends that all children receive a second dose of the measles vaccine at either age 4-6 or 11-12.
This activity and Activity 3, Superbugs: An Evolving Concern, both provide explanations for the re-emergence of some infectious diseases. Activity 3 explained that some re-emerging diseases are due to the evolution of antibiotic resistance among pathogens. Activity 4, Protecting the Herd, introduces students to the idea that the re-emergence of other infectious diseases can be explained by a failure to immunize a sufficient proportion of the population. On the first day of the activity, students learn that epidemics can be prevented by immunizing part of the population, leading to herd immunity. The concept of herd immunity is elaborated in the optional, second day of the activity. Here, students learn that the threshold level of immunity required to establish herd immunity (and thus prevent epidemics) varies depending on the transmissibility of the disease, the length of the infectious period, the population density, and other factors.
You will need to prepare the following materials before conducting this activity:
If you do Day 2 of the activity, you will also need the following materials:
Note to teachers: If you do not have enough computers equipped with Internet access, you will not be able to conduct the optional Day 2 of this activity.
1. Introduce the activity by distributing one copy of Master 4.1, Measles Outbreak at Western High, to each student and asking the students to read it.
The scenario described on Measles Outbreak is fictitious, but is based on an outbreak of measles that occurred in Washington State in 1996.
An alternate way to introduce the activity is to assign students to make a list of the childhood diseases that they, their parents (or someone from their parents' generation), and their grandparents (or someone from their grandparents' generation) had. Explain that "childhood diseases" means diseases that people usually have just once and do not get again (for example, chicken pox). Explain that you do not mean diseases like the flu, strep throat, and colds. On the day you wish to begin the activity, ask students to name some of these diseases, then ask them to count the number of different diseases each generation in their family had. Total these numbers across all of the students in the class and ask students to suggest why (in general) their parents and grandparents had more diseases than they did. Students likely will suggest (correctly) that vaccination against many diseases is now available.
|This is an opportunity to point out that research in microbiology and related disciplines in the last 50 years has led to the development of many vaccines in addition to the measles vaccine. Children of the 1990s who receive recommended vaccinations are protected from many infectious diseases that plagued children in the past, including diphtheria, whooping cough, measles, hepatitis B, and chicken pox.|
2. After students have read Measles Outbreak, ask them to speculate about what might have happened to cause a sudden outbreak of a disease such as measles that normally, today, is relatively rare in the United States.
Students likely will know that most children in the United States today are vaccinated against measles. They may speculate that the students at Western High were not vaccinated, or that the vaccine didn't work in their cases, or even that the pathogen causing this form of measles was somehow able to evade the immune defenses that had been triggered by the vaccinations these children received.
3. Distribute one copy of Master 4.2, A Little Sleuthing, to each student and ask the students to read the story and think about the question that ends it.
4. Point out that despite the success of the measles vaccine, there continue to be small outbreaks of measles in the United States. Explain that the key to understanding why this is true and to answering the question that ends the story about Western High lies in understanding how disease spreads in a population.
5. Explain to students that to help them understand how disease spreads in a population, they will participate in a simulation of the spread of a fictitious disease you will call the "two-day disease." Distribute two copies of Master 4.3, Following an Epidemic, to each student and display a transparency of this master. Then direct students to perform two simulations of the spread of two-day disease, according to the instructions provided immediately following the activity.
An "epidemic" is typically defined as "more cases of a disease than is expected for that disease." Although this is not a very specific definition, it does make it clear that whether scientists call an outbreak of a disease an epidemic depends on the specific disease involved. Though there is no distinct line between an "outbreak" and an "epidemic," epidemics are generally considered to be larger in scale and longer lasting than outbreaks. Today, five cases of measles within a population could be considered an epidemic because no cases are expected.
For this simulation, assume that an epidemic is in progress if 25 percent or more of the population is sick at one time.
Observations that students might make about the table and graph that result from the first simulation include:
Observations that students might make about the table and graph that result from the second simulation include:
Tip from the field test. Do a practice run of several days of the simulation before you do the runs in which you collect data. This will allow you to address any confusion students have about the simulation and will make subsequent runs go much faster. If you have time, you may want to repeat the simulation, in particular the second simulation in which half of the class is immune. In order for students to observe herd immunity, some susceptible students in the population should not get sick. Depending on the arrangement of immune and susceptible students in the class (which is random), this may not happen the first time you run this simulation.
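For teachers who want an additional demonstration, the classroom simulation can also be mirrored in a short script. The sketch below is illustrative only (Python; the class size, the one-random-contact-per-day rule, and the immunity levels are adjustable assumptions rather than the exact rules on Master 4.3). It follows the same idea as two-day disease - each sick person contacts one classmate per day and is infectious for two days - and prints the number of sick people each day, first with no one immune and then with half the class immune.

```python
import random

def run_two_day_disease(class_size=30, immune_fraction=0.0, days=20, seed=1):
    """Simulate 'two-day disease': one random contact per sick person per day,
    infectious for two days, then recovered and immune."""
    random.seed(seed)
    # States: 'S' susceptible, 'I' infectious, 'R' recovered or vaccinated (immune).
    people = ['R' if random.random() < immune_fraction else 'S' for _ in range(class_size)]
    days_left = [0] * class_size
    patient_zero = people.index('S')            # start with one susceptible person infected
    people[patient_zero], days_left[patient_zero] = 'I', 2

    daily_counts = []
    for _ in range(days):
        sick = [i for i, state in enumerate(people) if state == 'I']
        if not sick:
            break
        daily_counts.append(len(sick))
        for i in sick:
            contact = random.randrange(class_size)   # one random contact per day
            if people[contact] == 'S':
                people[contact], days_left[contact] = 'I', 2
            days_left[i] -= 1
            if days_left[i] == 0:
                people[i] = 'R'
    return daily_counts

print("no immunity: ", run_two_day_disease(immune_fraction=0.0))
print("half immune: ", run_two_day_disease(immune_fraction=0.5))
```

With no initial immunity, the daily counts typically climb until most susceptible people have been infected; with half the class immune, the chain of transmission usually dies out after a handful of cases, which is the herd-immunity effect the classroom exercise is designed to reveal.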
6. Debrief the activity by asking, "Why did an epidemic occur in the first population, but not in the second?" and "Why didn't all of the susceptible people in the second population get sick?" Introduce the term "herd immunity" and describe it as a phenomenon that occurs when most of the people in a population are immune to an infectious disease. Susceptible people in the population are protected from that disease because the infectious agent cannot be effectively transmitted.
Allow students to discuss their responses to the two questions before you introduce the term "herd immunity." Students will likely make comments such as, "Everyone sitting near John was immune, so the disease just died out." At that point, you can respond by saying, "Yes, what you have just explained is what epidemiologists call 'herd immunity'." Then you can provide a more complete definition.
|This step takes students to the major concept of the activity: The re-emergence of some diseases can be explained by immunity levels that are below the level required for herd immunity.|
7. Ask students to explain, based on their experience in the disease transmission simulation, what would happen if measles vaccinations dropped to a low level in a population.
Students should be able to explain that there would be many susceptible people in the population, so the disease would be transmitted from one to another without dying out. A measles outbreak or epidemic would occur. If students do not mention "re-emergence," emphasize this point by saying, "Yes, measles would re-emerge in the population."
8. Remind students about the measles outbreak story. Ask them to write a final paragraph to the story in which they use the term herd immunity to answer the following questions:
|Collect and review students' paragraphs to assess their understanding of the major concept of the activity. Address common misunderstandings in the next class session and read two or three of the best paragraphs to the class.|
- Why didn't the unvaccinated or inadequately vaccinated students and teacher at Western High get measles when they were children rather than as teenagers or adults?
Students should be able to explain that the unvaccinated or inadequately vaccinated students at Western High were protected by herd immunity when they were younger: Because most of the people around them were immune, the infectious agent could not be transmitted from those people.
- Why is vaccination not only a personal health issue, but also a public health issue?
Vaccination is a public health issue because maintaining high levels of immunity in a population prevents epidemics and protects the small percentage of susceptible people from the disease.
DAY 2 (Optional)
1. Open the activity by reminding students about two-day disease and the simulation that they completed. Then ask them what characteristics may vary between two-day disease and other diseases. Point out that differences in these characteristics affect the likelihood that an epidemic of a particular disease will occur and the percentage of the population that must be immune to that disease to achieve herd immunity.
Expect students to suggest that people who are sick may contact more than one person per day, may be sick (and infectious) for more than two days, may die from the disease, and may not get sick from just one contact. Students also may point out that the disease may require "intimate" rather than casual contact or it may not require person-to-person contact.
2. Ask students to predict what the results of the simulation would be if they varied each of four characteristics of the disease: virulence (the likelihood of dying from the disease), duration of infection, rate of transmission (how contagious the disease is), and level of immunity in the population. Insist that students provide some rationale for their predictions. Write their predictions on the board or a blank transparency.
To help students think about this, you may wish to ask questions such as, "Do you think there would have been an epidemic of two-day disease if people sometimes died from the disease? If so, do you think it would have been a more or less severe epidemic?"
Virulence, duration of infection, rate of transmission, and level of immunity are the four parameters that the computer simulation will allow students to vary. Students may make predictions such as, "The more virulent a disease is, the greater the likelihood of an epidemic," or "The higher the immunity level of a population, the less likely it is that an epidemic will occur."
3. Tell students that they will use a computer simulation to investigate the likelihood of an epidemic when they vary one of the four characteristics they just discussed. Distribute one copy of Master 4.5, Disease Transmission Simulation Record, to each student and ask students to organize into their teams. Assign each team one of the four characteristics to investigate and direct students to circle this characteristic on the master.
Tell students that because a larger population size is used in the computer simulation, an epidemic is defined as an outbreak of disease in which 10 percent or more of the population is sick at one time.
© 2013 Steve Campsall
write a more effective argument
The word argument suggests an animated disagreement - but a written argument is rather different. It requires that you...
put forward a succinctly stated and well-considered point of view;
provide support for this view;
create a sense of balance by referring to other equally valid points of view;
tactfully counter these.
Your aim in this kind of writing is not necessarily to 'win' the argument; instead, it is to put forward evidence that is logical and well-considered and which acts to support your point of view and to counter the main alternative views.
The evidence you provide must be both convincing and presented convincingly.
The evidence needs to be convincing but, in an exam situation at least, it does not have to be factual, i.e. you can 'make it up'; you are allowed to make up such things as expert opinions and statistical evidence to support your argument.
Importantly, whatever evidence you do use, it must be well considered and reasonable.
Remember - whilst you might not agree with an opposing view, that doesn't make it in any way foolish to hold. You will need to take great care indeed to avoid suggesting that those who hold different views are in any way foolish for doing so.
This is such an easy pitfall that it catches out very many students.
In large part, it is the degree of politeness and tact that you display when opposing other viewpoints that will win or lose your argument - and gain you the most marks!
ARGUMENT OR PERSUASION
- what is the difference?
'Writing to argue' and 'writing to persuade' both occur on school courses. They are very similar in as much as they share the same purpose, that of seeking to influence. There are, however, differences that will affect the style of your writing if you are to gain the highest marks.
An argument concerns an issue about which people, quite reasonably, hold different views. This suggests that other views are not necessarily wrong - just different. During the process of presenting your argument, therefore, it is reasonable that you should show that you recognise that opposing views exist, not only to hint at what a fair-minded person you are, but to give you the opportunity to counter these views tactfully in order to show why you feel that your own view is the more worthy one to hold.
Persuasion has a more single-minded goal. It is based on a personal conviction that a particular way of thinking is the only sensible way to think.
WELL-REASONED ARGUMENT vs. PASSIONATE PERSUASION!
Consider this typical scene in a teenager's life...
The party is on Friday... and, naturally, you really want to go but your parents have other ideas. They're planning a visit to Great Aunt Bertha and know how much she'd love to see how you've grown since your last visit. Persuading your mum to say yes to the party is your determined goal - because Friday is the deadline and you need an answer now.
How to go about it? First, a little calm reasoning ('Everyone from school will be there, mum. It's a social occasion and it'll help me make more friends...' ), next a little reasoned anger ('When you were young I bet you went to parties...!') and finally, a passionate plea ('Oh, do please try to see it from my position, mum. I can't turn up on Monday the only kid in the class who didn't go...!').
Now if instead of the above, you had been asked to write an article in the school magazine to present a case for a return of end-of-year school disco... well, you don't need that immediate answer, so a well-reasoned argument composed of a series of well thought out and well-supported points is likely to win the day. The pressure is on in the first case, but not in the second.
A little history will help...
ARGUMENT AND THE ANCIENT ART OF 'RHETORIC'
The art of argument and persuasion is as old as the hills... or rather, as old as the ancient Greek hills. The Greeks were famous for their teaching and learning as well as their arguing and persuasion. They called the art of using language persuasively rhetoric and, still today, any use of language that makes it seem more powerful is called rhetorical language. Two of the world's most famous 'rhetoricians' were the Greek philosopher Aristotle (a student of the famous teacher and philosopher Plato) and the Roman orator and teacher Cicero.
If Aristotle and Cicero were writing this web page, they would be telling you that the ideal form of argument was through the use of one thing and one thing alone... reason (which had the Greek name of logos - hence the modern word logic); however, the two recognised that 'ideal' things must always remain just that - ideals - and that human weakness would always mean that two further argument techniques would be brought into use, especially where persuasion was needed. The first of these is an appeal to character (which they called ethos - hence our term ethical) and the second? An appeal to the emotions (which they called pathos - a word we now use to suggest the power to stir sad emotions).
Writing an effective argument...
An argument should set out to answer the question 'Why?' for your viewpoint as well as show awareness and understanding of your opponent's views.
The secrets of success?
Show you understand the genre conventions of the form - that is, the format - in which you are asked to write (e.g. an article, a letter, a speech, etc.).
Find common ground - an endpoint upon which all would agree.
Show consideration of your opponent's views, but counter them with politeness and tact.
Use effective argumentative techniques - that is, use rhetorical devices.
Ensure your views unfold logically and persuasively - that is, create a logical structure for your argument.
Showing understanding of opposing views
Try switching roles - which points would convince you?
Showing understanding of form and conventions
Using effective argumentative techniques
Successful arguments are...
Here is a small section of the mark scheme the examiners from a major examining board use when they award a grade A:
• shows sustained awareness of the
IN MORE DETAIL...
CENTRAL or BODY PARAGRAPHS
Each year, literally thousands of students fail to achieve the marks they could. Don't be one of them - ALWAYS CHECK YOUR WRITING BEFORE HANDING IT IN!
Read each sentence immediately after you write it
Use a variety of sentence types and styles and remember that shorter sentences are often clearer and crisper sounding. An occasional ultra-short sentence can add real impact to writing.
Read each sentence before you proceed to the next to check it is fluent, accurate and complete. Does it follow on logically from the previous sentence?
Check every paragraph
A paragraph is a series of sentences (often at least five) that develop from a single topic sentence used to introduce the point of the paragraph.
Avoid creating overly short paragraphs as this suggests either a) you do not know what a paragraph is or b) that you have not explained the point of the paragraph in sufficient detail. Try to make sure that each paragraph flows naturally on from its predecessor by using the final sentence of each paragraph to subtly 'hook' into the topic of the next paragraph.
To correct a missed paragraph simply put this mark where you want it to be: // then, in your margin write: // = new paragraph. The examiner will not mark you down for this so long as you have not forgotten all of your paragraphs.
Examine each comma
A very common error and poor style is to use a comma instead of a full stop to end a sentence. This makes two or more stylish, short and crisp sentences into one long, drawn out and boring sentence! Always end each sentence with a full stop - or a semi-colon if you know how to use this punctuation mark.
Look at every apostrophe
Apostrophes are used for two reasons - but so many students fail to use them effectively. If two words are made into one in what is called a 'contraction', an apostrophe is inserted if any letters have been missed out. So 'should not' becomes 'shouldn't'.
And when one thing belongs to another thing or person, this is shown by adding apostrophe+s to the owning noun. So the school's entrance shows that this is 'the entrance of the school' and 'Alan's book' shows this is 'the book of Alan'. Similarly, you go 'to the doctor's' and 'the chemist's' as well as to 'Sainsbury's' and 'McDonald's' because you mean 'to the doctor's surgery', 'the chemist's shop' and so on.
Watch out for it's. With an apostrophe it only ever means it is or it has, as in 'it's cold' or 'it's got three toes missing'. If you mean belonging to it, as in its fur is shiny and smooth, no apostrophe is used.
Standard: Data Analysis & Probability
Read the NCTM Data Analysis & Probability Standard: Instructional programs from prekindergarten through grade 12 should enable all students to:
- Formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them
- Select and use appropriate statistical methods to analyze data
- Develop and evaluate inferences and predictions that are based on data
- Understand and apply basic concepts of probability
Featured Topic: Probability of Dice Tosses
Students need many opportunities to informally experience the outcomes of tossing one die and then two dice. Students should play games, gather data, collate class data and analyze those results to develop a conceptual understanding of the probability of different outcomes.
- See Data Analysis: One-Die Toss Activities for more information and activities on the probability of the outcomes of tossing one die.
- See Data Analysis: Two-Dice Toss Activities for more information and activities on the probability of the outcomes of tossing two dice.
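The triangular shape of the two-dice distribution (7 is the most likely sum, 2 and 12 the least likely) is exactly what the collated class data should reveal. A small simulation sketch in C, with an arbitrary number of tosses, shows the same pattern quickly; the toss count below is an illustrative choice.

/* Sketch: tally the sums of two fair dice over many tosses to show why 7
 * turns up most often (6 ways out of 36) and 2 or 12 least often (1 of 36). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int count[13] = {0};              /* indexes 2..12 are used */
    const int tosses = 36000;

    srand((unsigned)time(NULL));
    for (int t = 0; t < tosses; t++) {
        int sum = (rand() % 6 + 1) + (rand() % 6 + 1);
        count[sum]++;
    }
    for (int s = 2; s <= 12; s++)
        printf("%2d: %5d  (%.1f%%)\n", s, count[s], 100.0 * count[s] / tosses);
    return 0;
}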
The activities found on these Mathwire.com webpages support students as they answer questions about the world around them. Learning to organize and analyze data are important life skills that students will use in both their professional and personal lives.
- Glyph Activities
- See pictures of Student Halloween Glyphs.
- See pictures of Student Turkey Glyphs.
- See pictures of Student Winter Glyphs.
Data Collection & Probability Activities
- Data Analysis Investigations encourage students to collect real, meaningful data, organize that data and analyze the data to draw conclusions and explain what they have learned. These investigations encourage students to apply mathematical analysis to real-life data and/or applications in order to investigate problems or issues.
- See Data Analysis 2 for more suggested data collection activities.
- See Data Analysis: Two-Dice Toss Activities for activities that develop student understanding of the probability of tossing two dice. This page includes many games that help students collect and analyze data.
- See Data Analysis: One-Die Toss Activities for activities that develop student understanding of the probability of tossing one-die. This page includes many one-die games that help students collect and analyze data.
- See Coin Flipping Activities for activities that develop student understanding of the probability of coin toss events.
- See Winter Data Collection for seasonal data collection and graphing ideas.
- Sampling Activities include background on this important statistical technique and the Cereal Toy Investigation that encourages students to use a simulation to analyze how many boxes of cereal they would have to buy to get all six different toys in a cereal toy promotion. A Cereal Toy java applet is included so that students can collect data quickly to increase the sample size and compare the results to their initial small sample. A minimal code sketch of this kind of simulation appears after this list.
- Online lesson plan for Will There Be a White Christmas This Year? encourages students to use statistical data to investigate weather patterns across the USA and construct a contour map of the probability of snow cover on Christmas Day.
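The cereal toy question is the classic coupon-collector problem, and a minimal simulation sketch is easy to write. The constants below (six toys, ten thousand trials) are illustrative choices, not taken from the applet; for six equally likely toys the long-run average works out to about 14.7 boxes.

/* Sketch of the cereal-toy (coupon collector) simulation: buy boxes, each
 * containing one of NUM_TOYS equally likely toys, until every toy has been
 * collected; report the average number of boxes needed.  For 6 toys the
 * exact expectation is 6*(1 + 1/2 + ... + 1/6), roughly 14.7 boxes.        */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NUM_TOYS 6
#define TRIALS   10000

int main(void)
{
    long total_boxes = 0;

    srand((unsigned)time(NULL));
    for (int t = 0; t < TRIALS; t++) {
        int have[NUM_TOYS] = {0};
        int distinct = 0, boxes = 0;
        while (distinct < NUM_TOYS) {
            int toy = rand() % NUM_TOYS;   /* the toy found in this box */
            boxes++;
            if (!have[toy]) { have[toy] = 1; distinct++; }
        }
        total_boxes += boxes;
    }
    printf("average boxes needed: %.2f\n", (double)total_boxes / TRIALS);
    return 0;
}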
Additional Mathwire.com Resources
- See more Data Analysis & Probability Math Templates: insert in sheet protectors for student use with dry erase markers or for teacher use as overhead transparencies.
- See Problem Solving Resources for open-ended assessments that involve data analysis and probability.
- See all Data Analysis & Probability Games.
- See Literature Connections for data analysis and probability.
- See more Data Analysis & Probability Links.
- See all Enrichment Activities for data analysis and probability.
Part of the sequence: Rationality and Philosophy
Whether you're doing science or philosophy, flirting or playing music, the first and most important tool you are using is your mind. To use your tool well, it will help to know how it works. Today we explore how your mind makes judgments.
From Plato to Freud, many have remarked that humans seem to have more than one mind.1 Today, detailed 'dual-process' models are being tested by psychologists and neuroscientists:
Since the 1970s dual-process theories have been developed [to explain] various aspects of human psychology... Typically, one of the processes is characterized as fast, effortless, automatic, nonconscious, inflexible, heavily contextualized, and undemanding of working memory, and the other as slow, effortful, controlled, conscious, flexible, decontextualized, and demanding of working memory.2
Dual-process theories for reasoning,3 learning and memory,4 decision-making,5 belief,6 and social cognition7 are now widely accepted to be correct to some degree,8 with researchers currently working out the details.9 Dual-process theories even seem to be appropriate for some nonhuman primates.10
Naturally, some have wondered if there might be a "grand unifying dual-process theory that can incorporate them all."11 We might call such theories dual-system theories of mind,12 and several have been proposed.13 Such unified theories face problems, though. 'Type 1' (fast, nonconscious) processes probably involve many nonconscious architectures,14 and brain imaging studies show a wide variety of brain systems at work at different times when subjects engage in 'type 2' (slow, conscious) processes.15
Still, perhaps there is a sense in which one 'mind' relies mostly on type 1 processes, and a second 'mind' relies mostly on type 2 processes. One suggestion is that Mind 1 is evolutionarily old and thus shared with other animals, while Mind 2 is recently evolved and particularly developed in humans. (But not fully unique to humans, because some animals do seem to exhibit a distinction between stimulus-controlled and higher-order controlled behavior.16) But this theory faces problems. A standard motivation for dual-process theories of reasoning is the conflict between cognitive biases (from type 1 processes) and logical reasoning (type 2 processes).17 For example, logic and belief bias often conflict.18 But both logic and belief bias can be located in the pre-frontal cortex, an evolutionarily new system.19 So either Mind 1 is not entirely old, or Mind 2 is not entirely composed of type 2 processes.
We won't try to untangle these mysteries here. Instead, we'll focus on one of the most successful dual-process theories: Kahneman and Frederick's dual-process theory of judgment.20
Kahneman and Frederick propose an "attribute-substitution model of heuristic judgment" which claims that judgments result from both type 1 and type 2 processes.21 The authors explain:
The early research on judgment heuristics was guided by a simple and general hypothesis: When confronted with a difficult question, people may answer an easier one instead and are often unaware of the substitution. A person who is asked "What proportion of long-distance relationships break up within a year?" may answer as if she had been asked "Do instances of failed long-distance relationships come readily to mind?" This would be an application of the availability heuristic. A professor who has heard a candidate’s job talk and now considers the question "How likely is it that this candidate could be tenured in our department?" may answer the much easier question: "How impressive was the talk?" This would be an example of one form of the representativeness heuristic.22
Next: what is attribute substitution?
...whenever the aspect of the judgmental object that one intends to judge (the target attribute) is less readily assessed than a related property that yields a plausible answer (the heuristic attribute), individuals may unwittingly substitute the simpler assessment.22
For example, one study23 asked subjects two questions among many others: "How happy are you with your life in general?" and "How many dates did you have last month?" In this order, the correlation between the two questions was negligible. If the dating question was asked first, however, the correlation was .66. The question about dating frequency seems to evoke "an evaluation of one's romantic satisfaction" that "lingers to become the heuristic attribute when the global happiness question is subsequently encountered."22
Or, consider a question in another study: "If a sphere were dropped into an open cube, such that it just fit (the diameter of the sphere is the same as the interior width of the cube), what proportion of the volume of the cube would the sphere occupy?"24 The target attribute (the volumetric relation between cube and sphere) is difficult to assess intuitively, and it appears that subjects sought out an easier-to-assess heuristic attribute instead, substituting the question "If a circle were drawn inside a square, what proportion of the area of the square does the circle occupy?" The mean estimate for the 'sphere inside cube' problem was 74%, almost identical to the mean estimate of the 'circle inside square' problem (77%) but far larger than the correct answer for the 'sphere inside cube' problem (52%).
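For reference, the arithmetic behind those figures: a sphere of radius r just fitting inside a cube of side 2r fills (4/3)πr^3 / (2r)^3 = π/6 ≈ 52% of the cube's volume, while a circle of radius r just fitting inside a square of side 2r fills πr^2 / (2r)^2 = π/4 ≈ 79% of the square's area - close to the 77% subjects reported for the planar version, and far above the correct volumetric answer.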
Attribute substitutions like this save on processing power but introduce systematic biases into our judgment.25
Some attributes are always candidates for the heuristic role in attribute substitution because they play roles in daily perception and cognition and are thus always accessible: cognitive fluency, causal propensity, surprisingness, mood, and affective valence.26 Less prevalent attributes can become accessible for substitution if recently evoked or primed.27
Supervision of intuitive judgments
Intuitive judgments, say Kahneman and Frederick, arise from processes like attribute substitution, of which we are unaware. They "bubble up" from the unconscious, after which many of them are evaluated and either endorsed or rejected by type 2 processes.
You can feel the tension28 between type 1 and type 2 processes in your own judgment when you try the Stroop task. Name the color of ink in which a list of colored words is printed and you will find that you pause a bit when a word names a different color than the ink it is written in - for example, the word 'green' printed in red ink. Your unconscious, intuitive judgment uses an availability heuristic to suggest the word 'green' is shown in green, but your conscious type 2 processes quickly correct the unconscious judgment and conclude that it is written in red. You have no such momentary difficulty with a word whose ink matches its meaning, such as 'blue' printed in blue.
In many cases, type 2 processes have no trouble correcting the judgments of type 1 processes.29 But because type 2 processes are slow, they can be interrupted by time pressure.30 On the other hand, biased attribute substitution can sometimes be prevented if subjects are alerted to the possible evaluation contamination in advance.31 (This finding justifies a great deal of material on Less Wrong, which alerts you to many cognitive biases - that is, possible sources of evaluation contamination.)
Often, type 2 processes fail to correct intuitive judgments, as demonstrated time and again in the heuristics and biases literature.32 And even when type 2 processes correct intuitive judgments, the feeling that the intuitive judgment is correct may remain. Consider the famous Linda problem. Knowledge of probability theory does not extinguish the feeling (from type 1 processes) that Linda must be a feminist bank teller. As Stephen Jay Gould put it:
I know [the right answer], yet a little homunculus in my head continues to jump up and down, shouting at me, "But she can’t just be a bank teller; read the description!"33
Kahneman and Frederick's dual-process theory appears to be successful in explaining a wide range of otherwise puzzling phenomena in human judgment.34 The big picture of all this is described well by Jonathan Haidt, who imagines his conscious mind as a rider upon an elephant:
I'm holding the reins in my hands, and by pulling one way or the other I can tell the elephant to turn, to stop, or to go. I can direct things, but only when the elephant doesn't have desires of his own. When the elephant really wants to do something, I'm no match for him.
...The controlled system [can be] seen as an advisor. It's a rider placed on the elephant's back to help the elephant make better choices. The rider can see farther into the future, and the rider can learn valuable information by talking to other riders or by reading maps, but the rider cannot order the elephant around against its will...
...The elephant, in contrast, is everything else. The elephant includes gut feelings, visceral reactions, emotions, and intuitions that comprise much of the automatic system. The elephant and the rider each have their own intelligence, and when they work together well they enable the unique brilliance of human beings. But they don't always work together well.35
Next post: Your Evolved Intuitions
Previous post: Philosophy: A Diseased Discipline
1 Plato divided the soul into three parts: reason, spirit, and appetite (Annas 1981, ch. 5). Descartes held that humans operate on unconscious mechanical processes we share with animals, but that humans' additional capacities for rational thought separate us from the animals (Cottingham 1992). Leibniz said that animals are guided by inductive reasoning, which also guides 'three-fourths' of human reasoning, but that humans can also partake in 'true reasoning' — logic and mathematics (Leibniz 1714/1989, p. 208; Leibniz 1702/1989, pp. 188-191). Many thinkers, most famously Freud, have drawn a division between conscious and unconscious thinking (Whyte 1978). For a more detailed survey, see Frankish & Evans (2009). Multiple-process theories of mind stand in contrast to monistic theories of mind, for example: Johnson-Laird (1983); Braine (1990); Rips (1994). Note that dual-process theories of mind need not conflict with massively modular view of human cognition like Barrett & Kurzban (2006) or Tooby & Cosmides (1992): see Mercier & Sperber (2009). Finally, note that dual-process theories sit comfortably with current research on situated cognition: Smith & Semin (2004).
2 Frankish & Evans (2009).
3 Evans (1989, 2006, 2007); Evans & Over (1996); Sloman (1996, 2002); Stanovich (1999, 2004, 2009); Smolensky (1988); Carruthers (2002, 2006, 2009); Lieberman (2003; 2009); Gilbert (1999).
4 Sun et al. (2009); Eichenbaum & Cohen (2001); Carruthers (2006); Sherry & Schacter (1987); Berry & Dienes (1993); Reber (1993); Sun (2001).
5 Kahneman & Frederick (2002, 2005).
6 Dennett (1978, ch. 16; 1991); Cohen (1992); Frankish (2004); Verscheuren et al. (2005).
7 Smith & Collins (2009); Bargh (2006); Strack & Deutsch (2004).
8 Evans (2008); Evans & Frankish (2009). Or as Carruthers (2009) puts it, "Dual-system theories of human reasoning are now quite widely accepted, at least in outline."
9 One such detail is: When and to what extent does System 2 intervene in System 1 processes? See: Evans (2006); Stanovich (1999); De Neys (2006); Evans & Curtis-Holmes (2005); Finucane et al. (2000); Newstead et al. (1992); Evans et al. (1994); Daniel & Klaczynski (2006); Vadeboncoeur & Markovits (1999); Thompson (2009). Other important open questions are explored in Fazio & Olson (2003); Nosek (2007); Saunders (2009). For an accessible overview of the field, see Evans (2010).
10 Call & Tomasello (2005).
11 Evans (2009).
12 Dual-process and dual-system theories of the mind suggest multiple cognitive architectures, and should not be confused with theories of multiple modes of processing, or two kinds of cognitive style. One example of the latter is the supposed distinction between Eastern and Western thinking (Nisbett et al. 2001). Dual-process and dual-system theories of the mind should also be distinguished from theories that posit a continuum between one form of thinking and another (e.g. Hammond 1996; Newstead 2000; Osman 2004), since this suggests there are not separate cognitive architectures at work.
13 Evans (2003); Stanovich (1999, 2009); Evans & Over (1996); Smith & DeCoster (2000); Wilson (2002).
14 Evans (2008, 2009); Stanovich (2004); Wilson (2002).
15 Goel (2007).
16 Toates (2004, 2006).
17 Evans (1989); Evans & Over (1996); Kahneman & Frederick (2002); Klaczynski & Cottrell (2004); Sloman (1996); Stanovich (2004).
18 Evans et al. (1983); Klauer et al. (2000).
19 Evans (2009); Goel & Dolan (2003).
21 They use the terms 'system 1' and 'system 2' instead of 'type 1' and 'type 2'. Their theory is outlined in Kahneman & Frederick (2002, 2005).
22 Kahneman & Frederick (2005).
23 Strack et al. (1988).
24 Frederick & Nelson (2007).
25 Cognitive biases particularly involved in attribute substitution include the availability heuristic (Lichtenstein et al. 1978; Schwarz et al. 1991; Schwarz & Vaughn 2002), the representativeness heuristic (Kahneman & Tversky 1973; Tversky & Kahneman 1982; Bar-Hillel & Neter 1993; Agnolia 1991), and the affect heuristic (Slovic et al. 2002; Finucane et al. 2000).
26 Cognitive fluency: Jacoby & Dallas (1981); Schwarz & Vaughn (2002); Tversky & Kahneman (1973). Causal propensity: Michotte (1963); Kahneman & Varey (1990). Surprisingness: Kahneman & Miller (1986). Mood: Schwarz & Clore (1983). Affective valence: Bargh (1997); Cacioppo et al. (1993); Kahneman et al. (1999); Slovic et al. (2002); Zajonc (1980, 1997).
27 Bargh et al. (1986); Higgins & Brendl (1995). Note also that attributes must be mapped across dimensions on a common scale, and we understand to some degree the mechanism that does this: Kahneman & Frederick (2005); Ganzach and Krantz (1990); Stevens (1975).
28 Also see De Neys et al. (2010).
29 Gilbert (1989).
30 Finucane et al. (2000).
31 Schwarz & Clore (1983); Schwarz (1996).
32 Gilovich et al. (2002); Kahneman et al. (1982); Pohl (2005); Gilovich (1993); Hastie & Dawes (2009).
33 Gould (1991), p. 469.
34 See the overview in Kahneman & Frederick (2005).
35 Haidt (2006), pp. 4, 17.
Agnolia (1991). Development of judgmental heuristics and logical reasoning: Training counteracts the representativeness heuristic. Cognitive Development, 6: 195–217.
Annas (1981). An introduction to Plato's republic. Oxford University Press.
Bar-Hillel & Neter (1993). How alike is it versus how likely is it: A disjunction fallacy in probability judgments. Journal of Personality and Social Psychology, 41: 671–680.
Bargh (1997). The automaticity of everyday life. Advances in social cognition, 10. Erlbaum.
Bargh, Bond, Lombardi, & Tota (1986). The additive nature of chronic and temporary sources of construct accessibility. Journal of Personality and Social Psychology, 50(5): 869–878.
Bargh (2006). Social psychology and the unconscious. Psychology Press.
Barrett & Kurzban (2006). Modularity in cognition: Framing the debate. Psychological Review, 113: 628-647.
Berry & Dienes (1993). Implicit learning. Erlbaum.
Braine (1990). The 'natural logic' approach to reasoning. In Overton (ed.), Reasoning, necessity and logic: Developmental perspectives. Psychology Press.
Cacioppo, Priester, & Berntson (1993). Rudimentary determinants of attitudes: II. Arm flexion and extension have differential effects on attitudes. Journal of Personality and Social Psychology, 65: 5–17.
Call & Tomasello (2005). Reasoning and thinking in nonhuman primates. In Holyoak & Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning (pp. 607-632). Cambridge University Press.
Carruthers (2002). The cognitive functions of language. Behavioral and Brain Sciences, 25: 657-719.
Carruthers (2006). The architecture of the mind. Oxford University Press.
Carruthers (2009). An architecture for dual reasoning. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 109-128). Oxford University Press.
Cohen (1992). An essay on belief and acceptance. Oxford University Press.
Cottingham (1992). Cartesian dualism: Theology, metaphysics, and science. In Cottingham (ed.), The Cambridge companion to Descartes (pp. 236-257). Cambridge University Press.
Daniel & Klaczynski (2006). Developmental and individual differences in conditional reasoning: Effects of logic instructions and alternative antecedents. Child Development, 77: 339-354.
De Neys, Moyens, & Vansteenwegen (2010). Feeling we're biased: Autonomic arousal and reasoning conflict. Cognitive, affective, and behavioral neuroscience, 10(2): 208-216.
Dennett (1978). Brainstorms: Philosophical essays on mind and psychology. MIT Press.
Dennett (1991). Two contrasts: Folk craft versus folk science and belief versus opinion. In Greenwood (ed.), The future of folk psychology: Intentionality and cognitive science (pp. 135-148). Cambridge University Press.
De Neys (2006). Dual processing in reasoning: Two systems but one reasoner. Psychological Science, 17: 428-433.
Eichenbaum & Cohen (2001). From conditioning to conscious reflection: Memory systems of the brain. Oxford University Press.
Evans (1989). Bias in human reasoning: Causes and consequences. Erlbaum.
Evans (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7: 454-459.
Evans (2006). The heuristic-analytic theory of reasoning: Extension and evaluation. Psychonomic Bulletin and Review, 13: 378-395.
Evans (2007). Hypothetical Thinking: Dual processes in reasoning and judgment. Psychology Press.
Evans (2008). Dual-processing accounts of reasoning, judgment and social cognition. Annual Review of Psychology, 59: 255-278.
Evans (2009). How many dual-process theories do we need? One, two, or many? In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 33-54). Oxford University Press.
Evans (2010). Thinking Twice: Two minds in one brain. Oxford University Press.
Evans & Over (1996). Rationality and Reasoning. Psychology Press.
Evans & Frankish, eds. (2009). In Two Minds: Dual Processes and Beyond. Oxford University Press.
Evans & Curtis-Holmes (2005). Rapid responding increases belief bias: Evidence for the dual-process theory of reasoning. Thinking & Reasoning, 11: 382-389.
Evans, Barston, & Pollard (1983). On the conflict between logic and belief in syllogistic reasoning. Memory & Cognition, 11: 295-306.
Evans, Newstead, Allen, & Pollard (1994). Debiasing by instruction: The case of belief bias. European Journal of Cognitive Psychology, 6: 263-285.
Fazio & Olson (2003). Implicit measures in social cognition research: Their meaning and uses. Annual Review of Psychology, 54: 297-327.
Finucane, Alhakami, Slovic, & Johnson (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13: 1-17.
Frankish (2004). Mind and Supermind. Cambridge University Press.
Frankish & Evans (2009). The duality of mind: An historical perspective. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 1-29). Oxford University Press.
Frederick & Nelson (2007). Attribute substitution in the estimation of volumetric relationships: Psychophysical phenomena underscore judgmental heuristics. Manuscript in preparation. Massachusetts Institute of Technology.
Ganzach and Krantz (1990). The psychology of moderate prediction: I. Experience with multiple determination. Organizational Behavior and Human Decision Processes, 47: 177–204.
Gilbert (1989). Thinking lightly about others: Automatic components of the social inference process. In Uleman & Bargh (eds.), Unintended thought (pp. 189–211). Guilford Press.
Gilbert (1999). What the mind's not. In Chaiken & Trope (eds.), Dual-process theories in social psychology (pp. 3–11 ). Guilford Press.
Gilovich (1993). How we know what isn't so. Free Press.
Gilovich, Griffin, & Kahneman, eds. (2002). Heuristics and biases: the psychology of intuitive judgment. Cambridge University Press.
Goel (2007). Cognitive neuroscience of deductive reasoning. In Holyoak & Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning (pp. 475-492). Cambridge University Press.
Goel & Dolan (2003). Explaining modulation of reasoning by belief. Cognition, 87: B11-B22.
Gould (1991). Bully for brontosaurus. Reflections in natural history. Norton.
Hammond (1996). Human judgment and social policy. Oxford University Press.
Haidt (2006). The happiness hypothesis: Finding modern truth in ancient wisdom. Basic Books.
Hastie & Dawes, eds. (2009). Rational Choice in an Uncertain World, 2nd ed. Sage.
Higgins & Brendl (1995). Accessibility and applicability: Some 'activation rules' influencing judgment. Journal of Experimental Social Psychology, 31: 218–243.
Jacoby & Dallas (1981). On the relationship between autobiographical memory and perceptual learning. Journal of Experimental Psychology: General, 3: 306–340.
Johnson-Laird (1983). Mental Models. Cambridge University Press.
Kahneman & Tversky (1973). On the psychology of prediction. Psychological Review, 80: 237–251.
Kahneman et al. (1999). Economic preferences or attitude expressions? An analysis of dollar responses to public issues. Journal of Risk and Uncertainty, 19: 203–235.
Kahneman & Frederick (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In Gilovich, Griffin, & Kahneman (eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 49-81). Cambridge University Press.
Kahneman & Frederick (2005). A model of heuristic judgment. In Holyoak & Morrison (eds.), The Cambridge Handbook of Thinking and Reasoning (pp. 267-294). Cambridge University Press.
Kahneman & Miller (1986). Norm theory: Comparing reality with its alternatives. Psychological Review, 93: 136–153.
Kahneman & Varey (1990). Propensities and counterfactuals: The loser that almost won. Journal of Personality and Social Psychology, 59(6): 1101–1110.
Klaczynski & Cottrell (2004). A dual-process approach to cognitive development: The case of children's understanding of sunk cost decisions. Thinking and Reasoning, 10: 147-174.
Kahneman, Slovic, & Tversky, eds. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press.
Klauer, Musch, & Naumer (2000). On belief bias in syllogistic reasoning. Psychological Review, 107: 852-884.
Leibniz (1702/1989). Letter to Queen Sophie Charlotte of Prussia, on what is independent of sense and matter. In Leibniz, Philosophical essays (pp. 186-192). Hackett.
Leibniz (1714/1989). Principles of nature and grace, based on reason. In Leibniz, Philosophical essays (pp. 206-213). Hackett.
Lichtenstein, Slovic, Fischhoff, Layman, & Combs (1978). Judged Frequency of Lethal Events. Journal of Experimental Psychology: Human Learning and Memory, 4(6): 551-578.
Lieberman (2003). Reflective and reflexive judgment processes: A social cognitive neuroscience approach. In Forgas, Williams, & von Hippel (eds.), Social judgments: Implicit and explicit processes (pp. 44-67). Cambridge University Press.
Lieberman (2009). What zombies can't do: A social cognitive neuroscience approach to the irreducibility of reflective consciousness. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 293-316). Oxford University Press.
Mercier & Sperber (2009). Intuitive and reflective inferences. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 149-170). Oxford University Press.
Michotte (1963). The perception of causality. Basic Books.
Newstead (2000). Are there two different types of thinking? Behavioral and Brain Sciences, 23: 690-691.
Newstead, Pollard, & Evans (1992). The source of belief bias effects in syllogistic reasoning. Cognition, 45: 257-284.
Nisbett, Peng, Choi, & Norenzayan (2001). Culture and systems of thought: Holistic vs analytic cognition. Psychological Review, 108: 291-310.
Nosek (2007). Implicit-explicit relations. Current Directions in Psychological Science, 16: 65-69.
Osman (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin and Review, 11: 988-1010.
Pohl, ed. (2005). Cognitive illusions: a handbook on fallacies and biases in thinking, judgment and memory. Psychology Press.
Reber (1993). Implicit learning and tacit knowledge. Oxford University Press.
Rips (1994). The psychology of proof: Deductive reasoning in human thinking. MIT Press.
Saunders (2009). Reason and intuition in the moral life: A dual-process account of moral justification. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 335-354). Oxford University Press.
Schwarz, Bless, Strack, Klumpp, Rittenauer-Schatka, & Simons (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61: 195–202.
Schwarz & Clore (1983). Mood, misattribution, and judgments of well-being: Informative and directive functions of affective states. Journal of Personality and Social Psychology, 45(3): 513–523.
Schwarz (1996). Cognition and communication: Judgmental biases, research methods, and the logic of conversation. Erlbaum.
Schwarz & Vaughn (2002). The availability heuristic revisited: Ease of recall and content of recall as distinct sources of information. In Gilovich, Griffin, & Kahneman (eds.), Heuristics & biases: The psychology of intuitive judgment (pp. 103–119). Cambridge University Press.
Sherry & Schacter (1987). The evolution of multiple memory systems. Psychological Review, 94: 439-454.
Sloman (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119: 1-23.
Sloman (2002). Two systems of reasoning. In Gilovich, Griffin, & Kahneman (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.
Slovic, Finucane, Peters, & MacGregor (2002). Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics. Journal of Socio-Economics, 31: 329-342.
Smith & Collins (2009). Dual-process models: A social psychological perspective. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 197-216). Oxford University Press.
Smith & DeCoster (2000). Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review, 4: 108-131.
Smith & Semin (2004). Socially situated cognition: Cognition in its social context. Advances in experimental social psychology, 36: 53-117.
Smolensky (1988). On the proper treatment of connectionism. Behavioral and Brain Sciences, 11: 1-23.
Stanovich (1999). Who is rational? Studies of individual differences in reasoning. Psychology Press.
Stanovich (2004). The robot's rebellion: Finding meaning in the age of Darwin. Chicago University Press.
Stanovich (2009). Distinguishing the reflective, algorithmic, and autonomous minds: Is it time for a tri-process theory? In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 55-88). Oxford University Press.
Strack & Deutsch (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8: 220-247.
Strack, Martin, & Schwarz (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 1: 429–442.
Stevens (1975). Psychophysics: Introduction to its perceptual, neural, and social prospects. Wiley.
Sun (2001). Duality of mind: A bottom-up approach towards cognition. Psychology Press.
Sun, Lane, & Mathews (2009). The two systems of learning: An architectural perspective. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 239-262). Oxford University Press.
Toates (2004). 'In two minds' - consideration of evolutionary precursors permits a more integrative theory. Trends in Cognitive Sciences, 8: 57.
Toates (2006). A model of the hierarchy of behaviour, cognition, and consciousness. Consciousness and Cognition, 15: 75-118.
Thompson (2009). Dual-process theories: A metacognitive perspective. In Evans & Franklin (eds.), In Two Minds: Dual Processes and Beyond (pp. 171-195). Oxford University Press.
Tooby & Cosmides (1992). The psychological foundations of culture. In Barkow, Cosmides, & Tooby (eds.), The Adapted Mind (pp. 19-136). Oxford University Press.
Tversky & Kahneman (1973). Availability: a heuristic for judging frequency and probability. Cognitive Psychology, 5(2): 207–232.
Tversky & Kahneman (1982). Judgments of and by representativeness. In Kahneman, Slovic, & Tversky (eds.), Judgment under uncertainty: Heuristics and biases (pp. 84–98). Cambridge University Press.
Vadeboncoeur & Markovits (1999). The effect of instructions and information retrieval on accepting the premises in a conditional reasoning task. Thinking & Reasoning, 5: 97-113.
Verscheuren, Schaeken, & d'Ydewalle (2005). A dual-process specification of causal conditional reasoning. Thinking & Reasoning, 11: 239-278.
Whyte (1978). The unconscious before Freud. St. Martin's Press.
Wilson (2002). Strangers to ourselves: Discovering the adaptive unconscious. Belknap Press.
Zajonc (1980). Feeling and thinking: Preferences need no inferences. American Psychologist, 35(2): 151–175.
Zajonc (1997). Emotions. In Gilbert, Fiske, & Lindzey (eds.), Handbook of social psychology (4th ed., pp. 591–632). Oxford University Press.
The concept of correlation can best be presented with an example. Figure 7-13 shows the key elements of a radar system. A specially designed antenna transmits a short burst of radio wave energy in a selected direction. If the propagating wave strikes an object, such as the helicopter in this illustration, a small fraction of the energy is reflected back toward a radio receiver located near the transmitter. The transmitted pulse is a specific shape that we have selected, such as the triangle shown in this example. The received signal will consist of two parts: (1) a shifted and scaled version of the transmitted pulse, and (2) random noise, resulting from interfering radio waves, thermal noise in the electronics, etc. Since radio signals travel at a known rate, the speed of light, the shift between the transmitted and received pulse is a direct measure of the distance to the object being detected. This is the problem: given a signal of some known shape, what is the best way to determine where (or if) the signal occurs in another signal? Correlation is the answer.
Correlation is a mathematical operation that is very similar to convolution. Just as with convolution, correlation uses two signals to produce a third signal. This third signal is called the cross-correlation of the two input signals. If a signal is correlated with itself, the resulting signal is instead called the autocorrelation. The convolution machine was presented in the last chapter to show how convolution is performed. Figure 7-14 is a similar illustration of a correlation machine.
The received signal, x[n], and the cross-correlation signal, y[n], are fixed on the page. The waveform we are looking for, t[n], commonly called the target signal, is contained within the correlation machine. Each sample in y[n] is calculated by moving the correlation machine left or right until it points to the sample being worked on. Next, the indicated samples from the received signal fall into the correlation machine, and are multiplied by the corresponding points in the target signal. The sum of these products then moves into the proper sample in the cross-correlation signal.
The amplitude of each sample in the cross-correlation signal is a measure of how much the received signal resembles the target signal, at that location. This means that a peak will occur in the cross-correlation signal for every target signal that is present in the received signal. In other words, the value of the cross-correlation is maximized when the target signal is aligned with the same features in the received signal.
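The arithmetic of the correlation machine takes only a few lines of C. The sketch below is illustrative - the signal values, array names, and edge handling are assumptions rather than anything from this book's program listings - but it shows each y[n] being formed by multiplying the received signal by the target positioned at that sample and summing the products; the output peaks where the target waveform sits in the received signal.

/* Sketch of the "correlation machine": each output sample y[n] is the sum of
 * products between the received signal and the target signal positioned at
 * that point.  Note there is no left-for-right flip of the target here.     */
#include <stdio.h>

/* cross-correlate x[0..nx-1] with target t[0..nt-1]; y must hold nx samples */
void correlate(const double *x, int nx, const double *t, int nt, double *y)
{
    for (int n = 0; n < nx; n++) {
        double sum = 0.0;
        for (int k = 0; k < nt; k++)
            if (n + k < nx)                /* ignore positions past the end */
                sum += x[n + k] * t[k];
        y[n] = sum;
    }
}

int main(void)
{
    double t[3]  = { 1.0, 2.0, 1.0 };                   /* target waveform */
    double x[10] = { 0, 0, 1, 2, 1, 0, 0, -1, 0, 0 };   /* received signal */
    double y[10];

    correlate(x, 10, t, 3, y);
    for (int n = 0; n < 10; n++)
        printf("y[%d] = %.1f\n", n, y[n]);   /* peak at n = 2, where the target sits */
    return 0;
}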
What if the target signal contains samples with a negative value? Nothing changes. Imagine that the correlation machine is positioned such that the target signal is perfectly aligned with the matching waveform in the received signal. As samples from the received signal fall into the correlation machine, they are multiplied by their matching samples in the target signal. Neglecting noise, a positive sample will be multiplied by itself, resulting in a positive number. Likewise, a negative sample will be multiplied by itself, also resulting in a positive number. Even if the target signal is completely negative, the peak in the cross-correlation will still be positive.
If there is noise on the received signal, there will also be noise on the cross-correlation signal. It is an unavoidable fact that random noise looks a certain amount like any target signal you can choose. The noise on the cross-correlation signal is simply measuring this similarity. Except for this noise, the peak generated in the cross-correlation signal is symmetrical between its left and right. This is true even if the target signal isn't symmetrical. In addition, the width of the peak is twice the width of the target signal. Remember, the cross-correlation is trying to detect the target signal, not recreate it. There is no reason to expect that the peak will even look like the target signal.
Correlation is the optimal technique for detecting a known waveform in random noise. That is, the peak is higher above the noise using correlation than can be produced by any other linear system. (To be perfectly correct, it is only optimal for random white noise). Using correlation to detect a known waveform is frequently called matched filtering. More on this in Chapter 17.
The correlation machine and convolution machine are identical, except for one small difference. As discussed in the last chapter, the signal inside of the convolution machine is flipped left-for-right. This means that samples numbers: 1, 2, 3 … run from the right to the left. In the correlation machine this flip doesn't take place, and the samples run in the normal direction.
Since this signal reversal is the only difference between the two operations, it is possible to represent correlation using the same mathematics as convolution. This requires preflipping one of the two signals being correlated, so that the left-for-right flip inherent in convolution is canceled. For instance, when a[n] and b[n], are convolved to produce c[n], the equation is written: a[n] * b[n] = c[n]. In comparison, the cross-correlation of a[n] and b[n] can be written: a[n] * b[-n] = c[n]. That is, flipping b[n] left-for-right is accomplished by reversing the sign of the index, i.e., b[-n].
Don't let the mathematical similarity between convolution and correlation fool you; they represent very different DSP procedures. Convolution is the relationship between a system's input signal, output signal, and impulse response. Correlation is a way to detect a known waveform in a noisy background. The similar mathematics is only a convenient coincidence.
Optimization (computer science)
From Wikipedia, the free encyclopedia
In computing, optimization is the process of modifying a system to make some aspect of it work more efficiently or use less resources. For instance, a computer program may be optimized so that it executes more rapidly, or is capable of operating within a reduced amount of memory storage, or draws less battery power in a portable computer. The system may be a single computer program, a collection of computers or even an entire network such as the Internet.
Although the word "optimization" shares the same root as "optimal," it is rare for the process of optimization to produce a truly optimal system. The optimized system will typically only be optimal in one sense or for one audience. One might reduce the amount of time that a program takes to perform some task at the price of making it consume more memory - or in an application where memory space is at a premium, one might deliberately choose a slower algorithm in order to use less memory. There may well be no “one size fits all” design which works well in all cases, so engineers make tradeoffs to optimize the attributes of greatest interest. Additionally, the effort required to make a piece of software completely optimal (i.e. incapable of any further improvement) is typically more than is reasonable for the benefits that would be accrued; so the process of optimization may be halted before a completely optimal solution has been reached. Fortunately, it is often the case that the greatest improvements come early on in the process.
Optimization can occur at a number of levels. At the highest level the design may be optimized to make best use of the available resources. The implementation of this design will benefit from the use of efficient algorithms and the coding of these algorithms will benefit from the writing of good quality code. Use of an optimizing compiler can help ensure that the executable program is optimized. At the lowest level, it is possible to bypass the compiler completely and write assembly code by hand. With modern optimizing compilers and the greater complexity of recent CPUs, it takes great skill to write code that is better than the compiler can generate, and few projects ever have to resort to this ultimate optimization step. Because optimization often relies on making use of special cases and performing complex trade-offs, an optimized program can be more difficult for programmers to comprehend and may therefore contain more faults than the unoptimized version.
Computational tasks are often performed in several manners with varying efficiency. For example, consider the following C code snippet to sum all integers from 1 to N:
int i, sum = 0;
for (i = 1; i <= N; i++)
    sum += i;
printf ("sum: %d\n", sum);
This code can (assuming no overflow) be rewritten using a mathematical formula like:
int sum = (N * (N+1)) / 2;
printf ("sum: %d\n", sum);
The optimization, often done automatically, is therefore to pick a method that is more computationally efficient while retaining the same functionality. However, a significant improvement in performance can often be achieved by solving only the actual problem and removing extraneous functionality.
Optimization is not always an obvious or intuitive process. In the example above, the ‘optimized’ version might actually be slower than the original software if N were sufficiently small and the computer were much faster at performing addition and looping operations than multiplications and divisions.
Optimization will generally focus on improving just one or two aspects of performance: execution time, memory usage, disk space, bandwidth, power consumption or some other resource. This will usually require a tradeoff — where one is optimized at the expense of others. For example, increasing the size of cache improves runtime performance, but also increases the memory consumption. Other common tradeoffs include code clarity and conciseness.
There are cases where the programmer doing the optimization must decide to make the software more optimal for some operations but at the price of making other operations less efficient. These tradeoffs may often be of a non-technical nature - such as when a competitor has published a benchmark result that must be beaten in order to improve commercial success but perhaps at the price of making normal usage of the software less efficient. Such changes are sometimes jokingly referred to as pessimizations.
Different fields
In operations research, optimization is the problem of determining the inputs of a function that minimize or maximize its value. Sometimes constraints are imposed on the values that the inputs can take; this problem is known as constrained optimization.
Typical problems have such a large number of possibilities that a programming organization can only afford a “good enough” solution.
Optimization requires finding a bottleneck: the critical part of the code that is the primary consumer of the needed resource. As a rule of thumb, improving 20% of the code is responsible for 80% of the results (see also Pareto principle).
The Pareto principle (also known as the 80-20 rule, the law of the vital few and the principle of factor sparsity) states that for many phenomena, 80% of the consequences stem from 20% of the causes. The idea has rule-of-thumb application in many places, but it is commonly misused. For example, it is a misuse to state that a solution to a problem "fits the 80-20 rule" just because it fits 80% of the cases. In computer science, the Pareto principle can be applied to resource optimization by observing that 80% of the resources are typically used by 20% of the operations. In software engineering, it is often a better approximation that 90% of the execution time of a computer program is spent executing 10% of the code (known as the 90/10 law in this context).
The architectural design of a system overwhelmingly affects its performance. The choice of algorithm affects efficiency more than any other item of the design. More complex algorithms and data structures perform well with many items, while simple algorithms are more suitable for small amounts of data — the setup and initialization time of the more complex algorithm can outweigh the benefit.
In some cases, adding more memory can help to make a program run faster. For example, a filtering program will commonly read each line and filter and output that line immediately. This only uses enough memory for one line, but performance is typically poor. Performance can be greatly improved by reading the entire file then writing the filtered result, though this uses much more memory. Caching the result is similarly effective, though also requiring larger memory use.
When to optimize
Optimization can reduce readability and add code that is used only to improve the performance. This may complicate programs or systems, making them harder to maintain and debug. As a result, optimization or performance tuning is often performed at the end of the development stage.
Donald Knuth said,
- "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." (Code Complete, Page 594)
Tony Hoare commented,
- "I agree with this. It's usually not worth spending a lot of time micro-optimizing code before its obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine tuning at a later stage will fix any problems."
“Premature optimization” is a phrase used to describe a situation where a programmer lets performance considerations affect the design of a piece of code. This can result in a design that is not as clean as it could have been or code that is incorrect, because the code is complicated by the optimization and the programmer is distracted by optimizing.
An alternative approach is to design first, code from the design and then profile/benchmark the resulting code to see what should be optimized. A simple and elegant design is often easier to optimize at this stage, and profiling may reveal unexpected performance problems that would not have been addressed by premature optimization.
In practice, it is often necessary to keep performance goals in mind when first designing software, but the programmer balances the goals of design and optimization.
Interpreted languages
If the programmer discards the original commented and formatted code, maintainability is sacrificed. It becomes harder for a human to read, debug and subsequently modify or extend the code once it has been optimized.
This anti-pattern can be avoided by keeping a copy of the original code, and by working from the original code when making changes to optimized software.
Optimization during code development using macros takes on different forms in different languages. In high-level languages such as C, macros are implemented using textual substitution, and so their benefit is mostly limited to avoiding function-call overhead.
In many functional programming languages, however, macros are implemented using compile-time evaluation and substitution of non-textual, compiled code. Because of this difference, it is possible to perform complex compile-time computations, moving some work out of the resulting program. Lisp originated this style of macro, and such macros are often called “Lisp-like macros.”
As with any optimization, however, it is often difficult to predict where such tools will have the most impact before a project is complete.
Automated and manual optimization
See also Category:Compiler optimizations
Optimization can be automated by compilers or performed by programmers. Gains are usually limited for local optimization, and larger for global optimizations. Usually, the most powerful optimization is to find a superior algorithm.
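As a simple illustration (the function names are invented for this sketch), replacing repeated linear scans with a hash-based lookup changes the asymptotic cost of the whole task, which usually dwarfs any local tweak:

    # O(n*m): scans the whole list once per lookup.
    def common_items_slow(items, queries):
        return [q for q in queries if q in items]      # list membership is O(n)

    # O(n + m): builds a set once, then each lookup is O(1) on average.
    def common_items_fast(items, queries):
        item_set = set(items)
        return [q for q in queries if q in item_set]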
Optimizing a whole system is usually done by human beings because the system is too complex for automated optimizers. In this technique, programmers or system administrators explicitly change code so that the system performs better. Although it can produce better efficiency, it is far more expensive than automated optimizations.
First of all, it is extremely important to use a profiler to find the sections of the program that are taking the most resources - the bottleneck. Programmers usually think they have a clear idea of where the bottleneck is, but intuition is frequently wrong. Optimizing an unimportant piece of code will typically do little to help the overall performance.
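In Python, for example, the standard-library profiler can locate the hot spots before any code is changed; main() here simply stands in for the real program:

    import cProfile
    import pstats

    def main():
        # ... the real program being measured would go here ...
        pass

    # Profile one run, then show the 10 functions with the largest cumulative time.
    cProfile.run("main()", "profile.out")
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)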
When the bottleneck is localized, optimization usually starts with a rethinking of the algorithm used in the program: more often than not, a particular algorithm can be specifically tailored to a particular problem, yielding better performance than a generic algorithm. For example, the task of sorting a huge list of items is usually done with a quicksort routine, which is one of the most efficient generic algorithms. But if some characteristic of the items is exploitable (for example, they are already arranged in some particular order), a different method can be used, or even a custom-made sort routine.
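As a toy example of exploiting a known characteristic of the data (here, that the items are small non-negative integers), a counting sort can replace a general comparison sort:

    # General-purpose: comparison-based sort, O(n log n).
    def sort_generic(values):
        return sorted(values)

    # Tailored: counting sort, O(n + k); valid only for small non-negative integers.
    def sort_small_ints(values, max_value):
        counts = [0] * (max_value + 1)
        for v in values:
            counts[v] += 1
        result = []
        for v, c in enumerate(counts):
            result.extend([v] * c)
        return result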
After one is reasonably sure that the best algorithm is selected, code optimization can start: loops can be unrolled (for lower loop overhead, although this can often lead to lower speed, due to overloading the processor's instruction cache), data types as small as possible can be used, integer arithmetic can be used instead of floating-point, and so on.
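A high-level analogue of these low-level tweaks is hoisting loop-invariant work out of a loop, as in this small hypothetical sketch:

    import math

    # Before: recomputes the invariant math.sqrt(scale) for every element.
    def normalize_slow(values, scale):
        return [v / math.sqrt(scale) for v in values]

    # After: the loop-invariant factor is computed once, outside the loop.
    def normalize_fast(values, scale):
        factor = math.sqrt(scale)
        return [v / factor for v in values]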
Performance bottlenecks can be due to language limitations rather than algorithms or data structures used in the program. Sometimes, a critical part of the program can be re-written in a different programming language that gives more direct access to the underlying machine. For example, it is common for very high-level languages like Python to have modules written in C for greater speed. Programs already written in C can have modules written in assembly. Programs written in D can use the inline assembler.
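Short of writing a C module yourself, a similar benefit often comes from delegating the inner loop to routines that are already implemented in C; for example, in CPython the built-in sum() runs its loop in C:

    # Pure-Python inner loop: interpreted bytecode for every element.
    def total_slow(values):
        total = 0.0
        for v in values:
            total += v
        return total

    # Delegates the loop to the built-in sum(), which CPython implements in C.
    def total_fast(values):
        return sum(values)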
Rewriting pays off because of a general rule known as the 90/10 law, which states that 90% of the time is spent in 10% of the code, and only 10% of the time in the remaining 90% of the code. So putting intellectual effort into optimizing just a small part of the program can have a huge effect on the overall speed if the correct part(s) can be located.
Manual optimization often has the side-effect of undermining readability. Thus code optimizations should be carefully documented and their effect on future development evaluated.
The program that does the automated optimization is called an optimizer. Most optimizers are embedded in compilers and operate during compilation. Optimizers often can tailor the generated code to specific processors.
Today, automated optimizations are almost exclusively limited to compiler optimization.
Time taken for optimization
On some occasions, the time taken for optimization in itself may be an issue.
In a software project, optimizing code usually does not add a new feature, and worse, it might break existing functionality. Because optimized code is often less readable than straightforward code, optimization may also hurt the maintainability of the program. In short, optimization becomes a cost, and it is important to be sure that the investment pays off.
The optimizer (a program that performs optimization) may have to be optimized as well. Compilation with the optimizer turned on usually takes more time, though this is only a problem when the program is significantly large. In particular, for just-in-time compilers the performance of the optimizer itself is key to improving execution speed. Usually, spending more time yields better code, but that time is itself computer time we want to save; in practice, tuning performance requires a trade-off between the time taken for optimization and the reduction in execution time gained by the optimized code.
Code optimization can be broadly categorized into platform-dependent and platform-independent techniques. Platform-independent techniques are generic and are effective on most platforms, including digital signal processor (DSP) platforms; examples include loop unrolling, reduction in function calls, memory-efficient routines, and reduction in conditions. Platform-dependent techniques involve instruction-level parallelism, data-level parallelism, and cache optimization techniques, i.e., parameters that differ among the various platforms.
- “The order in which the operations shall be performed in every particular case is a very interesting and curious question, on which our space does not permit us fully to enter. In almost every computation a great variety of arrangements for the succession of the processes is possible, and various considerations must influence the selection amongst them for the purposes of a Calculating Engine. One essential object is to choose that arrangement which shall tend to reduce to a minimum the time necessary for completing the calculation.” - Ada Byron's notes on the analytical engine 1842.
- “More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity.” - W.A. Wulf
- “We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.” - Knuth, paraphrasing Hoare
- “Bottlenecks occur in surprising places, so don't try to second guess and put in a speed hack until you have proven that's where the bottleneck is.” - Rob Pike
- “The First Rule of Program Optimization: Don't do it. The Second Rule of Program Optimization (for experts only!): Don't do it yet.” - Michael A. Jackson
See also
- Compiler optimization
- Abstract interpretation
- Control flow graph
- Lazy evaluation
- Loop optimization
- Low level virtual machine
- Memory locality
- Performance analysis (profiling)
- Queueing theory
- Speculative execution
- SSA form
- Worst-case execution time
- ^ Knuth, Donald: Structured Programming with Goto Statements. Computing Surveys 6:4 (1974), 261–301.
- Jon Bentley: Writing Efficient Programs, ISBN 0-13-970251-2.
- Donald Knuth: The Art of Computer Programming
External links
- Programming Optimization
- C,C++ optimization
- C optimization tutorial
- Software Optimization at Link-time And Run-time
- Profiling and optimizing Ruby code
- Article "A Plea for Lean Software" by Niklaus Wirth
- Description from the Portland Pattern Repository
- Performance tuning of Computer Networks
- An article describing high-level optimization
- Optimization for video games (gpu and cpu)
Put the VT to work in your classroom
Analyzing the Language of Presidential Debates
Lesson Question: How can a presidential candidate's linguistic patterns in a debate further reveal his or her political agenda?
Lesson Overview: In this lesson, students will analyze an excerpt from a 1960 debate between presidential candidates Richard Nixon and John F. Kennedy. Then, students will watch or read a debate between current presidential or vice-presidential candidates and reflect on how their verbal patterns may relate to their overall political positions as well.
Length of Lesson: One hour to one hour and a half
Instructional Objectives: Students will:
- evaluate word choice and linguistic patterns in a historic presidential debate
- watch or read a debate between current presidential or vice-presidential candidates
- analyze word choice and linguistic patterns in a self-selected debate transcript excerpt
Reading and analyzing an excerpt from a historic presidential debate:
- Explain to students that in anticipation of the upcoming presidential and vice-presidential debates they will be analyzing an excerpt from a historic 1960 debate between Richard Nixon and John F. Kennedy — one of the first televised debates between presidential candidates.
- Distribute copies of the October 7, 1960 debate excerpt between Nixon and Kennedy [click here to download], and have students follow these directions: "Carefully read both Nixon's and Kennedy's responses to debate panelist Harold R. Levy's question to Vice President Nixon on the topic of 'party labels.' As you are reading, try to make note of how Nixon and Kennedy emphasize particular words and phrases in their responses. Underline those words and phrases that stand out to you. Which words or phrases do the candidates tend to repeat? What words or phrases are familiar or unfamiliar to you? What other verbal patterns do you detect in these candidates' responses?"
Exploring repetition as a means of emphasis:
- Elicit students' comments about any language patterns they detected in Nixon's and Kennedy's responses to Levy's question, and list any specific words or phrases that they underlined on the board. Steer students to pay close attention to the candidates' usage of the word "party." Why do students think that Kennedy uses the word "party" four more times than Nixon, even though Nixon's response is twice as long as Kennedy's response?
- Display the Visual Thesaurus word map for the word "party" on the white board and have students identify which definition of "party" is relevant to the debate responses (i.e., "an organization to gain political power"). You could also then click on this definition for "party" to reveal the multitude of political parties that exist or have existed beyond the Democratic and Republican parties (e.g., the Black Panthers, the Know-Nothing Party, the Free Soil Party, etc.).
- Establish that repetition is a common form of emphasis and that Nixon's reluctance to use the word "party" may be related to his greater point: that a candidate should be judged as an individual, rather than as a mere representative of a party.
Investigating the term "free world" with the help of the VT:
- Call attention to Nixon's usage of the term "free world" and his repeated stance that the 1960 election race between himself and Kennedy would determine "leadership for the whole free world."
- Display the Visual Thesaurus word map for the term "Free World" and reveal that Nixon was referring to "anti-Communist countries" by using this phrase.
- Explain to students that Nixon and Kennedy were candidates vying for presidency during the Cold War, the post-WWII "cold" conflict between Western allies (headed by the U.S.) and Communist countries in the East (led by the Soviet Union).
- If students are curious to learn more about this period in history, they could investigate the terms "cold war" and "Communist" by using the VT, or by using on-line reference site such as The Cold War Museum or the CNN "Perspectives" series on the Cold War.
Analyzing language usage in a contemporary debate:
- Assign students the task of watching a recording of a debate. The 2008 presidential and vice-presidential debates are available for viewing on websites such as the New York Times and MSNBC.
- Ideally, students could watch one of the debates in its entirety and then examine a transcript of the debate after viewing. (Debate transcripts are available in newspapers and on on-line news sites, such as the New York Times.)
- Have students use the "Analyzing a Candidate's Verbal Patterns" sheet [click here to download] to record their general reactions to the debate. Who do they think "won" the debate? Why? How did each candidate try to convince viewers of his or her point of view? In general, what verbal or rhetorical patterns did each candidate use in an effort to reinforce their points?
- Direct students to choose a particular question posed by a debate moderator or panelist and to more closely examine each candidate's responses to this question. Have students write the question on the "Analyzing a Candidate's Verbal Patterns" sheet [click here to download] and then supply brief summaries of candidates' responses to the question. For example, Kennedy's response to Levy's question could be summarized as: "Party does matter, and a candidate's party affiliation can tell voters a lot about what a candidate stands for."
- Have students next examine how candidates used specific words and phrases in their responses. Which words or phrases did candidates repeat or emphasize and why? Direct students to choose at least one of these words or phrases to investigate by using the Visual Thesaurus.
Holding a roundtable analysis of a debate:
- Rearrange students' desks so that there is a central "roundtable" of students' desks in the center of the room.
- The students in the center of the room should act as debate analysts who are discussing the pros and cons of each candidate's debate performances, much like the political analysts who are featured immediately following a televised debate.
- Begin the discussion by having a roundtable participant read excerpts from his or her comments on the "Analyzing a Candidate's Verbal Patterns." Then, other roundtable participants can either agree or disagree with the original commentator's viewpoint. Students who are observing the discussion can also interject comments or questions during the discussion. Try to encourage a lively discussion that seeks to answer the central question: "How do candidates' word choices reflect their greater points or political positions?"
Extending the Lesson:
- By using the Commission on Presidential Debates web site (www.debates.org), students could compare and contrast transcripts of historic debates with more contemporary debates and draw some conclusions about how political language and rhetoric have changed throughout recent American history.
- Students could also read a few of the debate transcripts on www.debates.org and then decide who "won" each debate, based on clarity, eloquence and logic. Did the debate "winners" go on to win their elections? Students could create charts to display their opinions and findings.
- Assess each student's analysis of the Nixon-Kennedy debate excerpt by reading their responses to the warm-up questions.
- Assess each student's analysis of a contemporary debate by determining if the student's opinions about the debate performances have been adequately supported by examples of verbal patterns that they identified in the debate transcript.
Standard 20. Understands the roles of political parties, campaigns, elections, and associations and groups in American politics
Level III (Grade: 6-8)
2. Knows the various kinds of elections (e.g., primary and general, local and state, congressional, presidential, recall)
3. Understands the ways in which individuals can participate in political parties, campaigns, and elections
Level IV (Grade: 9-12)
6. Understands the significance of campaigns and elections in the American political system, and knows current criticisms of campaigns and proposals for their reform
United States History
Standard 27. Understands how the Cold War and conflicts in Korea and Vietnam influenced domestic and international politics
Level III (Grades 7-8)
1. Understands major events in U.S. foreign policy during the early Cold War period (e.g., the origins of the Cold War and the advent of nuclear politics, U.S. response to the Chinese Revolution, causes of the Korean War and resulting international tensions, the implementation of the U.S. containment policy, the circumstances that led to the Marshall Plan and its accomplishments)
2. Understands the differences between the foreign policies of Kennedy and Johnson (e.g., changes in U.S. foreign policy toward the Soviet Union and the reasons for these changes, changing foreign policy toward Latin America, the Kennedy administration's Cuban policy)
Level IV (Grades 9-12)
4. Understands factors that contributed to the development of the Cold War (e.g., the mutual suspicions and divisions fragmenting the Grand Alliance at the end of World War II, U.S. support for "self-determination" and the U.S.S.R's desire for security in Eastern Europe, the practice of "atomic diplomacy")
Standard 5. Uses the general skills and strategies of the reading process
Level III (Grades 6-8)
3. Uses a variety of strategies to extend reading vocabulary (e.g., uses analogies, idioms, similes, metaphors to infer the meaning of literal and figurative phrases; uses definition, restatement, example, comparison and contrast to verify word meanings; identifies shades of meaning; knows denotative and connotative meanings; knows vocabulary related to different content areas and current events; uses rhyming dictionaries, classification books, etymological dictionaries)
5. Understands specific devices an author uses to accomplish his or her purpose (e.g., persuasive techniques, style, word choice, language structure)
6. Reflects on what has been learned after reading and formulates ideas, opinions, and personal responses to texts
Level IV (Grades 9-12)
1. Uses context to understand figurative, idiomatic, and technical meanings of terms
2. Extends general and specialized reading vocabulary (e.g., interprets the meaning of codes, symbols, abbreviations, and acronyms; uses Latin, Greek, Anglo-Saxon roots and affixes to infer meaning; understands subject-area terminology; understands word relationships, such as analogies or synonyms and antonyms; uses cognates; understands allusions to mythology and other literature; understands connotative and denotative meanings)
4. Understands writing techniques used to influence the reader and accomplish an author's purpose (e.g., organizational patterns, figures of speech, tone, literary and technical language, formal and informal language, narrative perspective)
6. Understands the philosophical assumptions and basic beliefs underlying an author's work (e.g., point of view, attitude, and values conveyed by specific language; clarity and consistency of political assumptions)
Standard 9. Uses viewing skills and strategies to understand and interpret visual media
Level III (Grade: 6-8)
2. Uses a variety of criteria to evaluate and form viewpoints of visual media (e.g., evaluates the effectiveness of informational media, such as web sites, documentaries, news programs; recognizes a range of viewpoints and arguments; establishes criteria for selecting or avoiding specific programs)
Level IV (Grade: 9-12)
2. Uses a variety of criteria (e.g., clarity, accuracy, effectiveness, bias, relevance of facts) to evaluate informational media (e.g., web sites, documentaries, news programs)
Macros enable you to define new control constructs and other language features. A macro is defined much like a function, but instead of telling how to compute a value, it tells how to compute another Lisp expression which will in turn compute the value. We call this expression the expansion of the macro.
Macros can do this because they operate on the unevaluated expressions for the arguments, not on the argument values as functions do. They can therefore construct an expansion containing these argument expressions or parts of them.
If you are using a macro to do something an ordinary function could do, just for the sake of speed, consider using an inline function instead. See section Inline Functions.
Suppose we would like to define a Lisp construct to increment a
variable value, much like the ++ operator in C. We would like to
write (inc x) and have the effect of (setq x (1+ x)).
Here's a macro definition that does the job:
(defmacro inc (var) (list 'setq var (list '1+ var)))
When this is called with (inc x), the argument var is the symbol
x---not the value of x, as it would be in a function. The body
of the macro uses this to construct the expansion, which is
(setq x (1+ x)). Once the macro definition returns this expansion, Lisp
proceeds to evaluate it, thus incrementing x.
A macro call looks just like a function call in that it is a list which starts with the name of the macro. The rest of the elements of the list are the arguments of the macro.
Evaluation of the macro call begins like evaluation of a function call except for one crucial difference: the macro arguments are the actual expressions appearing in the macro call. They are not evaluated before they are given to the macro definition. By contrast, the arguments of a function are results of evaluating the elements of the function call list.
Having obtained the arguments, Lisp invokes the macro definition just
as a function is invoked. The argument variables of the macro are bound
to the argument values from the macro call, or to a list of them in the
case of a
&rest argument. And the macro body executes and
returns its value just as a function body does.
The second crucial difference between macros and functions is that the value returned by the macro body is not the value of the macro call. Instead, it is an alternate expression for computing that value, also known as the expansion of the macro. The Lisp interpreter proceeds to evaluate the expansion as soon as it comes back from the macro.
Since the expansion is evaluated in the normal manner, it may contain calls to other macros. It may even be a call to the same macro, though this is unusual.
You can see the expansion of a given macro call by calling macroexpand.
Function: macroexpand form &optional environment
This function expands form, if it is a macro call. If the result
is another macro call, it is expanded in turn, until something which is
not a macro call results. That is the value returned by
macroexpand. If form is not a macro call to begin with, it
is returned as given.
macroexpand does not look at the subexpressions of
form (although some macro definitions may do so). Even if they
are macro calls themselves,
macroexpand does not expand them.
macroexpand does not expand calls to inline functions.
Normally there is no need for that, since a call to an inline function is
no harder to understand than a call to an ordinary function.
If environment is provided, it specifies an alist of macro definitions that shadow the currently defined macros. This is used by byte compilation.
(defmacro inc (var)
    (list 'setq var (list '1+ var)))
     => inc
(macroexpand '(inc r))
     => (setq r (1+ r))

(defmacro inc2 (var1 var2)
    (list 'progn (list 'inc var1) (list 'inc var2)))
     => inc2
(macroexpand '(inc2 r s))
     => (progn (inc r) (inc s))   ; inc not expanded here.
You might ask why we take the trouble to compute an expansion for a macro and then evaluate the expansion. Why not have the macro body produce the desired results directly? The reason has to do with compilation.
When a macro call appears in a Lisp program being compiled, the Lisp compiler calls the macro definition just as the interpreter would, and receives an expansion. But instead of evaluating this expansion, it compiles the expansion as if it had appeared directly in the program. As a result, the compiled code produces the value and side effects intended for the macro, but executes at full compiled speed. This would not work if the macro body computed the value and side effects itself--they would be computed at compile time, which is not useful.
In order for compilation of macro calls to work, the macros must be
defined in Lisp when the calls to them are compiled. The compiler has a
special feature to help you do this: if a file being compiled contains a
defmacro form, the macro is defined temporarily for the rest of
the compilation of that file. To use this feature, you must define the
macro in the same file where it is used and before its first use.
While byte-compiling a file, any
require calls at top-level are
executed. One way to ensure that necessary macro definitions are
available during compilation is to require the file that defines them.
See section Features.
A Lisp macro is a list whose CAR is
macro. Its CDR should
be a function; expansion of the macro works by applying the function
(with apply) to the list of unevaluated argument-expressions
from the macro call.
It is possible to use an anonymous Lisp macro just like an anonymous
function, but this is never done, because it does not make sense to pass
an anonymous macro to mapping functions such as mapcar. In
practice, all Lisp macros have names, and they are usually defined with
the special form defmacro.
Special Form: defmacro name argument-list body-forms...
defmacro defines the symbol name as a macro that looks like this:
(macro lambda argument-list . body-forms)
This macro object is stored in the function cell of name. The
value returned by evaluating the
defmacro form is name, but
usually we ignore this value.
The shape and meaning of argument-list is the same as in a
function, and the keywords &rest and &optional may be used
(see section Advanced Features of Argument Lists). Macros may have a documentation string, but any
interactive declaration is ignored since macros cannot be called
interactively.
It could prove rather awkward to write macros of significant size,
simply due to the number of times the function
list needs to be
called. To make writing these forms easier, a macro ``'
(often called backquote) exists.
Backquote allows you to quote a list, but selectively evaluate
elements of that list. In the simplest case, it is identical to the
quote (see section Quoting). For example, these
two forms yield identical results:
(` (a list of (+ 2 3) elements)) => (a list of (+ 2 3) elements) (quote (a list of (+ 2 3) elements)) => (a list of (+ 2 3) elements)
By inserting a special marker, `,', inside of the argument to backquote, it is possible to evaluate desired portions of the argument:
(list 'a 'list 'of (+ 2 3) 'elements) => (a list of 5 elements) (` (a list of (, (+ 2 3)) elements)) => (a list of 5 elements)
It is also possible to have an evaluated list spliced into the
resulting list by using the special marker `,@'. The elements of
the spliced list become elements at the same level as the other elements
of the resulting list. The equivalent code without using
often unreadable. Here are some examples:
(setq some-list '(2 3)) => (2 3) (cons 1 (append some-list '(4) some-list)) => (1 2 3 4 2 3) (` (1 (,@ some-list) 4 (,@ some-list))) => (1 2 3 4 2 3) (setq list '(hack foo bar)) => (hack foo bar) (cons 'use (cons 'the (cons 'words (append (cdr list) '(as elements))))) => (use the words foo bar as elements) (` (use the words (,@ (cdr list)) as elements (,@ nil))) => (use the words foo bar as elements)
The reason for
(,@ nil) is to avoid a bug in Emacs version 18.
The bug occurs when a call to
,@ is followed only by constant elements. Thus,
(` (use the words (,@ (cdr list)) as elements))
would not work, though it really ought to.
(,@ nil) avoids the
problem by being a nonconstant element that does not affect the result.
Macro: ` list
This macro returns list as
quote would, except that the
list is copied each time this expression is evaluated, and any sublist
of the form
(, subexp) is replaced by the value of
subexp. Any sublist of the form
is replaced by evaluating listexp and splicing its elements
into the containing list in place of this sublist. (A single sublist
can in this way be replaced by any number of new elements in the
There are certain contexts in which `,' would not be recognized and should not be used:
;; Use of a `,' expression as the CDR of a list.
(` (a . (, 1)))          ; Not (a . 1)
     => (a \, 1)

;; Use of `,' in a vector.
(` [a (, 1) c])          ; Not [a 1 c]
     error--> Wrong type argument

;; Use of a `,' as the entire argument of ``'.
(` (, 2))                ; Not 2
     => (\, 2)
Common Lisp note: in Common Lisp, `,' and `,@' are implemented as reader macros, so they do not require parentheses. Emacs Lisp implements them as functions because reader macros are not supported (to save space).
The basic facts of macro expansion have all been described above, but their consequences are often counterintuitive. This section describes some important consequences that can lead to trouble, and rules to follow to avoid trouble.
When defining a macro you must pay attention to the number of times the arguments will be evaluated when the expansion is executed. The following macro (used to facilitate iteration) illustrates the problem. This macro allows us to write a simple "for" loop such as one might find in Pascal.
(defmacro for (var from init to final do &rest body)
  "Execute a simple \"for\" loop, e.g., (for i from 1 to 10 do (print i))."
  (list 'let (list (list var init))
        (cons 'while
              (cons (list '<= var final)
                    (append body (list (list 'inc var)))))))
     => for

(for i from 1 to 3 do
  (setq square (* i i))
  (princ (format "\n%d %d" i square)))
==>
(let ((i 1))
  (while (<= i 3)
    (setq square (* i i))
    (princ (format "%d %d" i square))
    (inc i)))
     -| 1 1
     -| 2 4
     -| 3 9
     => nil
(The words from, to, and do in this macro are
"syntactic sugar"; they are entirely ignored. The idea is that you
will write noise words (such as from, to, and do)
in those positions in the macro call.)
This macro suffers from the defect that final is evaluated on
every iteration. If final is a constant, this is not a problem.
If it is a more complex form, say a call to a slow function,
the repeated evaluation can slow down the execution significantly. If final has side
effects, executing it more than once is probably incorrect.
A well-designed macro definition takes steps to avoid this problem by
producing an expansion that evaluates the argument expressions exactly
once unless repeated evaluation is part of the intended purpose of the
macro. Here is a correct expansion for the for macro call above:
(let ((i 1) (max 3)) (while (<= i max) (setq square (* i i)) (princ (format "%d %d" i square)) (inc i)))
Here is a macro definition that creates this expansion:
(defmacro for (var from init to final do &rest body) "Execute a simple for loop: (for i from 1 to 10 do (print i))." (` (let (((, var) (, init)) (max (, final))) (while (<= (, var) max) (,@ body) (inc (, var))))))
Unfortunately, this introduces another problem.
The new definition of
for has a new problem: it introduces a
local variable named
max which the user does not expect. This
causes trouble in examples such as the following:
(let ((max 0)) (for x from 0 to 10 do (let ((this (frob x))) (if (< max this) (setq max this)))))
The references to max inside the body of the for, which
are supposed to refer to the user's binding of max, really access
the binding made by for.
The way to correct this is to use an uninterned symbol instead of
max (see section Creating and Interning Symbols). The uninterned symbol can be
bound and referred to just like any other symbol, but since it is created
by for, we know that it cannot appear in the user's program.
Since it is not interned, there is no way the user can put it into the
program later. It will never appear anywhere except where put by
for. Here is a definition of
for which works this way:
(defmacro for (var from init to final do &rest body) "Execute a simple for loop: (for i from 1 to 10 do (print i))." (let ((tempvar (make-symbol "max"))) (` (let (((, var) (, init)) ((, tempvar) (, final))) (while (<= (, var) (, tempvar)) (,@ body) (inc (, var)))))))
This creates an uninterned symbol named
max and puts it in the
expansion instead of the usual interned symbol
max that appears
in expressions ordinarily.
Another problem can happen if you evaluate any of the macro argument
expressions during the computation of the expansion, such as by calling
eval (see section Eval). If the argument is supposed to refer to the
user's variables, you may have trouble if the user happens to use a
variable with the same name as one of the macro arguments. Inside the
macro body, the macro argument binding is the most local binding of this
variable, so any references inside the form being evaluated do refer
to it. Here is an example:
(defmacro foo (a)
  (list 'setq (eval a) t))
     => foo
(setq x 'b)
(foo x)
     ==> (setq b t)
     => t                  ; and b has been set.
;; but
(setq a 'b)
(foo a)
     ==> (setq 'b t)       ; invalid!
error--> Symbol's value is void: b
It makes a difference whether the user types a or x, because
a conflicts with the macro argument variable a.
In general it is best to avoid calling
eval in a macro
definition at all.
Occasionally problems result from the fact that a macro call is expanded each time it is evaluated in an interpreted function, but is expanded only once (during compilation) for a compiled function. If the macro definition has side effects, they will work differently depending on how many times the macro is expanded.
In particular, constructing objects is a kind of side effect. If the macro is called once, then the objects are constructed only once. In other words, the same structure of objects is used each time the macro call is executed. In interpreted operation, the macro is reexpanded each time, producing a fresh collection of objects each time. Usually this does not matter--the objects have the same contents whether they are shared or not. But if the surrounding program does side effects on the objects, it makes a difference whether they are shared. Here is an example:
(defmacro new-object () (list 'quote (cons nil nil))) (defun initialize (condition) (let ((object (new-object))) (if condition (setcar object condition)) object))
If initialize is interpreted, a new list (nil) is
constructed each time initialize is called. Thus, no side effect
survives between calls. If initialize is compiled, then the
macro new-object is expanded during compilation, producing a
single list (nil) that is reused and altered each time
initialize is called.
Comparative Advantage: Definition and Examples
Understand the definition of comparative advantage, using two goods as an example. This key lesson incorporates the basic foundations of economics into one foundational theory explaining which goods and services people and nations should produce and for whom they should produce them.
In the late 1700s, the famous economist Adam Smith wrote this in the second chapter of his book The Wealth of Nations:
'It is the maxim of every prudent master of a family, never to attempt to make at home what it will cost him more to make than to buy...What is prudence in the conduct of every private family, can scarce be folly in that of a great kingdom.'
He's observing two important principles of economics. The first one is that nations behave in the same way as individuals do: economically. Whatever is economical for people is also economical on a macroeconomic, or large, scale. The second thing he's observing is what we call the Law of Comparative Advantage. When a person or a nation has a lower opportunity cost in the production of a good, we say they have a comparative advantage in the production of that good.
Everyone has something that they can produce at a lower opportunity cost than others. This theory teaches us that a person or a nation should specialize in the good that they have a comparative advantage in.
Examining Opportunity Costs
So, let's explore this concept of comparative advantage using some examples from everyday life. For example, Sally can either produce 3 term papers in one hour or bake 12 chocolate chip cookies. Now let's add a second person, Adam, and talk about the same two activities, but as we'll see, Adam has different opportunity costs than Sally does. Adam is capable of producing either 8 term papers or 4 cookies in an hour. If we express both of these opportunity costs as equations, then we have:
For Sally, 3 term papers = 12 cookies.
For Adam, 8 term papers = 4 cookies.
We can ask two different questions about opportunity cost because we have two different goods. The first question we want to know is: what is the opportunity cost of producing 1 term paper? Reducing these equations down separately gives us:
For Sally, 1 term paper = 4 cookies. For Adam, 1 term paper = 0.5 cookies.
So, in this case, who has the lowest opportunity cost of producing 1 term paper? Adam does. Now, let's look at the same scenario from the opposite perspective and answer the second question: what is the opportunity cost of producing 1 cookie?
Now, I know that, in reality, no one is going to produce exactly 1 cookie unless it were a very, very big cookie, but when we reduce the equations down to 1 cookie, we can easily compare on an apples-to-apples basis (or cookie-to-cookie basis). So, let's take a look at the equations again:
For Sally, we have 12 cookies = 3 term papers.
For Adam, we have 4 cookies = 8 term papers.
Reducing these equations down gives us 1 cookie = 0.25 term papers for Sally, and for Adam 1 cookie = 2 term papers.
So, how do we decide who should produce term papers and who should be produce cookies? According to who has the lowest opportunity costs. That's what the law of comparative advantage says.
Who has the lowest opportunity cost of baking cookies? Sally does. Who has the lowest opportunity cost of producing term papers? Adam does.
So, we have two goods and two different people who have two different opportunity costs. The law of comparative advantage tells us that both of these people (Adam and Sally) will be better off if instead of both producing term papers and cookies, they decide to specialize in producing one good and trade with each other to obtain the other good.
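To make the bookkeeping concrete, here is a small Python sketch (not part of the original lesson) that derives each producer's opportunity costs from the hourly outputs above and picks the lower-cost producer for each good:

    # Hourly output for each producer: units of each good produced per hour.
    output = {
        "Sally": {"papers": 3, "cookies": 12},
        "Adam":  {"papers": 8, "cookies": 4},
    }

    # Opportunity cost of one unit of a good = units of the other good given up.
    def opportunity_cost(person, good, other):
        return output[person][other] / float(output[person][good])

    for good, other in [("papers", "cookies"), ("cookies", "papers")]:
        costs = {p: opportunity_cost(p, good, other) for p in output}
        producer = min(costs, key=costs.get)
        print("%s should specialize in %s (opportunity costs: %s)" % (producer, good, costs))

Running this reproduces the conclusion above: Adam's cost per term paper is 0.5 cookies versus Sally's 4, and Sally's cost per cookie is 0.25 term papers versus Adam's 2.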
This leads us to the conclusion that we should specialize. Individuals should specialize in the goods or services they produce. Firms and corporations should also specialize in what they have a lower opportunity cost of producing, and nations should specialize, as well. Whoever has the lowest cost relative to someone else can trade with them, and everyone gains something by trading.
Now that we've explored the law of comparative advantage, we need to make an important distinction. When a person or country has an absolute advantage, that means they can produce more of a good or service with the same amount of resources than other people or countries can. Another way to say it is they can produce it more cheaply than anybody else. This is a measure of how productive a person or country is when they produce a good or service. For example, let's say that country A can produce a ton of wheat in less time than any other nation with the same amount of resources. In this case, country A has an absolute advantage in the production of wheat.
Let's take another look at Sally and Adam, this time from the perspective of their labor productivity. As you can see, it takes Sally a 1/4 hour to produce 1 cookie, which is lower than the 1 hour that it takes Adam. Therefore, Sally is the most productive. She has an absolute advantage in the production of cookies.
In addition, it takes Sally 1 full hour to produce a term paper, while Adam can produce the same term paper in half the time - it's a 1/2 hour to produce a term paper for Adam; therefore, Adam has an absolute advantage in producing term papers.
But the theory of comparative advantage is based on lower opportunity costs, not based on absolute advantage. It is possible to have the absolute advantage in the production of two goods (in other words, you have the ability to make both goods the quickest, cheapest, and the best) and still benefit from trading with someone else who has a lower opportunity cost.
For example, let's say country A can either produce 10 cars or 10 computers. This means that they have the exact same opportunity costs for these two goods, and we can reduce this equation down to 1 car = 1 computer. Now, if country B can produce either 4 cars or 8 computers, then their opportunity cost of producing 1 computer is equal to 1/2 a car (after we reduce that equation down).
I want you to notice something, though. On the corresponding graph, you can see that country A can produce cars and computers better, faster and cheaper than country B because their production possibility curve is outward (or to the right) of country B's production possibility curve. But look at the slope of country B's curve. It is steeper. Even though country A has an absolute advantage in the production of cars and computers, it still makes sense for them to trade with country B, who has a lower opportunity cost of producing computers. (It's 1/2 a car!) So, the law of comparative advantage leads us to the conclusion that these two countries will trade with each other - cars for computers. Country B will specialize in computers (because they have the lowest opportunity cost in this) while country A will specialize in cars.
To summarize what we've learned in this lesson, the law of comparative advantage says that a person or a nation should specialize in the good they produce at the lowest opportunity cost. Everyone has something that they can produce at a lower opportunity cost than others, and by trading with others, everyone is better off.
Python Exceptions Handling
Python provides two very important features to handle any unexpected error in your Python programs and to add debugging capabilities in them:
Exception Handling: This would be covered in this tutorial. Here is a list of standard Exceptions available in Python: Standard Exceptions.
Assertions: This would be covered in Assertions in Python tutorial.
What is Exception?
An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions. In general, when a Python script encounters a situation that it can't cope with, it raises an exception. An exception is a Python object that represents an error.
When a Python script raises an exception, it must either handle the exception immediately otherwise it would terminate and come out.
Handling an exception:
If you have some suspicious code that may raise an exception, you can defend your program by placing the suspicious code in a try: block. After the try: block, include an except: statement, followed by a block of code which handles the problem as elegantly as possible.
Here is the simple syntax of try...except...else blocks:
try:
   You do your operations here;
   ......................
except ExceptionI:
   If there is ExceptionI, then execute this block.
except ExceptionII:
   If there is ExceptionII, then execute this block.
   ......................
else:
   If there is no exception then execute this block.
Here are a few important points about the above-mentioned syntax:
A single try statement can have multiple except statements. This is useful when the try block contains statements that may throw different types of exceptions.
You can also provide a generic except clause, which handles any exception.
After the except clause(s), you can include an else-clause. The code in the else-block executes if the code in the try: block does not raise an exception.
The else-block is a good place for code that does not need the try: block's protection.
Here is simple example which opens a file and writes the content in the file and comes out gracefully because there is no problem at all:
#!/usr/bin/python

try:
   fh = open("testfile", "w")
   fh.write("This is my test file for exception handling!!")
except IOError:
   print "Error: can\'t find file or read data"
else:
   print "Written content in the file successfully"
   fh.close()
This will produce following result:
Written content in the file successfully
Here is one more simple example which tries to open a file where you do not have permission to write in the file so it raises an exception:
#!/usr/bin/python

try:
   fh = open("testfile", "w")
   fh.write("This is my test file for exception handling!!")
except IOError:
   print "Error: can\'t find file or read data"
else:
   print "Written content in the file successfully"
This will produce following result:
Error: can't find file or read data
The except clause with no exceptions:
You can also use the except statement with no exceptions defined as follows:
try:
   You do your operations here;
   ......................
except:
   If there is any exception, then execute this block.
   ......................
else:
   If there is no exception then execute this block.
This kind of a try-except statement catches all the exceptions that occur. Using this kind of try-except statement is not considered a good programming practice, though, because it catches all exceptions but does not make the programmer identify the root cause of the problem that may occur.
The except clause with multiple exceptions:
You can also use the same except statement to handle multiple exceptions as follows:
try:
   You do your operations here;
   ......................
except(Exception1[, Exception2[, ...ExceptionN]]):
   If there is any exception from the given exception list,
   then execute this block.
   ......................
else:
   If there is no exception then execute this block.
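For instance, one handler can serve several exception types. The following sketch assumes a hypothetical file numbers.txt and treats both failure modes the same way:

#!/usr/bin/python

try:
   fh = open("numbers.txt", "r")        # may raise IOError
   value = int(fh.readline())           # may raise ValueError
   fh.close()
except (IOError, ValueError):
   print "Error: could not read a number from the file"
else:
   print "Read the value", value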
The try-finally clause:
You can use a finally: block along with a try: block. The finally block is a place to put any code that must execute, whether the try-block raised an exception or not. The syntax of the try-finally statement is this:
try:
   You do your operations here;
   ......................
   Due to any exception, this may be skipped.
finally:
   This would always be executed.
   ......................
Note that you can provide except clause(s), or a finally clause, but not both. You can not use else clause as well along with a finally clause.
#!/usr/bin/python

try:
   fh = open("testfile", "w")
   fh.write("This is my test file for exception handling!!")
finally:
   print "Error: can\'t find file or read data"
If you do not have permission to open the file in writing mode then this will produce following result:
Error: can't find file or read data
Same example can be written more cleanly as follows:
#!/usr/bin/python

try:
   fh = open("testfile", "w")
   try:
      fh.write("This is my test file for exception handling!!")
   finally:
      fh.close()
except IOError:
   print "Error: can\'t find file or read data"
When an exception is thrown in the try block, the execution immediately passes to the finally block. After all the statements in the finally block are executed, the exception is raised again and is handled in the except statements if present in the next higher layer of the try-except statement.
Argument of an Exception:
An exception can have an argument, which is a value that gives additional information about the problem. The contents of the argument vary by exception. You capture an exception's argument by supplying a variable in the except clause as follows:
try:
   You do your operations here;
   ......................
except ExceptionType, Argument:
   You can print value of Argument here...
If you are writing the code to handle a single exception, you can have a variable follow the name of the exception in the except statement. If you are trapping multiple exceptions, you can have a variable follow the tuple of the exception.
This variable will receive the value of the exception mostly containing the cause of the exception. The variable can receive a single value or multiple values in the form of a tuple. This tuple usually contains the error string, the error number, and an error location.
Following is an example for a single exception:
#!/usr/bin/python

# Define a function here.
def temp_convert(var):
   try:
      return int(var)
   except ValueError, Argument:
      print "The argument does not contain numbers\n", Argument

# Call above function here.
temp_convert("xyz");
This would produce following result:
The argument does not contain numbers
invalid literal for int() with base 10: 'xyz'
Raising an exceptions:
You can raise exceptions in several ways by using the raise statement. The general syntax for the raise statement.
raise [Exception [, args [, traceback]]]
Here Exception is the type of exception (for example, NameError) and argument is a value for the exception argument. The argument is optional; if not supplied, the exception argument is None.
The final argument, traceback, is also optional (and rarely used in practice), and, if present, is the traceback object used for the exception
An exception can be a string, a class, or an object. Most of the exceptions that the Python core raises are classes, with an argument that is an instance of the class. Defining new exceptions is quite easy and can be done as follows:
def functionName( level ):
   if level < 1:
      raise "Invalid level!", level
      # The code below to this would not be executed
      # if we raise the exception
Note: In order to catch an exception, an "except" clause must refer to the same exception thrown either class object or simple string. For example to capture above exception we must write our except clause as follows:
try:
   Business Logic here...
except "Invalid level!":
   Exception handling here...
else:
   Rest of the code here...
Python also allows you to create your own exceptions by deriving classes from the standard built-in exceptions.
Here is an example related to RuntimeError. Here a class is created that is subclassed from RuntimeError. This is useful when you need to display more specific information when an exception is caught.
In the try block, the user-defined exception is raised and caught in the except block. The variable e is used to create an instance of the class Networkerror.
class Networkerror(RuntimeError):
   def __init__(self, arg):
      self.args = arg
So once you defined above class, you can raise your exception as follows:
try:
   raise Networkerror("Bad hostname")
except Networkerror,e:
   print e.args
Number & Operations for Teachers
Copyright David & Cynthia Thomas, 2009
Modeling Multiplication with Regrouping
In addition to understanding the meaning of multiplication and division and their properties, students must also become fluent in performing complex arithmetic computations involving these operations. For many students, the standard algorithms associated with these operations are a mystery and a source of considerable frustration.
Figure 3.11: Modeling Multiplication
Figure 3.11 presents a concept model and expanded algorithm that emphasizes the meaning of multiplication and explains why the standard algorithm works. In this model, the number 17 is represented as the sum 10 + 7. The ten is modeled as a strip and the seven as a string of ones. This quantity is then multiplied by two, seen as two identical rows, each containing a strip and seven ones. The expanded algorithm is an interpretation of the distributive law: 2(17) = 2(10 + 7) = 2(10) + 2(7) = 20 + 14 = 34.
A more complex example is seen in Figure 3.12. This figure shows an area model for the product 43 x 25 that could be assembled on a table top using base ten blocks or drawn on a piece of graph paper. Note that each large square is 10 x 10 = 100 square units and that the long, narrow rectangles are 1 x 10 = 10 square units. Beside the concept model is the standard algorithm for performing such calculations. For many students, the rules associated with this algorithm appear arbitrary and are easily confused.
The purpose of the expanded algorithm is to add meaning to the computational process and to associate each partial product in that process with a shaded portion of the Concept Model. The first step in constructing this algorithm is to rewrite the factors as 40 + 3 and 20 + 5. The arrows in the expanded algorithm indicate the four partial products formed. Each partial product is written down in a manner that displays its true value, not in the abbreviated form seen in the standard algorithm. As seen in the section labeled Partial Products as Areas, each partial product listed in the Expanded Algorithm is seen to represent a different area of the shaded Concept Model.
Further comparison of the Standard Algorithm and the Expanded Algorithm may be used to justify why the Standard Algorithm works, if that is important. But students who prefer the openness of the Expanded Algorithm should feel free to adopt that procedure as their favorite, if they so desire.
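For reference, the four partial products in the 43 x 25 example work out as follows (this arithmetic is implied by the figure rather than printed here):

    43 x 25 = (40 + 3) x (20 + 5)
            = (40 x 20) + (40 x 5) + (3 x 20) + (3 x 5)
            = 800 + 200 + 60 + 15
            = 1,075

Each of the four terms corresponds to one shaded region of the area model: the hundreds squares, the two sets of tens strips, and the unit squares.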
Figure 3.12: Concept Model and Expanded Algorithm for Multiplication of Whole Numbers
Sketch a concept model for the indicated product 11 x 32
6.1 Correlation between Variables
In the previous section we saw how to create crosstabs tables, relating one variable with another and we computed the Chi-Square statistics to tell us if the variables are independent or not. While this type of analysis is very useful for categorical data, for numerical data the resulting tables would (usually) be too big to be useful. Therefore we need to learn different methods for dealing with numerical variables to decide whether two such variables are related.
Example: Suppose that 5 students were asked their high school GPA and their College GPA, with the answers as follows:
Student   HS GPA   College GPA
A         3.8      2.8
B         3.1      2.2
C         4.0      3.5
D         2.5      1.9
E         3.3      2.5
We want to know: is high school and college GPA related according to this data, and if they are related, how can I use the high school GPA to predict the college GPA?
There are two answers to give:
- first, are they related, and
- second, how are they related.
Casually looking at this data it seems clear that the college GPA is always worse than the high school one, and the smaller the high school GPA the smaller the college GPA. But how strong a relationship, if any, seems difficult to quantify.
We will first discuss how to compute and interpret the so-called correlation coefficient to help decide whether two numeric variables are related or not. In other words, it can answer our first question. We will answer the second question in later sections. First, let's define the correlation coefficient mathematically.
Definition of the Correlation Coefficient
If your data is given in (x, y) pairs, then compute the following quantities:

Sxx = Σx² - (Σx)²/n
Syy = Σy² - (Σy)²/n
Sxy = Σ(x·y) - (Σx)(Σy)/n

where the "sigma" symbol Σ indicates summation and n stands for the number of data points. With these quantities computed, the correlation coefficient is defined as:

r = Sxy / sqrt(Sxx · Syy)

These formulas are, indeed, quite a handful, but with a little effort we can manually compute the correlation coefficient just fine.
To compute the correlation coefficient for our above GPA example we make a table containing both variables, with additional columns for their squares as well as their product as follows:
Student   x (HS GPA)   y (College GPA)   x²              y²              x*y
A         3.8          2.8               3.8² = 14.44    2.8² = 7.84     3.8*2.8 = 10.64
B         3.1          2.2               3.1² = 9.61     2.2² = 4.84     3.1*2.2 = 6.82
C         4.0          3.5               4.0² = 16.00    3.5² = 12.25    4.0*3.5 = 14.00
D         2.5          1.9               2.5² = 6.25     1.9² = 3.61     2.5*1.9 = 4.75
E         3.3          2.5               3.3² = 10.89    2.5² = 6.25     3.3*2.5 = 8.25
Sum       16.7         12.9              57.19           34.79           44.46
The last row contains the sum of the x's, y's, x-squared, y-squared, and x*y, which are precisely the quantities that we need to compute Sxx, Syy, and Sxy. In this case we can compute these quantities as follows:
- Sxx = 57.19 - 16.7 * 16.7 / 5 = 1.412
- Syy = 34.79 - 12.9*12.9 / 5 = 1.508
- Sxy = 44.46 - 16.7 * 12.9 / 5 = 1.374
so that the correlation coefficient for this data is: 1.374 / sqrt(1.412 * 1.508) = 0.9416
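If you want to check this computation programmatically, here is a small Python sketch (not part of the original course notes) that reproduces the calculation for the GPA data:

    from math import sqrt

    # High school and college GPAs for the five students A-E
    x = [3.8, 3.1, 4.0, 2.5, 3.3]   # HS GPA
    y = [2.8, 2.2, 3.5, 1.9, 2.5]   # College GPA
    n = len(x)

    sxx = sum(v * v for v in x) - sum(x) ** 2 / n
    syy = sum(v * v for v in y) - sum(y) ** 2 / n
    sxy = sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y) / n

    r = sxy / sqrt(sxx * syy)
    print(round(r, 4))   # prints 0.9416 for this data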
Interpretation of the Correlation Coefficient
The correlation coefficient as defined above measures how strong a linear relationship exists between two numeric variables x and y. Specifically:
- The correlation coefficient is always a number between -1.0 and +1.0.
- If the correlation coefficient is close to +1.0, then there is a strong positive linear relationship between x and y. In other words, if x increases, y also increases.
- If the correlation coefficient is close to -1.0, then there is a strong negative linear relationship between x and y. In other words, if x increases, y will decrease.
- The closer to zero the correlation coefficient is, the weaker the linear relationship between x and y.
In the above example the correlation coefficient is very close to +1. Therefore we can conclude that there indeed is a strong positive relationship between high school GPA and college GPA in this particular example.
Using Excel to compute the Correlation Coefficient
While the table above certainly helps in computing the correlation coefficient, it is still a lot of work, especially if there are lots of (x, y) data points. Even using Excel to help compute the table seems like a lot of work. However, Excel has a convenient function to quickly compute the correlation coefficient without us having to construct a complicated table.
The Excel built-in function
=CORREL(RANGE1, RANGE2)
returns the correlation coefficient of the cells in RANGE1 and the cells in RANGE2. All arguments should be numbers, and no cell should be empty.
Example: To use this Excel function to compute the correlation coefficient for the previous GPA example, we would enter the data and the formulas as follows:
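For instance, assuming the high school GPAs were typed into cells B2 through B6 and the college GPAs into cells C2 through C6 (the exact ranges depend on where the data is entered), the formula =CORREL(B2:B6, C2:C6) placed in any empty cell returns approximately 0.94, matching the hand computation above.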
Consider the following artificial example: some data for x and y (which have no particular meaning right now) is listed below, in a "case A", "case B", and "case C" situation.
Case A:
x = 10, y = 20
x = 20, y = 40
x = 30, y = 60
x = 40, y = 80
x = 50, y = 100
Case B:
x = 10, y = 200
x = 20, y = 160
x = 30, y = 120
x = 40, y = 80
x = 50, y = 40
Case C:
x = 10, y = 100
x = 20, y = 20
x = 30, y = 200
x = 40, y = 50
x = 50, y = 100
Just looking at this data, it seems pretty obvious that:
- in case A there should be a strong positive relationship between x and y
- in case B there should be a strong negative relationship between x and y
- in case C there should be no apparent relationship between x and y
Indeed, using Excel to compute each correlation coefficient (we will explain the procedure below) confirms this:
- in case A, the coefficient is +1.0, i.e. strong positive correlation
- in case B, the coefficient is -1.0, i.e. strong negative correlation
- in case C, the coefficient is 0.069, i.e. no correlation
Note that in "real world" data, the correlation is almost never as clear-cut as in this artificial example.
Example: In a previous section we looked at an Excel data set that shows various information about employees. Here is the spreadsheet data, but the salary is left as an actual number instead of a category (as we previously had).
Download this file into Excel and find out whether there is a linear relationship between the salary and the years of education of an employee.
- Download the above spreadsheet and start MS Excel with that worksheet as input.
- Find an empty cell anywhere in your spreadsheet and type =CORREL(
- Select the first input range (corresponding to the salary) by dragging the mouse across all cells containing numbers in the "Salary" column
- Type a comma, then select the second input range (corresponding to the years of education) by dragging the mouse across the cells containing numbers in that column
- Type the closing parenthesis and hit RETURN
- Excel will compute the correlation coefficient. In our example, it turns out that the correlation coefficient for this data is 0.66
Since the correlation coefficient is 0.66 it means that there is indeed some positive relation between years of schooling and salary earnings. But since the value is not that close to +1.0, the relationship is not strong.
Analyzing Readings Using the Modes
This chapter is a very simple, brief introduction to
the rhetorical modes. Using the modes is like putting together the pieces
of a puzzle: most major paragraphs use at least one mode, and most papers use
several modes. The modes are useful in particular in helping writers learn
how to develop paragraphs, create longer papers in many subjects and disciplines
in college, and complete careful analyses of college readings.
The modes may help you survive writing assignments
in a wide variety of subjects and disciplines. This is because some
instructors give assignments using the names of the modes as key words: e.g.,
"Write a comparison-contrast paper on the psychology of Sigmund Freud and the
philosophy of Plato." For this reason, this chapter may be of help to you
as a reference--to help you understand just what the instructor is expecting.
A rhetorical mode is a strategy--a way or method of presenting a
subject—through writing or speech. Some of the better known rhetorical
modes are, for example, "argument" and "cause and effect."
There are literally dozens, perhaps hundreds, of strategies or methods for
presenting subjects; however, the modes are among the most basic. Instructors
have used rhetorical modes to teach writing or public speaking since ancient
Greek times over two thousand years ago, perhaps longer. Knowing the modes
can help us understand the organization--the methodology--of most kinds of
writings or other presentations.
The basic modes are presented below in alphabetical order. Though you can
study and practice the modes in any order, often it is helpful to start with
"Extended Definition" because its
pattern of thought is useful when writing the introduction to any paper using the
other rhetorical modes. Similarly, you may find "Description"
helpful to learn early: not only do many people find this mode easier to use,
but also its pattern of thought, too, is used in many other types of papers.
An "argument" is, simply, an educated guess or opinion, not a simple
fact. It is something debatable: "Men have walked on the moon" is a
fact, but "People will walk on Venus in the next ten years" is an
opinion. Anything that reasonably can be debated is an argument. A simple
argument paper usually presents a debatable opinion and then offers supports in
favor of it, or sometimes an argument paper will discuss both sides of an issue
and then give good reasons for choosing one side over the other. For
example, a paper about space flight might argue that humans should not spend
large sums of money in sending people into space. The paper might then
argue that three good reasons this is true are that there are many poor on our
planet, on whom our resources should be spent, that space flight is not as
enlightening for humankind as increasing literacy or cultural awareness, and
that most of the money being spent on space is for military purposes, which is
useless. Another type of argument paper might ask the main idea as a
question: "Should the human race spend large sums of money to send people
into space?" Then it might argue both sides thoroughly and, finally,
choose one side and give strong reasons why this side is best.
An argument paper often has what is called a "thesis" structure. It
starts with an introduction that offers an interesting opening--a quotation,
perhaps, or an interesting story, a statement of the main argument, and
sometimes a list of the several reasons (often three, but not necessarily so) to
be given in support of this argument. Then, step by step, the reasons are
given with supporting details such as quotations, facts, figures, statistics,
and/or people's experiences. If the paper is short, there may be just one
paragraph per reason. In a longer argument paper, there may be several
paragraphs or even several pages per reason. At the end, a conclusion
provides a restatement of the main argument and a final interesting quotation or story.
In the alternative form, the
introduction is much the same, and often starts with an interesting quotation or
story, but it offers the main idea as a question and provides the two (or more)
possible answers. It may or may not state which answer it will choose in
the end. The body is formed by having a section discussing the first
possible answer with reasons and details supporting it, the second possible
answer and its reasons and supporting details, and a final section in which you
choose one of the two answers and give strong reasons why you are doing so.
The conclusion once again restates your final choice and offers a final
interesting quotation or story.
Like all the other modes, argument is a thinking pattern or skill that is used in a
number of types of college papers in shorter form. You will find it in any
sentence, paragraph, or section of a paper in which an opinion is expressed,
especially when one or more supporting reasons are given for the opinion.
Argument is one of the most basic forms of human thinking. When you use
argument, you rise above the mere offering of a personal opinion precisely
because an argument requires supporting reasons, preferably with specific
supporting details, to justify the position you are taking.
"Cause and effect" simply means that you start with a subject (an
event, person, or object) and then show the causes (reasons) for it, and/or the
effects (results) of it. "Cause" means the reasons why or for
something, or the source of something. "Effects" simply are results or
outcomes. Cause-and-effect writing shows a chain of connected events, each the
logical result of the one before it. A simple cause-and-effect paper discusses
the chain of events related to a person, event, or object, showing what are the
causes and what are the results. For example, a paper about a solar car
might describe how it came to be built by an inventor and how he first became
interested in solar cars (the causes), and what the results of this solar car
might be--how its existence might lead people to take energy efficiency and
environmental concerns more seriously and even lead to mass-produced solar cars
(effects or results).
Usually, a cause-and-effect paper has an introductory paragraph defining or clarifying
the subject itself, and stating the nature of the paper (i.e., that your paper
is a cause-and-effect paper); a body of several to many paragraphs; and a brief
concluding paragraph. Assume, when you write a cause-and-effect paper,
that you are explaining events to someone who may know a little about them but
never has heard the entire story of how the events are linked by logical cause and effect.
At the end
of your cause-and-effect paper, add a final, concluding paragraph. It
should summarize, very briefly, the most important cause and effect concerning
your subject. And it might offer a final interesting thought or two about it.
It also is
possible to use cause and effect in less than a full paper. In fact, many
explanations and discussions involve cause-and-effect logic in just a paragraph
or two, just a sentence, or even within a phrase within a sentence.
Anytime you want to answer the question of why something has happened, you are
using cause-and-effect logic.
"Classification" means that a subject--a person, place, event, or
object--is identified and broken into parts and sub-parts. This type of paper is
slightly more complex than others. For this reason, you might first want
to learn to write "Extended Definition,"
"Comparison/Contrast," and "Description" papers.
As an example of a classification paper, imagine you want to classify a specific
student. You might first start by identifying this student by name and
briefly defining him or her. Second, you would choose a system by
which to classify him: e.g., you could choose a system that would describe his
looks, school classes, and after-school activities; or you might choose a
biological system and describe him by his physical type, health, blood type, and
other biological markings; or, perhaps, you might choose to describe the student
by his psychological makeup, his family history, and/or even his medical
history. Third, once you have chosen a system, you would then describe the
person. As you do so, you would want to show how, in each part of our
classification, he is similar to others like him and also how he differs
from them--this is the heart of developing lengthy description in a good
classification paper, to use comparisons and contrasts with each small element
of our classification system.
A classification paper starts with a short introduction. In it, you state
and briefly define (see "Extended Definition") your subject. You
also should state clearly that you intend to classify your subject. In the
body of your paper, you describe your subject according to the classification
system you have chosen. You choose a system based partly on what your
audience expects (e.g., a psychology instructor probably would expect you to
classify and describe using a system of psychology; a biology instructor, a
system of biology; etc.) and partly on how many classification categories you
need to make your paper be well developed (often, the more categories you have,
the more length you can develop). Be sure to break down the body into a
number of separate paragraphs. Finally, your conclusion briefly reminds
your audience of the subject and purpose and, perhaps, ends with a final,
interesting sentence or two.
Classification is used as a pattern of thinking, speaking, and writing in
shorter forms, too. Whenever you must break down a subject into its
separate parts, you are classifying. Classification is almost as basic a
way of thinking as are "Cause and Effect" (above) and "Comparison/Contrast" (below).
"Comparison/contrast" means to show how subjects are alike and/or
different. A simple comparison/contrast paper often has two subjects and
describes how they are alike and then how they differ. For example, a
comparison/contrast paper on two forms of weekend entertainment, camping and
dancing, might first give details on how both can involve physical skills,
friends, and enjoying sounds and sights; then the paper might give details of
how camping and popular dancing differ in that one happens in nature and the
other in the midst of civilization, one usually is slow and quiet and the other
often fast and loud, and one peaceful while the other is rousing. If you are
asked to write a comparison/contrast paper on just one subject, you can first
compare it to the subjects it is like and then contrast it to the subjects that
seem opposite it; several different similarities and several different opposites
are acceptable, even helpful, in such a paper. For example, if you were
going to write a comparison/contrast paper about airports, you might decide
to compare them to city bus stations, train stations, and street bus stops.
Then you might contrast them with each of these.
In academic writing, comparison/contrast writing sometimes is used to show how two
related viewpoints--two ideas or opinions--can be similar but different: for
example, in the abortion controversy, some people believe that abortions are
wrong; others believe that artificial birth control is wrong. These two
positions are similar, but they also are different--leading to different
arguments and different results at times. Comparison/contrast also can be
useful in analyzing an author's argument by comparing it to someone else's
argument (yours or another author's), showing points of similarity and points of
difference. For example, if an author argues for a constitutional amendment
preventing gender discrimination, you could analyze the argument by comparing
and contrasting it to the reasons for other constitutional amendments which have already been passed.
Start your comparison/contrast paper simply and clearly: tell your readers in a brief
introduction what you are going to do (compare, contrast, or both) and what your
subject or subjects are. It also may be helpful to offer a very brief
definition (see "Extended Definition") of your subject(s). Then
write the body. It is a good idea to provide at least one paragraph for
each intellectual function you are going to do. For example, you might
first have just one paragraph (or one set of paragraphs) that use comparison,
then another set that uses just contrast. Instead, you might organize your
paragraphs by subject: using the example above of airports, you might have one
paragraph or set of paragraphs comparing and contrasting them to city bus
stations, a second set comparing and contrasting them to train stations, and a
final set to street bus stops. The organization you choose for your body
paragraphs should be the one that helps your readers most easily understand your
comparisons/contrasts. Your conclusion should be one paragraph containing
a summary of your subject and purpose (to compare and/or contrast), and a final
interesting sentence or two. The audience you should consider as you plan
and then write your paper is anyone who knows all of the subjects you are
talking about but who would find it interesting to read about how they are alike and how they differ.
Comparison and contrast both are commonly used in short form in many other types of papers,
too. For example, you must use comparison and contrast to define something
(see "Extended Definition": you show what the subject is like; then
you show how it differs or contrasts from others like it). You also use
comparison anytime you explain that something is "like" something
else; likewise, you use contrast whenever you want to show how something is
different. Comparison/contrast is quite deeply and naturally imbedded in
our everyday thinking and logic.
SAMPLE COMPARISON-CONTRAST PAPER: Go to "Analysis
Using Comparison/Contrast" in the "Analysis" chapter.
"Description" means "illustrative detail." A description
paper often takes a person or object and then describes that person or thing in
great illustrative detail. For example, a description paper about a close
friend might describe his or her appearance, her actions, and her personality,
both through direct descriptive words--like paintings of her in different
situations--and through stories or vignettes showing her in action. It is
important to be thorough--to provide plenty of details. Often it is helpful
to use one or more plans or systems of description. One typical plan is to
move in a specific direction: e.g., from head to foot when describing a person,
or perhaps clockwise when describing a room or place. The exact direction
or order does not matter as long as you are consistent. Another system is
to use the five senses to describe; still another is to use the five W's of
journalism by answering the questions "Who, What, Where, When, and Why or
How?" When you describe a subject that moves--a person or moving object--it
is wise to describe not only its appearance when standing still, but also its
movement. In fact, whenever you write a description paper, it is wise to
include as much action as possible: to make your readers see a movie whenever
possible, and not just a painting or drawing.
A description paper is organized very simply. You can start with a very
short paragraph introducing or defining the subject, or a longer one that offers
a particularly striking first description or overall summary. Next, you
can write the body in as many or as few paragraphs as you need to fully describe
the subject. Organizing these paragraphs according to one or more plans or
systems often is helpful. Finally, you can write a concluding paragraph
either briefly or at length, depending on whether you want to achieve an abrupt
end or to provide some kind of especially strong final description that you have
saved for the last.
This rhetorical mode is very common in shorter form, as well. When someone
writes a story, for example, whether he or she is a famous story writer or a
simple school child, he will use two main rhetorical modes: narration (the
giving of a series of events, as above) and description. Even business reports
must sometimes use description to provide an accurate and full account of the
appearance of something. Description plays an especially important part in
the teaching of writing, as writing instructors usually want their students to
learn to write in great detail--the more specifics, the better.
"Exemplification" means "the giving of an example." An
exemplification paper usually starts with a main idea, belief, or
opinion--something abstract--and then gives one extended example or a series of
shorter examples to illustrate that main idea. In fact, an exemplification paper
is a paper that illustrates an abstract idea. For example, if I wished to write
an exemplification paper about "The Opposite Sex--Problems and
Pleasures" (as a man or as a woman), there might be two ways I could go
about this. One would be, after introducing my general idea, to tell several
little stories about--give examples of--how the opposite sex can be both a
problem to deal with and a pleasure to be with. The other way I might write the
paper (and a stronger, more unified way of doing it) might be to pick out one
person of the opposite gender I have dated or lived with and describe how this
one person gave me both problems and pleasures in my overall relationship with
him or her.
A short exemplification paper is written like most of the other rhetorical-modes
papers. It usually starts with a single introductory paragraph that briefly
defines your subject and states what you will do in the paper--exemplify. Then
there are one or two to many paragraphs offering one or more extended examples
of your subject. Finally, there is a brief closing paragraph restating
what your subject is and offering some kind of final brief, strong example or
some other kind of interesting ending. Your audience is anyone who might
only have a partial understanding of the subject and to whom an example would be
helpful: in fact, you choose your examples partly by deciding what the audience
will easily understand.
Shorter versions of this rhetorical mode exist, as do the others, within the space of a
few paragraphs, one paragraph, or even as part of a larger paragraph.
Exemplification simply means to give an example of a subject, and it is possible
to do this in as little as a sentence.
This section describes how to start an "extended definition." An
extended definition simply defines a subject in a fuller or more extended--more
thorough--way than does a dictionary. Typically an extended definition has a
brief introductory paragraph of a few sentences, a body of one or several
paragraphs, and a brief concluding paragraph. Assume, when you write an
extended definition, that you are defining something for a student or perhaps a
foreigner who never has heard the term before.
To write an extended definition, start with an introductory paragraph first. Write
it in just two or three sentences as if it were a dictionary definition. A
good dictionary definition has the following parts:
(1) the exact term (the who or what) being defined,
(2) its classification--the class or group of people, events, or things to
which it belongs, and
(3) a brief summarizing description of the term. (This description often helps
define your subject by showing how it differs from similar subjects
that fit in the same classification as you have described in "2": in
other words, provide enough details that your subject cannot be mistaken for a
similar but different one.)
These three items are the three parts of a good dictionary definition. Use these in
the introduction; then the rest of your paper is the "extended"
part of the definition, adding further description of or about the term. Here
are three examples of good dictionary definitions using the three defining items
(1--term:) "Chris Smith
(2--class:) is a student at George Washington College.
(3--sum/des:) He is 19, is working on an engineering degree, and is from
(1--term:) "The Sun Car Race
(2--class:) is a national competition.
(3--sum/des:) It is based in Utah for solar-run cars developed by independent
inventors and schools."
(2--class:) is a new silicon-based car polish.
(3--sum/des:) It is made by Dup Chemicals and can be used so easily that it
practically applies itself."
extended-definition paper usually starts with such simple dictionary-like
definitions; then the definition is extended by writing a long body further
describing the term. The body paragraph(s) may consist of any or all of the following:
- further description and/or details about the subject
- one or several excellent examples
- a description of the subject in action or use
- a background or history of the subject
The conclusion should simply summarize your subject or say something
particularly interesting about it in a final paragraph. Try to make your
conclusion relatively short--just several sentences, if possible.
Extended definition is a rhetorical mode that can be used in something smaller or shorter than a
full paper. You can use extended definition for several paragraphs only in a
paper of much greater length. You also can add to a paper a one-paragraph
definition--like a brief encyclopedia definition. And you can use a short
definition, dictionary style, in many types of writing situations that call for
just a sentence or two of definition.
"Narration" or a "narrative" provides details of what
happened. It is almost like a list of events in the order that they
happened, except that it is written in paragraph form. A narration or
narrative doesn't have to show any cause and effect; it only needs to show what
happened in the order that it happened. History books are filled with
narrations. For example, if I were to describe the visit of the Pope to Denver
in 1993, I would use his itinerary and give details of each major event in that
visit. If I were writing a book about it, I would give details of many of the
more interesting minor events as well. I would do this in the order in which
they occurred: first the Pope did this, then he did that, and then he did a third thing.
A short narration paper starts with a brief introductory paragraph consisting of
two parts. The first is a sentence or two stating the event you are going
to narrate; you might even want to include the who, what, where, and when of the
event in this part. The second part is a simple statement that the paper
you are writing is a narrative of this event. In the body of the
narrative, you break the event into several parts--one part per paragraph.
Each paragraph would then further break down the event into sub-events and
enough description of them that your reader will know what you mean. The
body may have just a few paragraphs or many, depending on the length of paper
and complexity you want. The conclusion can be very brief: just a final
rewording of the overall event you have narrated, and a final interesting
comment or two about it, or perhaps a statement about how, where, or when this
event fits into the larger flow of history around it. Your audience is
anyone who knows little or nothing about the event but can understand it easily
once you explain it.
Like the other rhetorical modes, narration often is used in a context shorter than an
entire paper. More commonly, you may need to explain a sequence of events,
event by event, in just a paragraph or two when you are writing a longer paper
for some other purpose: if you need to give a long example of one or two
paragraphs, this example might, perhaps, be in story form--in the order in which
events happened. This would be a short narration. Any other time as
well that you write about events in the order in which they happened, you are narrating.
When you are working with the rhetorical modes, you sometimes can examine and even
summarize the structures of a reading by describing the rhetorical modes used in
it. Often, for example, in the introductory paragraph of a paper--or in
the beginning of the body--you might find the rhetorical mode of definition,
helping to define the subject. Often you will find description or
exemplification in a longer paragraph, helping to further describe or give an
example of the subject of that paragraph. Occasionally an entire paper
might be developed with just one primary mode, as discussed in this chapter.
However, it is much more likely--and extremely common--to find several of the
modes used to develop a paper, especially if it is a college essay or
professional paper. This is because each of the modes represents a form of
thinking that is very basic to writing, speaking, and indeed thinking itself;
each can be used in long or short form.
The most common major rhetorical-mode pattern you may find in college readings
is argumentation. It is common because many textbooks and other
assignments you will read in college--especially in the humanities, liberal
arts, and social sciences--are arguing a point. Sometimes this point--this
argument--is obvious. Often it is less so, primarily because in these
fields, most knowledge is based on speculation--on scholars' intelligent
guesses--rather than on hard scientific fact. For this reason, a typical
textbook chapter (or part of one) or assigned short essay in these fields is set
up as having a main argument and then a series of details helping to prove it.
With this in mind, we might look at the following pattern--or some parts of
it--as being somewhat typical for this kind of essay.
COLLEGE READING #1: Argumentation as Overall Structural/Controlling Mode
Introduction: Issue or Main
Argument. Possible use of definition (which includes
comparison/contrast and example), statement of overall cause and effect,
and/or other modes
Body: A Series of Points Helping to
Prove the Main Argument
Each Longer Paragraph:
(1) statement of a point providing a part of the proof
(2) development of the proof with detail using exemplification, narration,
cause and effect, and/or other modes
Conclusion: Concluding Argument.
Possible use of description, cause and effect, narration, and/or other modes.
On the other
hand, there is another common type of college reading, one that occurs more
often in the sciences and mathematics. This second kind includes textbook
chapters and shorter readings that are purely or largely factual: they simply
offer information. These, too, use the rhetorical modes. Most
commonly their overall form of structural development is description.
However, other modes also may be used as the overall structural pattern,
especially classification or cause and effect, as these lend themselves easily
to scientific and factual thinking.
COLLEGE READING #2: Description as Overall Structural/Controlling Mode
Introduction: Main Subject.
Possible use of definition or classification (both of which include
comparison/contrast) and/or other modes
- Descriptions of Factual Parts
of the Subject. Organization might be by mode of classification,
cause and effect, description, and/or other modes.
- Each Longer Paragraph:
Conclusion: Brief Restatement of
Subject and Final Summary. Possible use of description,
narration, cause and effect, and/or other modes.
Once you know your rhetorical modes, it is a simple
matter to look for them in your readings. Often it is easier to understand
and summarize such readings by understanding just what kind of thinking
pattern the author is using. If you understand these modes--these thinking
patterns--you will find it easier to follow the logic an author is attempting to
use--and, as is sometimes quite important in college reading--to disagree with
the author, too.
Each rhetorical mode is an excellent device to use for writing a paper.
Such writing helps you practice the pure form of the mode in an extended way.
This practice will help you as you write the other types of college papers and as you analyze and argue about college
reading assignments. It is possible to make the modes fun: practice, for
example, narration by telling the blow-by-blow account of an interesting or even
silly event in your life; cause and effect by showing how one part of your life
inevitably leads to your doing or participating in another; comparison/contrast
by comparing and contrasting two activities or people you really
like or dislike; etc. However you practice the modes, your practice will
have the serious purpose of helping you understand, use, and find in others these
basic methods of thinking.
The Comprehensible Philosophy Dictionary
© 2010-2013 James Wallace Gray (Last updated 1/13/2013)
You can download a PDF copy of this dictionary here.
This dictionary is an attempt to comprehensively define all of the most important philosophy terms in a way that could be understood by anyone without requiring an extensive philosophical education. Examples are often discussed to help make the meaning of terms clear.
This list includes critical thinking concepts, and many of those should be understood by everyone to improve rational thought. Many of these concepts are important distinctions made by philosophers to help us attain nuanced thoughts. For example, David Hume introduced us to the concept of “matters of fact” and “relations of ideas.” It will often be said that a term can be contrasted with another when doing so can help us make certain distinctions.
Sometimes a term can be best understood in the context of other terms. They are related. For example, understanding “formal logic” can help us better understand “logical connectives.”
Note that multiple definitions are often given for a term. In that case the definitions are separated by numbers and we should keep in mind that we should try not to confuse the various definitions the terms can have. For example, philosophers use the word ‘argument’ to refer to an attempt at rational persuasion, but other people use the word to refer to hostile disagreement. See “ambiguity” and “equivocation” for more information.
a fortiori – Latin for “from the stronger thing.” A conclusion is true a fortiori if a premise makes it trivially true. For example, “All men are mortal, a fortiori, Socrates is mortal.”
a posteriori – Latin for “from the later.” A posteriori propositions or beliefs are justified entirely by observation. An example of an a posteriori proposition is “human beings are mammals.” “A posteriori” is the opposite of “a priori.”
a priori – Latin for “from the earlier.” A priori propositions or beliefs are justified (at least in part) by something other than observation. Many philosophers agree that propositions that are true by definition have an a priori justification. An example of an a priori proposition is “all bachelors are unmarried.” “A priori” is the opposite of “a posteriori.”
A-type proposition – A proposition with the form “all a are b.” For example, “all cats are animals.”
abduction – A form of reasoning that consists of trying to know what is likely true by examining the possible explanations for various phenomena. Abductive arguments are not necessarily deductive arguments, but they provide some support for the conclusion. The “argument to the best explanation” is an example of abductive reasoning. For example, we can often infer that a neighbor is probably home when we see a light turn on at her house because it’s often the best explanation.
abductive reasoning – A synonym for “abduction.”
The Absolute – A term for “God” or “the Good.”
absolute truth – Something true for all time no matter what situation is involved. A plausible example is the law of non-contradiction. (Statements can’t be true and false at the same time.)
abstract entities – Things that are not physical objects or states of mind. Instead, they exist outside space and time. For example, there are mathematical realists who think that numbers are abstract entities that exist apart from our opinions about them, and there are factual statements concerning how numbers relate. See “Platonic Forms” for more information.
abstraction – To conceptually separate various elements of concrete reality. For example, to identify an essential characteristic of human beings as the ability to reason would require us to abstract away various elements of human beings that we describe as “the ability to be rational.”
abstractism – The view that something is necessary insofar as it’s true of every consistent set of statements, and something is possible insofar as it’s true in at least one consistent set of statements. It’s necessary that oxygen is O2 insofar as it’s true that oxygen is O2 in every consistent set of statements, and it’s possible for a person to jump over a small rock insofar as at least one consistent set of statements has a person jump over a small rock. Abstractism could be considered to advocate the existence of “abstract entities” insofar as the existence of a consistent set of statements could be considered to be factual as an abstract entity.
absurdism – The view that it is absurd for people to try to find the meaning of life because it’s impossible to do so.
absurdity – (1) The property of contradicting our knowledge or of being logically impossible. For example, it is absurd to think knowledge is impossible insofar as we know that “1+1=2.” See “reductio ad absurdum” for more information. (2) “Absurd” is sometimes equated with “counterintuitive.” (3) An irreconcilable interest people have, or a search for knowledge that can’t be completed. For example, it is sometimes said that it’s absurd for people to search for an ultimate foundation for value (or the meaning of life) even though we can never find an ultimate foundation for value. (4) In ordinary language, “absurdity” often means “utterly strange.”
accessibility – (1) The relevant domain used to determine if something is necessary or possible. It is thought that something is necessary if it “has to be true” for all of the relevant domain, and something is possible if it is true of at least one thing within the relevant domain. For example, some philosophers believe that it’s possible for people to exist because they exist in at least one accessible possible world—the one we live in. See “accessible world,” “possible world,” “truth conditions,” and “modality” for more information. (2) In ordinary language, “accessibility” refers to the ability to have contact with something. For example, people in jail have access to food and water; and citizens of the United States have access to move to any city located in the United States.
accessible world – A world that is relevant to our world when we want to determine if something is necessary or possible. For example, we could say that something is necessary if it’s true of all accessible worlds. Perhaps it’s necessary that contradictions are impossible because it’s true of all accessible worlds. An accessible world is not necessarily a world we can actually go to. It could exist outside our universe or only exist conceptually. See “possible world,” “truth conditions,” and “modality” for more information.
accidental characteristic – A characteristic that could be changed without changing what something is. For example, an accidental characteristic of Socrates was his pug nose—he would still be a person (and Socrates) without having a pug nose. “Accidental characteristics” are the opposite of “essential characteristics.”
accidentalism – The metaphysical view that not every event has a cause and that chance or randomness is a factor that determines what happens in the universe. Many philosophers think that quantum mechanics is evidence of accidentalism. Accidentalism requires that we reject “determinism.”
acosmism – The view that the universe is illusory and god is the ultimate reality.
act utilitarianism – A consequentialist theory that claims that we should strive to maximize goodness (positive value) and minimize harm (negative value) by considering the results of our actions. The situation is very important to knowing what we should do. For example, it is generally wrong to hurt people, but it might sometimes be necessary or “morally right” to hurt others to protect ourselves. “Act utilitarianism” can be contrasted to “rule utilitarianism.”
ad hoc – A Latin phrase that literally means “for this.” It refers to solutions that are non-generalizable and only used for one situation. For example, ad hoc hypotheses are designed to save hypotheses and theories from being falsified. Some scientists might think dark energy is an ad hoc hypothesis because it is used to explain nothing other than why the universe is expanding at an increasing rate, which contradicts our understanding of physics.
ad hominem – A Latin phrase that literally means “to the person.” It refers to insults, and usually to fallacious forms of reasoning that make use of insults or disparaging remarks. For example, we could respond to the a doctor’s claim that “smoking is unhealthy” by saying the doctor who made the argument drinks too much alcohol.
ad infinitum – Latin for “to infinity” or “forevermore.” It can also be translated as “on and on forever.”
addition – A rule of inference that states that we can use “a” as a premise to validly conclude “a and/or b.” For example, “Dogs are mammals. Therefore, Dogs are mammals and/or lizards.”
æon – Latin for “life,” “age,” or “for eternity.” Plato used this term to refer to the eternal world of the Forms.
aesthetics – The philosophical study of beauty and art. For example, some philosophers argue that beauty is an objective property of things, but others believe that it’s subjective and might say, “Beauty is in the eye of the beholder.”
affirmative categorical proposition – A synonym for “positive categorical proposition.”
affirmative conclusion – A categorical proposition used as a conclusion with the form “all a are b” or “some a are b.” For example, “some animals are mammals.”
affirmative premise – A categorical proposition used as a premise that has form “all a are b” or “some a are b.” For example, “some mammals are dogs.”
affirming the disjunct – A fallacy committed by an argument that requires us to mistakenly assume two propositions to be mutually exclusive and reject one proposition just because the other is true. The argument form of an argument that commits this fallacy is “Either a or b. a. Therefore, not-b.” For example, consider the following argument—“Either Dogs are mammals or animals. Dogs are mammals. Therefore, dogs are not animals.”
affirming the consequent – An invalid argument with the form “if a, then b; b; therefore, a.” For example, “If all dogs are reptiles, then all dogs are animals. All dogs are animals. Therefore, all dogs are reptiles.”
agency – The ability of a fictional or real person to act in the world.
agent – A fictional or real person who has agency (the ability to act in the world).
agent causation – A type of causation that’s neither determined nor random produced by choices made by people. Agent causation occurs from an action caused by a person that’s not caused by other events or states of affairs. For example, it’s not caused by the reasoning of the agent. See “prime mover” and “libertarian free will” for more information.
agent-neutral reasons – A reason for action that is not dependent on the person who will make a decision. For example, everyone could be said to have a reason to find a cure for cancer because it would save lives. The assumption is that there is a reason to find a cure for cancer that does not depend on unique motivations or duties of an individual (and perhaps saving lives is good for its own sake). Classical utilitarianism is an agent-neutral ethical theory because it claims that all ethical reasons to act concern whatever has the most valuable consequences. “Agent-neutral reasons” are often contrasted with “agent-relative reasons.”
agent-relative reason – A reason for action that is dependent on the person involved. For example, a person has a reason to give money to a friend in need because she cares for the friend. Ethical egoism is an agent-relative theory that claims that the only reasons to act are agent-relative. “Agent-relative reasons” are often contrasted with “agent-neutral reasons.”
agnosticism – The view that we can’t (currently) know if gods exist or not.
Agrippa’s trilemma – A synonym for “Münchhausen trilemma.”
akrasia – Greek for “lacking power” and often translated as “weakness of will.”
alethic – Greek for “truth.”
alethic logic – A formal logical system with modal operators for “possible” (◊) and “necessary” (□.)
alethic modality – The distinction between “possibility” and “necessity” used within formal logical systems.
The All – A term for “the absolute,” “God,” or “the Good.”
algorithm – A step-by-step procedure.
alternate possibilities – Events that could happen in the future or could have happened in the past instead of what actually happened. Alternate possibilities are often mentioned to refer to the ability to do otherwise. For example, some people think free will and moral responsibility require alternate possibilities. Let’s assume that’s the case. If Elizabeth is morally responsible for killing George, then she had an alternate possibility of not killing George. If she was forced to kill George, then she isn’t morally responsible for doing it. Alternate possibilities are often thought to be incompatible with determinism.
altruism – Actions that benefit others without an overriding concern for self-interest. Altruism does not require self-sacrifice but altruistic acts do require that one does not expect to attain benefits in proportion to (or greater than) those given to others.
ambiguity – Statements, phrases, or words that can have more than one meaning. For example, the word ‘argument’ can refer to an unpleasant exchange of words or as a series of statements meant to give us a reason to believe a conclusion. “Ambiguity” can be contrasted with “vagueness.”
amor fati – Latin for “love of fate.” To value everything that happens and see it as good. Suffering and death could seen as being for a greater good, or at least a positive attitude might help one benefit from one’s own suffering. For example, Friedrich Nietzsche’s aphorism, “what doesn’t kill us makes us stronger” refers to the view that a positive attitude can help us benefit from our suffering.
amoral – Lacking an interest in morality. For example, an amoral person doesn’t care about what’s morally right or wrong, and a person acts amorally when she doesn’t care about morality at that moment in time. Many people think that babies and nonhuman animals act amorally because they have no concept of right or wrong. “Amoral” can be contrasted with “nonmoral.”
amphibology – A synonym for “amphiboly.”
amphiboly – A fallacious argument that requires an ambiguity based on the grammar of a statement. For example, “men often marry women, but they aren’t always ready for marriage.” In this case the word ‘they’ could refer to the men, the women, or both. An example of the amphiboly fallacy is the following argument—“If people feed dogs chocolate, then they will get hurt. You don’t want to get hurt. Therefore, you shouldn’t feed dogs chocolate.” In this case feeding dogs chocolate actually hurts dogs, not people. The argument requires us to falsely think that people get hurt by feeding dogs chocolate.
analogical reasoning – Reasoning using analogies that can be explicitly described as an “argument from analogy.”
analogy – (1) A comparison between two different things that draws similarities between two things. For example, punching and kicking people are both analogous in the sense that they are both generally wrong for the same reason (i.e. they are performed to hurt people). (2) An “argument from analogy.”
analytic – Analytic propositions or beliefs that are true because of their meaning. An example of an analytic proposition is “all bachelors are unmarried.” “Analytic” is the opposite of “synthetic.”
analytic philosophy – A domain of philosophy that’s primarily concerned with justifying beliefs as much as possible with a great deal of clarity and precision. However, the issues analytic philosophers deal with generally involve more speculation and less certainty than the issues natural scientists tend to deal with. “Analytic philosophy” is often contrasted with “continental philosophy.”
anarchism – The view that we should eliminate states, governments, and/or political rulers.
anecdotal evidence – (1) To attempt to persuade people to agree to a conclusion based on the experiences of an individual or even many individuals. Anecdotal evidence is often a fallacious type of argumentation. For example, many individuals could have experiences of winning sports games while wearing a four-leaf clover, but that doesn’t prove that four-leaf clovers actually give sports players luck. No fallacy is committed when the experiences of people are sufficient to give evidence for a causal relation and mere correlation can be ruled out. Fallacious appeals to anecdotal evidence could be considered to be a form of the “hasty generalization” fallacy. Also relevant is the “cum hoc ergo propter hoc” fallacy. (2) The experiences of a person that could be considered to be a reason to agree with some belief. For example, our experience of not getting cavities and brushing our teeth every day is at least superficial evidence that brushing our teeth could help us avoid getting cavities.
and/or – See “inclusive or.”
antecedent – (1) The first part or what happens first. (2) The first part of a conditional with the form “if a, then b.” (“a” is the antecedent). For example, consider the following conditional—“If it rains tomorrow, then we won’t have to water the lawn.” In this case the antecedent is “it rains tomorrow.”
anti-realism – The view that some domain is nonfactual (not part of reality) other than perhaps how it relates to social construction or convention. For example, “moral anti-realists” think that there are no moral facts, but perhaps we can talk about moral truth insofar as some statements conform to a social contract. “Anti-realism” is often contrasted with “realism.”
antinomy – A real or apparent contradiction between laws or rational beliefs. For example, Immanuel Kant argues that time must have a beginning because infinite events can’t happen in the past, but time can’t have a beginning because that would imply that there was a moment before time began. “Antinomies” are sometimes equated with “paradoxes.”
antithesis – The opposition to a thesis, generally within a dialectical process. Objections are antitheses found in argumentative essays; and the flaws of a political system that lead to less freedom could be considered to be the antitheses found in a Hegelian dialectic.
anomaly – A phenomenon that can’t yet be explained by science and could be taken to be evidence against a scientific theory. Anomalies are often explained sooner or later, but sometimes they can’t be explained because our observations of the facts are simply incompatible with the theory we assume to be true. For example, Mercury didn’t move around the Sun in the way we predicted based on Newton’s theory of physics, but it did move in the way a superior theory predicted (Einstein’s theory of physics).
anthropic principle – The view that the universe and observations of the universe must be compatible with the conscious beings that make the observations. For example, the laws of physics must be compatible with the existence of people or we can’t exist in the first place. (If the universe was incompatible with our existence, then we wouldn’t be here.)
anthropocentrism – The view that human beings are the center of the universe or the most important thing. For example, the view that we should have harmful experimentation using nonhuman animals when it saves human lives could be said to be anthropocentric.
anthropomorphism – To view nonhuman things as having human qualities or to present such things as if they had human qualities that they don’t. For example, we might say that computers “figure out” how to do math problems, but computers don’t actually think or figure things out.
appeal to authority – (1) An argument that gives evidence for a belief by referencing expert opinion. Appeals to authority are not fallacious as long as they actually appeal to the unanimous opinion of experts of the relevant kind. (2) A fallacious type of argument that appeals to the supposed expert opinion of others when the opinion referred to is controversial among the experts; or when the supposed expert that is appealed to is not an expert of the relevant kind.
appeal to consequences – A type of fallacy committed by arguments that conclude that something is true or false based on the effects the belief will have. For example, “We know it’s true that every poor person can become rich because poor people who believe they can become rich are more likely to become rich.”
appeal to emotion – To attempt to persuade people that something is true by appealing to their pity, by causing fear, or by appealing to some other emotion. For example, someone could argue that war is immoral by appealing to our pity of wounded innocent children. The harm done to the children might be relevant to why war is wrong, but it is not sufficient to prove that war is always wrong.
appeal to force – A fallacious form of persuasion that is committed when coercion is used to get people to pretend to agree with a conclusion, or in order to suppress opposing viewpoints. The appeal to force can be subtly used in an academic setting when certain views are taboo and could harm a person’s future employment opportunities. However, sometimes people also fear being punished for expressing their “heretical views.” For example, John Adams passed the Sedition Act, which imposed fines and jail penalties for anyone who spoke out against the government. Additionally, various heresies (taboo religious beliefs) have been punishable by death in various places and times.
appeal to ignorance – A fallacious argument that concludes something on the basis of what we don’t know. For example, to claim that “we should agree that extraterrestrials don’t exist because we can’t yet prove they exist” is fallacious because there are other reasons we might expect extraterrestrials to exist, such as the vastness of the universe.
appeal to nature – See “naturalistic fallacy.”
appeal to popularity – A fallacy committed by an argument that concludes something on the basis of popular opinion. The appeal to popularity is often persuasive because of a common bias people have in favor of popular opinions. Also known as the “bandwagon fallacy.”
appeal to probability – A fallacy committed by an argument that concludes that something will happen just because it might happen. For example, “It’s possible to make a profit by gambling. Therefore, I will eventually make a profit if I keep playing the slot machines.”
applied ethics – Ethical philosophy that’s primarily concerned with determining what course of action is right or wrong given various moral issues, such as euthanasia, capital punishment, abortion, and same-sex marriage.
apperception – To have attention or to be aware of an object as being something other than oneself. See “empirical apperception” and “transcendental apperception” for more information.
arbitrary – Something said or done without a reason. For example, the initial words we use for our concepts are arbitrary. We could have called bananas ‘gordoes’ and there was no reason to prefer to call them ‘bananas’ instead. However, the meaning of words is not arbitrary after the definitions are justified by common usage.
argument – (1) To provide statements and evidence in an attempt to lead to the plausibility of a particular conclusion. For example, “punching people is generally wrong because hurting people is generally wrong” is an argument. (2) In mathematics and predicate logic, “argument” is sometimes a synonym for “operands.” (3) In ordinary language, “argument” often refers to a verbal battle, a hostile disagreement, or a discussion that concerns a disagreement.
argument by consensus – A synonym for “appeal to popularity.”
argument diagram – A visual representation of an argument that makes it clear how premises are used to support a conclusion. Argument diagrams generally have numbers written in circles, and each number is used to represent a statement. Consider the following argument—“ Socrates is a human. All humans are mammals. All mammals are mortal. Therefore, Socrates is mortal.” An example of an argument diagram that can be used to represent this argument is the following:
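Rendered in plain text, one way to sketch such a diagram is to let (1), (2), and (3) stand for the three premises in the order given and (4) stand for the conclusion, with the plus signs showing that the premises work together and the arrow showing support: (1) + (2) + (3) ---> (4).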
argument from analogy – An argument that uses an analogy. For example, we could argue that kicking and punching people are both generally wrong because they’re both analogous—they both are generally wrong for the same reason (because they’re both performed to hurt people and it’s generally wrong to try to hurt people). Not all arguments using analogies are well-reasoned. See “weak analogy” for more information.
argument form – See “logical form.”
argument from absurdity – A synonym for “reductio ad absurdum.”
argument from fallacy – A synonym for “argumentum ad logicam.”
argument indicator – A term used to help people identify that an argument is being presented. Argument indicators are premise indicators or conclusion indicators. For example, ‘because’ is an argument indicator used to state a premise. See “argument” for more information.
argument map – A visual representation of an argument that makes it clear how premises are used to support a conclusion. Argument maps are a type of argument diagram, but the premises and conclusions are usually written in boxes. A map of the argument above would place each premise and the conclusion in its own box, with lines running from the premise boxes to the conclusion they support.
argument place – (1) In logic, it is the number of things that are predicated by a statement. For example, “Gxy” is a statement with two predicated things, so it has two argument places. (In this case “G” can stand for “attacks.” In that case “Gxy” would mean “x attacks y.”) (2) In mathematics, it’s the number of things that are involved with an operation. For example, addition is an operation with two argument places. “2 + 3” has two arguments: “2” and “3.”
argument to the best explanation – An attempt to know what theory, hypothesis, or explanatory belief we should have by comparing various alternatives. The best explanation should be the one that’s the most consistent with our observations (and perhaps exhibits various other theoretical virtues better than the alternatives as well). For example, it’s more plausible that a light turning on at a neighbor’s home was caused by a person flipping a switch than by a ghost, because we don’t know that ghosts exist. See “theoretical virtues” for more information. The argument to the best explanation is a form of “abduction.”
argumentative strategies – The methods we use to form a conclusion from premises. For example, the “argument from absurdity” and “argument from analogy” are argumentative strategies.
argumentum ad baculum – Latin for “argument from the stick.” See “appeal to force.”
argumentum ad consequentiam – Latin for “argument to consequences.” See “appeal to consequences.”
argumentum ad ignorantiam – Latin for “argument from ignorance.” See “appeal to ignorance.”
argumentum ad logicam – Latin for “argument to logic.” A type of fallacy committed by an argument that claims that a conclusion of an argument is false or unjustified just because the argument given in support of the conclusion is fallacious. A conclusion can be true and justified even if people give fallacious arguments for it. For example, Tom could argue that “the Earth exists because Tina is evil.” This argument is clearly fallacious, but the conclusion (that the Earth exists) is both true and justified.
argumentum ad naturam – Latin for “argument from nature.” See “naturalistic fallacy.”
aristocracy – A political system defined by the exclusive power to rule by an elite group of individuals.
Aristotelian ethics – An ethical system primarily concerned with virtue developed by Aristotle. Aristotle believed that (a) people have a proper function as political rational animals to help each other and use their ability to reason; (b) happiness is the greatest good worth achieving; (c) virtues are generally between two extremes; and (d) virtuous people have character traits that cause them to enjoy doing what’s virtuous and to do what’s good without deliberation. For example, courage is virtuous because it is neither cowardly nor foolhardy, and courageous people will be willing to risk their life whenever they should do so without a second thought.
aretê – Greek for “virtue” or “excellence.” (Also transliterated as “arete.”)
arity – (1) In logic, it refers to the number of things that are predicated. The statement “Fx” has an arity of one because there’s only one thing being predicated. For example, “F” can stand for “is tall” and in that case “Fx” means “x is tall.” The statement “Gxy” has an arity of two because there are two things being predicated. For example, “G” could stand for “loves” and in that case “Gxy” means “x loves y.” (2) In mathematics, arity refers to the number of things that are part of an operation. For example, addition requires two numbers. “1+2” is an operation with the following two arguments: “1” and “2.”
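In programming terms, arity corresponds to a function’s parameter count. The following Python sketch (with invented predicates, not drawn from this glossary) illustrates the idea:

```python
# A minimal sketch (hypothetical predicates): in code, the arity of a
# predicate or operation is simply its number of parameters.
def is_tall(x):              # arity 1: one thing is predicated
    return x in {"Goliath"}

def loves(x, y):             # arity 2: two things are predicated
    return (x, y) in {("Romeo", "Juliet")}

def add(x, y):               # addition also has an arity of two
    return x + y

print(is_tall("Goliath"), loves("Romeo", "Juliet"), add(1, 2))
# True True 3
```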
assertoric – Refers to the property of statements in a domain that assert something, i.e. that make claims meant to be true or false. Some philosophers think that moral judgments, such as “stealing is wrong,” are assertoric rather than noncognitive (neither true nor false). Assertoric statements are meant to be true or false depending on whether they accurately correspond to reality or relate properly to facts.
association – A rule of replacement that takes two forms: (a) “a and/or (b and/or c)” means the same thing as “(a and/or b) and/or c.” (b) “a and (b and c)” means the same thing as “(a and b) and c.” (“a,” “b,” and “c” stand for any three propositions.) The parentheses are used to group certain statements together. For example, “dogs are mammals, or they’re fish or reptiles” means the same thing as “dogs are mammals or fish, or they’re reptiles.” The rule of association says that we can replace either of these statements of our argument with the other precisely because they mean the same thing.
association fallacy – A type of fallacy committed by an argument with an unwarranted assumption that two things share a negative quality just because of some irrelevant association. For example, we could argue that eating food is immoral just because Stalin ate food. Also see the “halo effect” and “ad hominem” for more information.
atheism – It generally refers to the view that gods don’t exist. However, it is often divided into the categories of “hard atheism” and “soft atheism.”
atom – (1) The smallest unit of matter that is irreducible and indestructible. (2) In modern science, ‘atom’ refers to a type of particle. Atoms are made of protons, neutrons, and electrons. The number of protons in an atom determines what kind of chemical element it is. For example, a hydrogen atom has only a single proton in its nucleus.
attribute – (1) An element or aspect. (2) According to Baruch Spinoza, an attribute is what we perceive of as the essence (or defining characteristic) of what Descartes considered to be a substance, such as extension (for physicality) and thought (for the psychological part of reality). However, Spinoza rejected that mind and matter are two different substances.
autonomy – To be capable of acting freely based on one’s own judgments.
authentic – (1) To be authentic is to act true to one’s nature, to accept one’s innate freedom, and to refuse to let others make decisions (or think) for us. (2) In Martin Heidegger’s work, the term “for oneself” is often translated as “authentic.”
auxiliary hypothesis – The background assumptions we have during observation and experimentation. It is difficult to know when a scientific theory or hypothesis should be rejected by conflicting evidence because the evidence might actually only conflict with an auxiliary hypothesis. For this reason scientists continue to use the same theories and hypotheses until a better one is developed, and conflicting evidence is known as an “anomaly.” For example, a person could think that the belief that a drug is effective at curing a disease is proven wrong when it doesn’t cure someone’s disease, but the drug might have only been ineffective when the person who takes it doesn’t drink alcohol. In this case the auxiliary hypothesis was that the drug would be effective whether or not people drink alcohol.
axiology – The philosophical study of values.
axiom – A starting assumption prior to argument or debate. Axioms should be rationally defensible and some might be self-evident. For example, the law of non-contradiction is an axiom. If we don’t assume that things can’t be true and false at the same time, then reasoning might not even be possible.
background assumptions – Beliefs that are difficult to discuss or question because they are part of how a person understands the world and they are taken for granted. Background assumptions are often unstated assumptions in arguments similar to how many people skip steps when doing math problems.
bad company fallacy – A synonym for “association fallacy.”
bad faith – To act or believe something inauthentically. To deny one’s innate freedom, or try to let other people make decisions (or think) for us.
bad reasons fallacy – A synonym for “argumentum ad logicam.”
base rate fallacy – A fallacy committed when an argument draws a statistical conclusion while ignoring base rate information (how common something is in the relevant population). The most common version is the assumption that a highly accurate test will mostly flag genuine cases. For example, we might assume that a test for a disease that’s 99% accurate will correctly identify more people who have the disease than it will falsely claim have the disease. However, if only 0.1% of the population has the disease, then the test will falsely flag around ten times as many people as actually have it. See “base rate information” and “false positive” for more information.
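The arithmetic in the example above can be checked directly. The Python sketch below assumes a population of one million purely for illustration; the accuracy and prevalence figures come from the entry:

```python
# A minimal sketch: a "99% accurate" test applied to a rare disease flags far
# more healthy people than sick ones. The population size is assumed.
population = 1_000_000
prevalence = 0.001           # 0.1% of the population actually has the disease
accuracy = 0.99              # the test is right 99% of the time either way

sick = population * prevalence
healthy = population - sick

true_positives = sick * accuracy             # 990 sick people correctly flagged
false_positives = healthy * (1 - accuracy)   # 9,990 healthy people wrongly flagged

print(true_positives, false_positives, round(false_positives / true_positives, 1))
# 990.0 9990.0 10.1 (roughly ten times as many false alarms as real cases)
```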
base rate information – Information about a state of affairs that is used for diagnosis or statistical analysis. For example, we might find out that 70% of all people with a cough and runny nose have a cold. A doctor is likely to suspect a patient with a cough and runny nose has a cold in consideration of how common colds are. “Base rate information” can be contrasted with “generic information” concerning the frequency of a state of affairs, such as how common a certain disease is.
basic belief – Foundational beliefs that can be known without being justified from an argument (or argument-like reasoning). For example, axioms of logic, such as “everything is identical with itself,” are plausibly basic beliefs. “Basic beliefs” are part of “foundationalism,” and they don’t exist if “coherentism” is true.
basic desire – Something we yearn for or value for its own sake rather than as a means to an end. Pleasure and pain-avoidance are plausibly basic desires. It is possible that we desire food to attain pleasure and avoid pain rather than as a basic desire. “Basic desires” are similar to (and perhaps identical with) “final ends.”
bandwagon fallacy – A synonym for “appeal to popularity.”
Bayesian epistemology – A view of knowledge and justification based on probability. It features a formal apparatus for induction based on deduction and probability calculus. The formal apparatus is used to better understand probabilistic coherence, probabilistic confirmation, and probabilistic inference.
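As a rough illustration of the probability calculus involved (the numbers and the function below are invented for this sketch, not taken from any particular Bayesian text), Bayes’ theorem can be written as a small updating rule:

```python
# A minimal sketch: Bayes' theorem as a rule for updating a degree of
# confidence (credence) in a hypothesis H after seeing evidence E.
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not-H) * P(not-H)]
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# E.g. start at 0.5 credence; the evidence is twice as likely if H is true.
print(posterior(0.5, 0.8, 0.4))  # 0.666..., so confidence in H should rise
```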
Bedeutung – German for “reference.”
begging the question – A logical fallacy committed when an argument uses a controversial premise to prove a conclusion, and the controversial premise trivially implies that the conclusion is true. For example, “the death penalty is murder, so the death penalty is wrong” requires a controversial premise (that the death penalty is murder) to prove something else controversial (that the death penalty is wrong). Also see “circular reasoning.”
being – (1) Existence, reality, or the ultimate part of reality. Being could be said to be “what is.” The philosophical study of being is “ontology.” (2) “A being” is something unified in space and time that has a mind of its own. For example, people are beings. It is plausible that birds and mammals are also beings in this sense.
belief bias – A cognitive bias that’s defined by the tendency to think that an argument is reasonable just because we think the conclusion is likely true. In reality arguments can be obviously fallacious, even if the conclusion is true. For example, “the sky is blue, so dogs are mammals” has a true conclusion, but it’s obviously fallacious.
biased sample – (1) A sample that is not representative of the group it is meant to represent for the purposes of a study. For example, a poll taken in an area known to vote mainly for Republican politicians might indicate that the Republican presidential candidate is popular with the population at large. It might be the case that the Republican candidate is not the most popular one when all other voters are accounted for, and the sample is so biased that we can’t use it to have any idea about whether or not the Republican candidate is truly popular with the population at large. Also see “selective evidence” and “hasty generalization” for more information. (2) A fallacy committed by an argument based on a biased sample. For example, to conclude that a Republican presidential candidate is popular with the population at large based on a poll taken in a pro-Republican area.
biconditional – A synonym for “material equivalence.”
bifurcation fallacy – A synonym for “false dilemma.”
bioethics – Ethics related to biology. Bioethics is often related to scientific research and technology that has an effect on biological organisms. For example, whether or not cloning human beings is immoral.
bivalent logic – Logic with two truth values: true and false. See “the principle of bivalence” for more information.
black or white fallacy – A synonym for “false dilemma.”
blameworthy – Actions by morally responsible people that fail to meet moral requirements. For example, a morally responsible person who commits murder is blameworthy for that action. See “impermissible” and “responsibility” for more information. “Blameworthy” acts are often contrasted with “praiseworthy” ones.
booby trap – (1) A logical booby trap is a peculiarity of language that makes it likely for people to become confused or to jump to the wrong conclusion. For example, an ambiguous word or statement could make it likely for people to equivocate words in a fallacious way. Some people think all forms of debate are attempts at manipulative persuasion, but there are rational and respectful forms of debate. See “equivocation” for more information. (2) In ordinary language, a booby trap is a hidden mechanism used to cause harm once it is triggered by a certain action or movement. For example, Indiana Jones lifted an artifact from a platform that caused the room to collapse.
borderline case – A state of affairs that can be properly described by a vague term, but it is difficult to say how the vague term can be properly applied. For example, it might not be clear whether or not it’s unhealthy to eat a small bag of potato chips. Even so, we know that eating one potato chip is not unhealthy, and eating a thousand potato chips is unhealthy. See “vague” for more information.
brute facts – (1) Facts that exist that have no explanation. The reason brute facts lack explanations isn’t merely because we are incapable of explaining them. It’s because there is literally no explanation for us to find out about. If brute facts exist, then we should reject the “principle of sufficient reason.” (2) According to G.E.M. Anscombe, brute facts are the facts that make a non-brute fact true given the assumption that all other things are equal. For example, a person makes a promise given the brute facts of that person saying they will do something. This is only true if all else is equal and not in unusual circumstances. Perhaps a person doesn’t make a promise when joking around. This sense of “brute facts” is often contrasted with “institutional facts.”
burden of proof – (1) The requirement for a position to be justified during a debate. The burden of proof exists for a claim when the claim will be likely rejected by people until the claim is justified. The burden of proof can shift during a debate. For example, a good argument against a belief would shift the burden of proof onto anyone who wants to defend that belief. (2) The rational burden of proof is the property of a position that people should rationally reject unless at least minimal evidence can be given for it. For example, people have a rational burden of proof to have evidence that faeries exist, and we should reject the existence of faeries until that burden of proof is met.
capitalism – A type of economy with limited government regulation (a “free market”) and where the means of production (factories and natural resources) are privatized. Key features of capitalism include competition between people who sell goods and services, the profit motive (which is expected to motivate people to compete), and companies.
care ethics – An ethical perspective that focuses on the dependence and importance of personal relationships, and the primary importance of caring for others. Care ethics tends to emphasize the special obligations we have towards one another because of our relationships, such as the obligation of parents to keep their children healthy. Care ethics is often understood to be part of the “moral sentimentalist” and “feminist” traditions, and it’s often believed to be incompatible with utilitarianism and Kant’s categorical imperative.
case-based reasoning – Reasoning involving the consideration of similar situations or things. For example, a doctor could consider the symptoms and cause of illness of various patients that were observed in the past in order to decide what is likely the cause of an illness of another patient who has certain symptoms. Case-based reasoning uses the following four steps for computer models: (a) Retrieve – consider similar cases. (b) Reuse – predict how the similar cases relate to the current case. (c) Revise – check to see if the similar cases relate to the current case as was predicted and make a new prediction if necessary. (d) Retain – once a prediction seems to be successful, continue to rely on that prediction until revision is necessary. Case-based reasoning is similar to “analogical reasoning.”
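The four steps can be made concrete with a toy diagnostic program. The data and helper names below are invented for illustration and are not part of any standard case-based reasoning library:

```python
# A minimal sketch of the four steps named above: retrieve, reuse, revise, retain.
cases = [  # past cases: (symptoms, diagnosis)
    ({"cough", "runny nose"}, "cold"),
    ({"fever", "rash"}, "measles"),
]

def retrieve(symptoms):
    # Retrieve: find the past case that shares the most symptoms.
    return max(cases, key=lambda case: len(case[0] & symptoms))

def diagnose(symptoms, actual=None):
    prediction = retrieve(symptoms)[1]    # Reuse: predict the same diagnosis.
    if actual is not None and actual != prediction:
        prediction = actual               # Revise: correct a failed prediction.
    cases.append((symptoms, prediction))  # Retain: keep the solved case for later use.
    return prediction

print(diagnose({"cough", "fever", "runny nose"}))  # 'cold'
```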
categorical – (1) Overriding, without exceptions, and absolute. For example, categorical imperatives. (2) Involving categories or types of things. For example, categorical syllogisms.
categorical imperative – An imperative is a command or requirement. Categorical imperatives are overriding commands or requirements that don’t depend on our desires, and are rational even if we’d rather do something else. For example, it is plausible that we have a categorical imperative not to run around punching everyone in the face just for entertainment. The mere fact that someone might want to do it does not make it morally acceptable. Categorical imperatives are often contrasted with “hypothetical imperatives.” People often speak of “the categorical imperative” to refer to “Kant’s Categorical Imperative.”
categorical proposition – A proposition concerning categories. For example, “all men are mortal” concerns two categories: Men and mortality.
categorical syllogism – A syllogism that consists of categorical propositions (propositions that concern various categories or “kinds of things”). For example, “All animals are mortal. All humans are animals. Therefore, all humans are mortal.”
category – (1) A grouping or set of things that share a characteristic. For example, animals, minerals, and persons. (2) The most general concepts. For example, space, time, and causation.
category mistake – A confusion between two categories that leads to an error in reasoning. For example, we might say that an essay tells us the types of biases people suffer from, but that uses a metaphor—essays cannot literally tell us anything. They aren’t the kind of thing to say things, so it would be a category mistake to believe essays literally say things.
causal determinism – See “determinism.”
causal theory of reference – The view that names of things (e.g. ‘water’) refer to those things because of how people referred to the object in history. It is generally thought by supporters that reference requires “reference fixing” (e.g. when someone decides what to call it) and “reference borrowing” (e.g. the name is passed on by other people who talk about it). For example, at some point people called the stuff they have to drink to stay hydrated ‘water,’ and the fact that people kept calling it ‘water’ assures us that we use that word to refer to the same stuff as the people who first gave it that name.
causal theory of knowledge – The view that the truth of statements cause our knowledge of the statements’ truth, or that facts cause knowledge of facts. For example, a cat that lays on a mat can cause our belief that the cat is on the mat insofar as it being there causes us to see it. If it’s completely impossible to interact with an entity and the entity makes absolutely no difference to us whatsoever, then we might wonder if the entity exists at all.
causation – One thing that makes something else happen. For example, a red rolling billiard ball that hits a blue billiard ball and makes the blue one move. Causation involves necessary connections and laws of nature. We can predict when one event will cause another event based on understanding the state of affairs that exist and the laws of nature.
certainty – See “epistemic certainty” or “psychological certainty.”
ceteris paribus – Latin for “with all else being equal” or when considered in isolation. For example, ceteris paribus, killing people is wrong. However, there might be overriding factors that justify killing others, such as when it’s necessary for survival.
character – (1) Persisting traits that are resistant to change and influence a person’s decision-making. A person’s character can exhibit various character traits, such as virtues (such as courage) and vices (such as addiction). (2) The domain of character traits, such as virtues and vices. Virtues and vices could be used to describe the actual decisions and actions a person tends to make rather than persisting properties that are resistant to change.
character ethics – A synonym for “virtue ethics.”
charity – (1) The virtue in a disagreement or debate to describe other people’s beliefs and arguments accurately rather than to misrepresent them as being less reasonable than they really are. If we are not charitable in this way, then we will create a fallacious “straw man” argument. (2) The virtue concerned with helping others who are in need. For example, giving money to the poor is often charitable in this sense. (3) An organization or institution that exists to try to help others who are in need. For example, the Red Cross or a soup kitchen.
cherry picking – Finding or using evidence that supports a position while simultaneously ignoring any potential counter-evidence against the position. See “one-sidedness” for more information.
circular argument – An argument with a premise that’s identical to the conclusion. For example, “All dogs are animals because all dogs are animals.” The logical form of a circular argument is “a; therefore a.” Circular arguments are similar to the “begging the question” fallacy. Also see “circular reasoning.”
circular reasoning – (1) Reasoning involving a set of mutually supporting beliefs that are not justified by anything other than the set of beliefs. A simple form of circular reasoning is the following—A is justified because B is justified; B is justified because C is justified; and C is justified because A is justified. For example, “we should agree that stealing is wrong because it should be illegal; we should agree that stealing should be illegal because we shouldn’t want people stealing from us; and we shouldn’t want people stealing from us because it’s wrong.” (2) A “circular argument.”
class conflict – The power struggle between social classes. The wealthy are often thought to fight to maintain their power and privilege and the working class is thought to fight to attain a greater share of power. For example, the working class could fight for a higher minimum wage, and the wealthy could fight to keep receiving corporate welfare. Karl Marx thought that class conflict also happens at the level of ideology—the wealthy try to convince everyone else that those with wealth deserve to keep their wealth and maintain their privilege, but other people resist this ideology and offer alternatives.
class warfare – A synonym for “class conflict.”
cogent argument – An inductively strong argument with true premises. For example, “All objects that were dropped near the surface of the Earth in the past fell to the ground. Therefore, objects that are dropped near the surface of the Earth tomorrow will probably fall to the ground.” See “strong argument” for more information.
cognition – A mental process. For example, “inferential reasoning” is a form of cognition.
cognitive bias – A psychological trait that leads to errors in reasoning. For example, the “confirmation bias.”
cognitivism – The view that the judgments of a given domain can be true or false. For example, moral cognitivism states that moral judgments can be true or false. “Cognitivism” is often contrasted with “non-cognitivism.”
coherence – (1) The degree of consistency something has. For example, the beliefs “all men are mortal” and “Socrates is a man” are consistent. Contradictory beliefs are incoherent or “inconsistent.” (2) In ordinary language, “coherence” often refers to the degree of clarity and sense a person makes. Someone who is incoherent might say nonsense.
coherence theories of epistemology – See “coherentism.”
coherentism – The view that there are no foundational beliefs, but that some beliefs can be mutually supported by other beliefs. It is often claimed that an assumption is justified through coherence if it is useful as part of an explanation. Observation itself is meaningless without assumptions, and observation appears to confirm our assumptions as long as our observations are consistent with them. For example, my assumption that a table exists can be confirmed by touching the table, and my experiences involved with touching the table confirms my assumption that the table exists. Some philosophers argue that coherentism should be rejected because it legitimizes “circular reasoning,” which we ordinarily recognize as being a fallacious form of justification. However, coherentists claim that circular reasoning is not vicious as long as enough beliefs are mutually supporting.
common sense – (1) Beliefs or assumptions we are more certain about than the premises used by skeptical arguments against them, but it’s difficult or impossible to fully understand how we can be so certain about them. For example, G. E. Moore said he is absolutely certain that he knows that something existed before he was born and that something will still exist after he is dead. (2) Assumptions we hold without significant evidence when rejecting the assumptions does not appear to be a reasonable option. For example, we accept that inductive reasoning is effective even though we can’t prove it without circular reasoning. Rejecting inductive reasoning would lead to absurdity (and it would perhaps imply that we should reject all natural science altogether). (3) Beliefs or assumptions people tend to have prior to philosophical study. (4) According to Aristotle, the common sense is the internal sense that is used to judge and unite experiences caused by sense perception (the five senses: sight, sound, touch, taste, and smell).
communism – (1) A type of economy where the means of production (factories and natural resources) are publicly owned rather than privatized, and where there are no social classes (i.e. there is no working class or upper class). The difference between communism and socialism is not entirely clear and the terms are often used as synonyms. (2) In ordinary language, “communism” often refers to a type of totalitarian political system and economy where the government owns all the businesses and controls the means of production.
commutation – A rule of replacement that states that “a and b” and “b and a” both mean the same thing. (“a” and “b” stand for any two propositions.) For example, we know that “all dogs are animals and all cats are animals” means the same thing as “all cats are animals and all dogs are animals.” If we use one of these statements in an argument, then we can replace it with the other statement.
commutation of conditionals – A fallacy committed by arguments that have the logical form “if a, then b; therefore if b, then a.” (“a” and “b” stand for any two propositions.) For example, “If all snakes are reptiles, then all snakes are animals. Therefore, if all snakes are animals, then all snakes are reptiles.”
commutative – To be able to switch symbols without a loss of meaning. “a and b” has the same meaning as “b and a.” For example, “dogs are mammals and lizards are reptiles” has the same meaning as “lizards are reptiles and dogs are mammals.”
compatibilism – The view that determinism and free will are compatible. Compatibilists often believe we actually have free will, and their conception of free will is compatible with determinism. For example, compatibilists could say that we are free as long as we can do whatever we choose to do. A person can be free to choose to spend the next ten minutes eating food or taking a shower, and she is likely able to do either of those things assuming she chooses to. “Compatibilism” can be contrasted with “libertarian free will.”
complete theory – A theory is complete if and only if it can answer all relevant questions. For example, a normative theory of ethics is complete if it can determine whether any action is right or wrong.
completeness – See “semantic completeness,” “syntactic completeness,” “expressive completeness,” or “complete theory.”
complex question – A synonym for “loaded question.”
composition – (1) In logic, the term ‘composition’ refers to the “fallacy of composition.” (2) When a creditor agrees to accept a partial payment for a debt. (3) The arrangement of elements found in a work of art. (4) Producing a literary work, such as a text or speech.
compound proposition – A proposition that can be broken into two or more propositions. For example, “Socrates is a man and he is mortal” can be broken into the following two sentences: (a) Socrates is a man. (b) Socrates is mortal. “Compound propositions” can be contrasted to “non-compound propositions.”
compound sentence – See “compound proposition.”
comprehensiveness – The scope of a theory or explanation. A theory is more comprehensive than another if it covers a greater scope. Theories are more comprehensive if they are capable of explaining a greater number of observations or more types of phenomena. Consider the view that (a) it’s generally wrong to punch people and (b) the view that it’s generally wrong to hurt people. The view that it’s generally wrong to hurt people is more comprehensive because it can explain why many more actions are wrong than the view that it’s generally wrong to punch people.
conceptual analysis – A systematic study of concepts in an attempt to improve our understanding of them (perhaps to help us avoid confusion during debates). Conceptual analysis involves giving definitions, and giving necessary and sufficient conditions for using a term. Conceptual analysis could be revisionary by defining concepts in new ways or it can define concepts in ways that are almost entirely based on how people use language. For example, to say that killing people is generally the right thing to do would be revisionary to the point of absurdity because it would require a new definition for “right thing to do” that has little to nothing to do with how people use language. Even so, how people actually use language can be unstable, vague, or ambiguous; so revisionary definitions can be necessary.
conceptual framework – A systematic understanding of a field (such as morality) and all related concepts (such as moral duties, values, and virtues) that might not accurately represent reality, but it could exhibit various theoretical virtues. Conceptual frameworks provide a certain understanding of various concepts involved, but alternative ways of understanding the concepts could also be possible (or even superior).
conclusion – A statement that is meant to be proven or made plausible in consideration of other statements. For example, consider the following argument—“All men are mortal. Socrates is a man. Therefore, Socrates is mortal.” In this case “Socrates is mortal” is the conclusion. “Conclusions” are often contrasted with “premises.”
conclusion indicator – A term used to help people identify that a conclusion is being stated. For example, “therefore” or “thus.” See “conclusion” for more information.
concretism – The view that possible worlds exist just like the actual world, and that everyone from a possible world calls their own world the “actual world.” Concretism is an attempt to explain what it means to say that something is necessary or possible—something is necessary insofar as it’s true in every possible world, and something is possible insofar as it is true in at least one possible world. It is necessary that oxygen is O2 insofar as it’s true that oxygen is O2 in every possible world, and it’s possible that a person can jump over a small rock insofar as it’s true in at least one possible world. “Concretism” can be contrasted with “abstractism.” See “modality” and “modal realism” for more information.
conditional – (1) Something that happens or could happen depending on other facts. For example, making enough money for a living is often conditional on finding full time employment. (2) A “material conditional.”
conditional proof – A strategy used in natural deduction to prove that an argument form with an if/then proposition as its conclusion is logically valid. We know the argument form is valid if we can assume the premises are true and the first part of the conclusion is true in order to deduce the second part of the conclusion. For example, consider the argument “If A, then B. If B, then C. Therefore, if A, then C.” We can use the following conditional proof to know this argument is valid (a brute-force check of the same form is sketched after the derivation):
If we can assume the first part of the conclusion (“A”) and the premises to prove the second part of the conclusion (“C”), then the argument is valid.
We know “if A, then B” is true and “A” is true, so we know “B” is true. (See “modus ponens.”)
We know “if B, then C” is true and “B” is true, so we know “C” is true. (See “modus ponens.”)
We have now deduced that the second part of the conclusion is true, so the argument form is logically valid.
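As a supplement to the derivation above (not part of the original entry), the same argument form can be confirmed valid by brute force, checking that no truth assignment makes the premises true and the conclusion false:

```python
# A minimal sketch: "if A then B; if B then C; therefore if A then C" is valid
# because every truth assignment that makes the premises true also makes the
# conclusion true.
from itertools import product

def implies(p, q):
    return (not p) or q  # the material conditional

valid = all(
    implies(implies(a, b) and implies(b, c), implies(a, c))
    for a, b, c in product([True, False], repeat=3)
)
print(valid)  # True
```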
conditionalization – A rule concerning how we ought to update our beliefs and degrees of confidence when we attain new information. For example, a person who believes all swans are white ought to reject that belief once she sees a black swan.
confirmation – Strong evidence supporting a hypothesis or theory. For example, the fact that all known species of birds are warm-blooded is confirmation of the hypothesis that all birds are warm-blooded.
confirmation bias – One of the most important forms of cognitive bias that is evident when people take supporting evidence of their beliefs too seriously while simultaneously ignoring or marginalizing the importance of evidence against their beliefs. For example, a person with the confirmation bias could take her experiences of white swans as evidence that all swans are white but ignore the fact that some people have seen black swans.
conjunct – The first or second part of a conjunction. The logical form of a conjunction is “a and b.” Both “a” and “b” are conjuncts. For example, the conjunction “all dogs are mammals and all mammals are animals” has two conjuncts: (a) all dogs are mammals and (b) all mammals are animals.
conjunction – (1) A proposition that says both of two things are true. The logical form of conjunctions is “a and b.” For example, “all doctors are humans and all humans are capable of reasoning.” There are two common symbols used for conjunction in formal logic: “&” and “∧.” An example of a statement using one of these symbols is “A ∧ B.” (2) A rule of inference that states that we can use “a” and “b” as premises to validly conclude “a and b.” (“a” and “b” stand for any two propositions.) For example, “Birds are animals. The Sun will rise tomorrow. Therefore, birds are animals and the Sun will rise tomorrow.”
consequent – (1) The second part of a conditional with the form “if a, then b.” (“b” is the consequent.) For example, consider the conditional statement “if all dogs are mammals, then all dogs are animals.” In this case the consequent is “all dogs are animals.” (2) A logical implication of various beliefs. For example, a person who believes that “if all dogs are mammals, then all dogs are animals” and “all dogs are mammals” can validly infer the consequent, “all dogs are animals.” (3) The result of an event. For example, a person who turns the light switch downward consequently turned the light off.
consequentialism – Moral theories that state that the consequences of actions determine which actions are right or wrong. For example, if we know what has intrinsic value, then we can compare each possible course of action and see which course of action will maximize intrinsic goodness (i.e. lead to the most positive value and least negative value). Consequentialist philosophers would argue that such an action would be the “most right” and actions that depart from the ideal will be “more wrong” to whatever extent they fail to do what is best. Sometimes “utilitarianism” is used as a synonym for “consequentialism.”
consistency – The property of lacking contradictions. To be logically consistent is to have beliefs that could all be true at the same time. For example, “all fish are animals” and “all mammals are animals” are both logically consistent. However, “all fish are animals” and “goldfish are robots” are inconsistent (given that goldfish are fish and robots aren’t animals). We can compare “consistent” beliefs with “contradictions.”
consistent logical system – A logical system with axioms and rules of inference that can’t possibly be used to prove contradictory statements from true premises. See “formal logic,” “axioms,” and “rules of inference” for more information.
constant – See “logical constant” or “predicate constant.”
continuant – (1) A persisting thing. For example, we often think people persist through time and continue to exist from one moment to the next. (2) A persisting thing that “endures.” For example, people could persist through time and exist in their entirety at every moment despite going through many changes.
constructionism – See “constructivism.”
constructive dilemma – A rule of inference that states that we can use the premises “a and/or b,” “if a, then c,” and “if b, then d” to validly conclude “c and/or d.” (“a,” “b,” “c,” and “d” stand for any four propositions.) For example, “Either all dogs are mammals and/or all dogs are lizards. If all dogs are mammals, then all dogs are animals. If all dogs are lizards, then all dogs are reptiles. Therefore, all dogs are animals and/or reptiles.”
constructivism – (1) “Metaethical constructivism” is the view that morality is based on convention or agreement. Metaethical constructivism could claim that morality is based on our instinctual reactions or on a social contract. See “ideal observer theory” for more information. (2) The view that something is created through human interaction, agreement, or a common understanding. For example, the game “chess” and the presidency of the United States are constructed.
continental philosophy – A philosophical domain that often requires less precision and clarity in order to allow for a discussion of major issues. Continental philosophy is often a continuation of ancient philosophy involving highly abstract issues (such as the nature of “being”) and issues that directly affect our lives. “Continental philosophy” is often contrasted with “analytic philosophy.”
continuum fallacy – A fallacy that is committed by an argument that appeals to the vagueness of a term to unreasonably conclude something (usually based on the fact that we don’t know where to draw the line between two things). For example, we don’t know where to draw the line concerning how many hairs must be on a person’s head before that person is no longer bald, but we would commit the continuum fallacy to conclude from that fact that no one is bald. See “vagueness” for more information.
contingent truth – Propositions that are true based on some sort of dependence that “could have been otherwise.” Contingent statements are possible, but they are not necessary. For example, the fact that Socrates had a pug nose is a contingent truth. See “physical contingence,” “metaphysical contingence,” and “logical contingence.”
contingence – The property of being possible but not necessary. There is a sense in which contingent things “could have been otherwise.” Aristotle’s concept of an “accidental characteristic” refers to contingent characteristics. See “physical contingence,” “metaphysical contingence,” and “logical contingence.”
contradiction – (1) When two propositions cannot both be true due to their logical form. “Socrates was a man” and “Socrates was not a man” are two statements that can’t both be true because the logical form is “a” and “not-a.” (“a” is any proposition.) (2) In categorical logic, contradiction is a process of negating a categorical statement and expressing it as a different categorical form. For example, “all men are mortal” can be contradicted as “some men are not mortal.”
contradictory – (1) In categorical logic, a contradictory is the negation of a categorical statement expressed in a different categorical form. For example, “no men are immortal” is the contradictory of “some men are immortal.” (2) When two propositions form a contradiction. For example, it’s contradictory to say, “Exactly four people exist” and “only two people exist.”
contraposition – (1) To switch the terms of a categorical statement and negate them both. There are two valid types of categorical contraposition: (a) “All a are b” means the same thing as “all non-b are non-a.” (b) “Some a are not b” means the same thing as “some non-b are not non-a.” For example, the following argument is valid—“Some snakes are not mammals. Therefore, some non-mammals are not non-snakes.” (2) To infer a contrapositive from a categorical proposition. See “contrapositive” for more information. (3) In modern logic, it is also known as “transposition.”
contrapositive – A categorical proposition is the contrapositive of another categorical proposition when the terms are negated and switched. For example, the contrapositive of “all mammals are animals” is “All non-animals are non-mammals.” It is valid to infer the contrapositive of two different types of categorical propositions because they both mean the same thing: (a) “All a are b” means the same thing as “all non-b are non-a.” (b) “Some a are not b” means the same thing as “some non-b are not non-a.” For example, “some people are not doctors” means the same thing as “some non-doctors are not non-people.”
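As an informal illustration (the small universe of objects below is invented), a categorical statement and its contrapositive agree when checked against a domain:

```python
# A minimal sketch: on a small universe, "all A are B" and "all non-B are
# non-A" are true (or false) together.
universe = {"poodle", "beagle", "goldfish", "oak tree"}
A = {"poodle", "beagle"}               # e.g. the dogs
B = {"poodle", "beagle", "goldfish"}   # e.g. the animals

all_A_are_B = A <= B                                  # subset test
all_nonB_are_nonA = (universe - B) <= (universe - A)  # contrapositive form
print(all_A_are_B, all_nonB_are_nonA)  # True True
```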
contrary propositions – Propositions that are mutually exclusive. For example, “Socrates is a man” and “Socrates is a dog” are contrary propositions. (Both statements refer to the historical philosopher.)
convention – What is true based on agreement or a common understanding. For example, it’s a convention that people drive on the right side of the road in the United States (on two way roads), so it would be generally wrong to drive on the left side of the road in the United States.
converse – A categorical proposition or if/then statement with the two parts switched. The converse of “all a are b” is “all b are a.” (“a” and “b” are any two terms.) The converse of “if c, then d” is “if d, then c.” (“c” and “d” are any two propositions.) For example, the converse of “if all fish are animals, then all fish are organisms” is “if all fish are organisms, then all fish are animals.” It is valid to infer the converse of any categorical statement with the form “no a are b” or “some a are b.” See “conversion” for more information.
conversion – To switch the terms of a categorical statement. There are two valid types of conversion: (a) “No a are b” means the same thing as “no b are a.” (b) “Some a are b” means the same thing as “some b are a.” For example, the following is a valid argument—“No birds are dogs. Therefore, no dogs are birds.”
corpuscles – Small units of matter of various shapes and with various physical properties that interact with one another.
correlation – When two events or characteristics are found together. For example, not drinking water and being thirsty are correlated because they tend to be found together. “Correlation” can be contrasted with “causation.”
correspondence theory of truth – The view that true propositions correspond or relate properly to facts (or to reality). Correspondence theories of truth are compatible with “factual truths” and various forms of “realism.” The “correspondence theory of truth” is often contrasted with the “deflationary theory of truth.”
corrigible – Propositions or beliefs that can be improved or corrected by new information.
counter evidence – Evidence against a belief.
counterargument – An objection to an objection. An argument used to refute, disprove, or oppose an objection. For example, someone could argue against the belief that hurting people is always wrong by saying, “Hurting people in self-defense is never wrong, so it can’t always be wrong to hurt people.” Someone else can respond to that objection by giving a counterargument and saying, “Hurting people in self-defense is wrong when it involves excessive force, such as when we kill someone just for kicking us.”
counterexample – (1) An object or state of affairs that disproves a belief. For example, a white raven disproves the belief that “all ravens are black.” (2) An argument meant to prove another argument to be logically invalid by using the same argument form as the other argument, but the counterexample must have obviously true premises and an obviously false conclusion. Consider the invalid argument, “If dogs are lizards, then dogs are reptiles. Dogs are not lizards. Therefore, dogs are not reptiles.” A counterexample would be, “If dogs are reptiles, then dogs are animals. Dogs are not reptiles. Therefore, dogs are not animals.”
counterfactual – Conditional statements about what would be the case if something else wasn’t the case (that is actually the case). For example, “If Socrates was not a mortal, then Socrates was not a human.” Socrates was a mortal, so the counterfactual requires us to imagine what would have been the case if things were different.
counterintuitive – Something that conflicts with what we think we know for some reason. For example, it would be counterintuitive to find out that other people don’t have mental activity. What we find counterintuitive is often taken to be a reason for thinking something is false, but sometimes what we initially find to be counterintuitive is proven to be true. For example, people find it (at least mildly) counterintuitive to think that large objects fall at the same speed as small ones, but it’s been proven to be true.
credence – A subjective degree of confidence concerning how likely we believe something is to be true. For example, it would seem irrational to be highly confident that the law of gravity will no longer exist tomorrow. See “psychological certainty.”
credence function – A comparison between the actual state of the world and the credence (subjective degree of confidence) a person has of the world being that way. Ideally people will have a strong credence towards factual statements. For example, people should be very confident that more than five people exist considering that society couldn’t function without thousands of people existing.
criteria – Standards used for making distinctions. (The singular form is “criterion.”) For example, empiricists think the only relevant criterion that determines if something is a good justification is that it’s based appropriately on empirical evidence (observation).
critical reasoning – A synonym for “critical thinking.”
critical thinking – An understanding of argument analysis and fallacies. It is often equated with “informal logic,” but any qualities that lead to an increased understanding of rationality and an increased ability to be reasonable could be involved.
criticism – (1) An argument that is meant to persuade us to reject a belief of another argument. See “objection.” (2) Disparaging remarks, fault-finding, or judging something as falling short of certain requirements or standards.
cum hoc ergo propter hoc – Latin for “with this, therefore because of this.” A logical fallacy committed when an argument concludes that something causes something else to happen due to a correlation. For example, the fact that a person takes a sugar pill before recovering from an illness doesn’t prove that she recovered from the sugar pill. She might have recovered for some other reason. This fallacy is a version of the “false cause” fallacy.
cultural evolution – A synonym for “sociocultural evolution.”
cultural relativism – (1) The view that moral statements are true because we agree on their truth (or merely because we believe they are true). Rape and murder would be considered wrong for a society if that society agrees that they are wrong, but might be considered to be right in another culture. Cultural relativism refers to the view that moral statements are true because a culture agrees with them, but other forms of moral relativism could be individualistic—what’s right and wrong could depend on the individual. One form of relativism is the view that morality is determined by a “social contract.” Relativism should be contrasted with the view that an action could be either right or wrong depending on the context. (2) The view that the moral beliefs of various cultures differ. What one culture says is right or wrong is often different from what another culture says is right and wrong.
cynicism – (1) The practice of a philosophical group known as the cynics. The cynics were skeptical of argumentation and theorizing, and they focused on becoming virtuous, which they generally didn’t think required very much argumentation or theorizing. Cynics generally focused on being happy, free from suffering, and living in accordance with nature. The cynics were known for disregarding cultural taboos and believing that taboos are irrelevant to being virtuous. (2) It often refers to a pessimistic attitude. Cynicism can be characterized by mistrust towards people and the expectation that people will misbehave. (3) A skeptical attitude characterized by criticism towards various beliefs and arguments.
Das Man – German for “they self” and often translated as “the they” or “the one.” Martin Heidegger uses this term to refer to the social element of human beings—that we act for others, and that our thoughts are based on those of others. For example, we tell our children “one shares toys with others” when we want to teach them social norms.
Dasein – German for “being there.” Martin Heidegger’s term used for human beings to emphasize the view that they are not objects. Heidegger rejected the subject/object distinction and thought it led to a mistaken view, dualism—that the mind and body are totally different things. Dasein is used as a verb rather than a noun to emphasize that we are what we do and not an object of some sort.
de dicto – Latin for “of the word” or “of what is said.” For example, a person can consistently believe that water (i.e. a liquid we drink for survival that freezes when cold and turns to gas when hot) can boil at a lower temperature than H2O (a molecule consisting of two different chemical elements) under a de dicto interpretation. “De dicto” is often contrasted with “de re.”
de facto – Latin for “concerning fact.” Used to describe the actual state of affairs or practice regardless of what’s right or lawful. For example, a dictator could find an illegitimate way to attain power and be a ruler de facto. “De facto” is often contrasted with “de jure.”
de jure – Latin for “concerning law.” Used to describe a situation in terms of the law or ethical considerations. For example, a dictator who attains power illegitimately would not be in power de jure. “De jure” is often contrasted with “de facto.”
de re – Latin for “of the thing.” For example, a person cannot coherently believe that water can boil at a lower temperature than H2O because they both refer to the same thing under a de re interpretation. “De re” is often contrasted with “de dicto.”
debate – A prolonged discussion concerning a disagreement that is characterized by two or more opposing sides that (a) try to give reasons to believe a conclusion, (b) try to explain why the conclusions of the opposing side should be rejected, and (c) try to explain why the arguments given by the opposing side should be rejected. Debates need not be between two people and they need not exist in a face-to-face presentation. A single philosophical essay can be considered to be part of a debate that’s been going on for hundreds or thousands of years by philosophers in different time periods who read various arguments and respond to them.
decidability – A question is decidable if we can determine the answer by following a definite procedure. For example, logical systems are supposed to be able to determine if arguments are valid. An argument that can’t be determined to be valid by a logical system would be “undecidable” by that logical system. Decidability is related to, but distinct from, completeness. See “semantic completeness” for more information.
decision theory – See “utility theory.”
deconstructionism – A philosophical domain concerned with examining the assumptions behind various arguments and beliefs (known as “deconstruction”).
deduction – Reasoning or argumentation that attempts to prove a conclusion is true as long as we assume the premises are true. Good deductive arguments are logically valid. For example, “All men are mortal. Socrates is a man. Therefore, Socrates is a mortal” is a logically valid deductive argument. Deduction is often contrasted with “induction.”
deductive reasoning – See “deduction.”
deductively complete – See “syntactic completeness.”
default position – The position that lacks the burden of proof before debate begins (perhaps because it is rationally preferable). For example, the default position of a debate tends to be an undecided point of view against both those who are for and those who are against some belief. Both sides of a debate are therefore expected to argue for their particular beliefs.
defeasible – Reasoning is defeasible if it’s rationally compelling without being logically valid. The support the premises have for the conclusion could be insufficient depending on certain unstated facts. A defeasible argument can be defeated by additional information. Defeasible arguments could be considered to be reasons to believe something, all things equal—one consideration in favor of a conclusion. The opposite of “defeasible” is “indefeasible.”
defeater – The information that can defeat a defeasible argument. Defeaters are reasons against conclusions that are more important than the previous defeasible support for the conclusion.
defense – (1) A defensive argument against an objection (i.e. a “counterargument”). (2) A response to various objections in an attempt to explain why they aren’t convincing. (3) The opposition to an attack.
definable concept – A concept that can be defined and understood in terms of other concepts. For example, we can define “valid argument” as an argument with a form that assures us that it can’t have true premises and a false conclusion at the same time. “Definable concepts” can be contrasted with “primitive concepts.”
definiendum – The term that is defined by a definition. Consider the definition of “argument” as “one or more premises that supports a conclusion.” In this case the definiendum is “argument.” “Definiendum” can be contrasted with “definiens.”
definiens – The definition of a term. Consider the definition of “premise” as “a proposition used to give us reason to believe a conclusion.” In this case the definiens is “a proposition used to give us reason to believe a conclusion.” “Definiens” can be contrasted to “definiendum.”
deism – The view that one or more gods exist, but they are not people and/or they don’t interfere with human affairs. For example, Aristotle’s first cause (i.e. prime mover).
deity – See “god.”
deflationary – (1) The property of involving truth or reality without involving it as strongly as we might otherwise expect. Deflationary truth involves truth without any assumption regarding realism, but deflationary metaphysics could be compatible with realism (i.e. the existence of facts). (2) To have the property of shrinking or collapsing.
deflationary theory of truth – The view that to assert a statement to be true is merely to assert the statement, and that there is nothing more to be said about what “truth” means. The deflationary theory of truth is compatible with “nonfactual truths” and are sometimes contrasted with the “correspondence theory of truth.”
deflationism – See the “deflationary theory of truth.”
Demiurge – (1) A godlike being theorized by Plato that is thought to be similar to an artisan who crafts and maintains the physical universe. Plato did not describe the Demiurge as the creator of the entire physical universe, and Platonists often thought that the entire physical universe was created or dependent on a greater being called “the Good.” (2) According to Neoplatonists, the Demiurge is “Nous” (the mind or intellect of the Good).
democracy – A political system where people share power by voting. Many democracies have people vote for “representatives” who have the majority of the ruling power. (Representative democracies are also known as “republics.”)
DeMorgan’s laws – A rule of replacement that takes two forms: (a) “It’s not the case that both a-and-b” means the same thing as “not-a and/or not-b.” (b) “It’s not the case that a and/or b” means the same thing as “not-a and not-b.” (“a” and “b” stand for any two propositions.) For example, “it’s not the case that dogs are either cats or lizards” means the same thing as “no dogs are cats, and no dogs are lizards.”
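A minimal sketch of how both forms can be checked mechanically follows; it simply runs through every truth-value assignment using Python's built-in Boolean operators (the code and variable names are illustrative and not part of the glossary entry).

```python
from itertools import product

# Check both DeMorgan equivalences for every combination of truth values.
for a, b in product([True, False], repeat=2):
    # (a) "not (a and b)" is equivalent to "(not a) or (not b)"
    assert (not (a and b)) == ((not a) or (not b))
    # (b) "not (a or b)" is equivalent to "(not a) and (not b)"
    assert (not (a or b)) == ((not a) and (not b))

print("DeMorgan's laws hold for every truth-value assignment.")
```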
denying a conjunct – A logical fallacy committed by arguments with the following form—“It’s not the case that both a-and-b. Not-a. Therefore, b.” This argument form is logically invalid. For example, “Socrates isn’t both a dog and a cat. Socrates isn’t a dog. Therefore, Socrates is a cat.”
denying the antecedent – An invalid argument with the form “if a, then b; not-a; therefore, not-b.” A counterexample is, “If all dogs are reptiles, then all dogs are mammals. It’s not the case that all dogs are reptiles. Therefore, it’s not the case that all dogs are mammals.”
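One way to see that the form is invalid is to search for a truth-value assignment that makes both premises true and the conclusion false. The sketch below does exactly that; the `implies` helper is a made-up name standing in for the material conditional.

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# Look for rows where "if a then b" and "not-a" are true but the conclusion "not-b" is false.
counterexamples = [(a, b) for a, b in product([True, False], repeat=2)
                   if implies(a, b) and (not a) and b]
print(counterexamples)  # [(False, True)] -- a counterexample exists, so the form is invalid
```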
deontic logic – A formalized logical system that uses “deontic quantifiers.”
deontic quantifier – A symbol used in formal logic to state when an action is obligatory (O), permissible (P), or forbidden (F). For example, “Op” means that “p” is obligatory.
deontology – Moral theories that state that there is something other than consequences that determines which actions are right or wrong; deontologists also reject virtue ethics (which is primarily concerned with what it means to be a good person rather than what actions are right or wrong). For example, see “Kant’s Categorical Imperative.” “Deontology” is often contrasted with “consequentialism.”
derivation – A formal proof of a proposition expressed in formal logic. A derivation can be described as a series of statements that are implied by rules of inference, axioms of a logical system, or other statements that have been derived by those two things. For example, a logical system could have an axiom that states “a or not-a” and have a rule of inference that states “a implies a or b.” In that case the following is a derivation—“a or not-a. Therefore, a or not-a or b.” See “axioms,” “rules of inference,” “logical system,” and “theorem” for more information.
descriptive – (1) Statements that help us understand the nature of things or aspects of reality. (2) Value-free information about the nature of things or reality. “Descriptive” is often contrasted with “prescriptive” or “evaluative.”
desire – Motivation or yearning. For example, a hungry person desires food. Desire is sometimes thought of only as motivation related to the body rather than as motivation caused by reasoning or ethical considerations. “Desire” can be contrasted to Immanuel Kant’s conception of “good will.”
desire-dependent reason – A reason for an action that depends on a desire. For example, a person who yearns to eat chocolate has a reason to eat chocolate. “Desire-dependent reasons” can be contrasted with “desire-independent reasons.”
desire-independent reason – A reason for action other than a desire. For example, John Searle argues that promises are desire-independent reasons. If you promise to do something, then you have a reason to do it, even if you don’t desire to do it. “Desire-independent reasons” can be contrasted with “desire-dependent reasons.”
destiny – (1) A fated course of events, which is generally thought to be fated due to a person having a certain purpose. For example, King Arthur could have been said to be destined to become a king insofar as he was meant to be a king and would become a king no matter what choices he made. (2) A probable future event involving a person’s purpose that could be willfully achieved, but could be avoided given resistance. Perhaps King Arthur was destined to become king and could make choices to become the king, but could have fought against his destiny and become a blacksmith instead.
determinism – The view that everything that happens is inevitable and couldn’t have been otherwise. Causal determinism is the view that the prior state of the universe and laws of nature were sufficient to cause later states of the universe. Determinism is not necessarily incompatible with the view that our decisions help determine what happens in the world, but the decisions we make could also be determined.
deterrence – A justification for punishment in terms of the fear that the punishment causes, which is meant to prevent crimes. Rational people are expected to choose not to commit crimes in order to avoid the punishment. For example, many people argue that the death penalty is a justified punishment for murder because it will deter people from killing.
deus – Latin for “god” or “divinity.”
deus ex machina – Latin for “god from the machine.” Refers to solving problems via miracles, or in unreasonable and simplistic ways.
dialectic – A process involving continual opposition and improvement. For example, Socratic dialectic occurs during a debate when hypotheses are presented, proven to be inadequate, then new and improved hypotheses are presented. Someone could claim that justice is refusing to harm people; and someone else could argue that sometimes it’s unjust to refuse to help someone, so justice can’t be sufficiently defined as merely refusing to harm people. A new claim could then be presented that defines justice as refusing to harm people and being willing to help people. One conception of dialectic is said to consist of at least one “thesis,” “antithesis,” and “synthesis.”
dialectical materialism – The view that economic systems face various problems and solutions are offered for those problems until they are replaced by an improved economic system. For example, slavery was replaced by feudalism, and feudalism was replaced by capitalism; and each of these systems faced fewer or less severe problems than those that existed previously. Dialectical materialists often think that communism is the ultimate economic system that will no longer face problems. See “dialectic” for more information.
dialetheism – The view that the “law of non-contradiction” is false—that contradictions can exist. If dialetheism is true, then a statement can be both true and false at the same time.
dictatorship – A political system defined by a single person who has the supreme power to rule.
difference principle – A principle of John Rawls’s theory of justice (i.e. “Justice as Fairness”) that requires that we only allow economic and social inequality if it benefits the least-well-off group of society. For example, many people believe that capitalism helps both the rich and the poor insofar as it motivates people to work hard to make more money (which could lead to economic prosperity), and the difference principle could be used to justify an unequal distribution of wealth assuming it can justify capitalism in this way.
Ding an sich – German for “thing in itself.”
discursive – (1) Involving “inferential reasoning.” (2) Rambling or discussing a wide range of topics.
discursive concept – According to Immanuel Kant, discursive concepts are general concepts known through inferential reasoning or experience rather than concepts known from a “pure intuition” (that don’t depend on experience or generalization). For example, the concept of the person is a discursive concept because we can only understand the concept of the person from having various experiences and generalizing from those experiences. “Discursive concepts” can be contrasted with “non-discursive concepts.”
discursive reasoning – A synonym for “inferential reasoning.”
disjunct – The first or second part of a disjunction. Disjunctions have the form “a or b,” so both “a” and “b” are disjuncts. Consider the disjunction, “either Socrates is a man or he’s a dog.” That disjunction has two disjuncts: (a) Socrates is a man and (b) Socrates is a dog.
disjunction – An either-or proposition. Disjunctions have the logical form “a or b.” The symbol for disjunction in symbolic logic is “∨.” An example of a statement using this symbol is “A ∨ B.” There are two kinds of disjunctions—the “inclusive or” and the “exclusive or.”
disjunctive syllogism – A valid argument form with the following form – “a or b; not a; therefore b.” For example, “Either all dogs are reptiles or all dogs are mammals. Not all dogs are reptiles. Therefore, all dogs are mammals.”
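For contrast with the invalid forms above, the same truth-table idea shows that this form has no counterexample. This is only an illustrative check using Python's Boolean operators, not part of the original entry.

```python
from itertools import product

# A form is valid if no truth-value assignment makes all premises true
# and the conclusion false. Check "a or b; not-a; therefore b".
counterexamples = [(a, b) for a, b in product([True, False], repeat=2)
                   if (a or b) and (not a) and not b]
print(counterexamples)  # [] -- no counterexample, so the form is valid
```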
dispreferred – See “suberogatory.”
distribution – (1) When a categorical statement applies to all members of a set or category. For example, the statement, “all cows are mammals,” distributes cows, but not mammals because it says something about all cows, but it doesn’t say anything about all mammals. (2) A rule of replacement that takes two forms: (a) “a and (b and/or c)” means the same thing as “(a and b) and/or (a and c).” (b) “a and/or (b and c)” means the same thing as “(a and/or b) and (a and/or c).” (“a”, “b,” and “c” stand for any three propositions.) For example, “all lizards are reptiles, and all lizards are either animals or living organisms” means the same thing as “either all lizards are both reptiles and animals, or all lizards are both reptiles and living organisms.” (3) The way something is given away. For example, “distributive justice.” (4) Statistical differences. For example, “probability distribution.”
distributive justice – The domain of economic justice concerned with how we should determine the allocation or distribution of goods, services, opportunities, and privileges. For example, laissez-faire capitalism distributes goods and services based on voluntary transactions. In general, people will conduct business to make money and use the money to buy other goods and services. However, some people believe that distributive justice demands that we engage in redistribution of wealth because they believe it would be unjust to allow people who have no money to suffer or starve to death.
divine command theory – The view that things are right or wrong because one or more gods commands us to behave a certain way (or favors us to behave a certain way). For example, murder is wrong because one or more gods commands us not to murder other people. Divine command theory requires us to reject the claim that there are rational criteria that determine right and wrong. For example, the divine command theorist might say that God commands us not to murder other people, but that God has no reason to command such a thing other than perhaps having various emotions. Many people reject divine command theory because of the “Euthyphro dilemma.” Many people believe that divine command theory is a form of “subjectivism” because right and wrong would merely describe the subjective states of one or more gods.
divine plan – (1) A course of events that were fated from a divinity. (2) A synonym for “divine providence.”
divine providence – The view that everything that happens in the universe is guided and controlled by a divinity. It is generally believed that the divinity controls the universe to make sure that better things happen than would happen otherwise. Sometimes it is believed that the divinity assures us that everything that happens is predestined and “for the best” (or at least “everything happens for a good reason”). It is often thought that divine providence is a logical consequence of the assumption that God exists and is all-good, all-knowing, and all-powerful; and it is often thought to conflict with our experiences of evil in the world. See “the problem of evil” for more information.
divinity – A god or godlike being. See “God,” “Demiurge,” “Monad,” “the Good,” or “Universal Reason.”
division – (1) See “fallacy of division.” (2) A mathematical operation based on a ratio or fraction. For example “4 ÷ 2 = 2.” (3) To split objects into smaller parts.
doctrine of the maturity of chances – (1) The false assumption that the past results of a random game will influence the future results of the game. For example, a person who loses at blackjack five times in a row might think that she is more likely to win if she plays another game. (2) See the “gambler’s fallacy.”
dogmatism – Close-mindedness. To be unwilling to change one’s mind even if one’s beliefs are proven to be unreasonable.
dominance – (1) See “stochastic dominance.” (2) Relating to having control over others.
double negation – (1) A rule of replacement that states that “a” and “not-not-a” both mean the same thing. (“a” stands for any proposition.) For example, “Socrates is a man” means the same thing as “it’s not the case that Socrates isn’t a man.” (2) A “double negative.” When it’s said that something isn’t the case twice. For example, “it’s not the case that Mike didn’t turn the TV on” means the same thing as “Mike turned the TV on.”
Downing effect – The tendency for people with below average IQ to overestimate their IQ, and for people with above average IQ to underestimate their IQ. This bias could be related to the “Dunning–Kruger effect.”
doxastic – Something that relates to beliefs or is a lot like a belief, such as judgment or desire.
doxastic logic – A formal logical system with modal operators for having various beliefs.
dualism – (1) The view that there are two fundamental different kinds of things, such as mind and matter. See “property dualism” and “substance dualism” for more information. (2) A binary opposition, such as that between good and evil.
due process – (1) Procedures and safeguards to protect our rights. For example, the right to a fair trial. (2) Rights that are needed for appropriate dispute resolution, such as the right to appeal, to defend oneself from accusations, and to protect oneself from unjustified harm or punishment.
duty – (1) What must be done. See “obligation.” (2) In Metaphysische Anfangsgründe der Tugendlehre, Immanuel Kant described “duty” as a normative continuum ranging from obligatory to heroic. Some philosophers believe that Kant always had this definition of duty in mind. (3) The concept of duty used by the Stoics was that of an appropriate action—actions that are rationally preferable. The Stoic concept of duty was not of something that must be done, as the term often implies in our day and age.
Dunning–Kruger effect – The cognitive bias defined by the tendency of unskilled people to overestimate how skilled they are because they don’t know about all the mistakes they make. This bias could cause many people to be overconfident concerning the likelihood that their beliefs are justified or true. This bias is likely related to the “Downing effect.”
E-type proposition – A proposition with the form “no a are b.” For example, “no cats are reptiles.”
economy – (1) A system involving the production of goods and services, and wealth distribution. See “capitalism” and “socialism” for more information. (2) Thrifty management.
efficient cause – That which makes something move around or makes things change. For example, the efficient cause of a billiard ball’s movement could be the event of another billiard ball that rolled into it.
egoism – Relating to oneself. See “ethical egoism” or “psychological egoism.”
eliminative materialism – (1) The view that the mind does not exist as many people think, and that the concepts of “folk psychology” (e.g. beliefs and desires) are inaccurate views of reality. The mind is understood instead as certain brain activity or functions. (2) The view that physics describes reality as it exists best and nothing outside of physics describes reality accurately. Eliminative materialism endorses a form of reductionism that requires us to try to find out the parts something is made out of to find out what it really is. For example, eliminative materialists tend to think that psychological activity is actually brain activity, and they are likely to reject the existence of “qualia.” “Eliminative materialism” requires us to reject “emergence.”
eliminative reductionism – The view that the ultimate reality is made up of small parts, like subatomic particles. We can find out what things really are by finding out what parts they are made of. For example, water is actually H2O (or whatever H2O is made of). The physicalist conception of “eliminative reductionism” is “eliminative materialism.”
eliminativism – See “eliminative materialism.”
emergence – (1) Epistemic emergence refers to our inability to know how to reduce one phenomenon into another. For example, chemistry is epistemically emergent insofar as we don’t know how to reduce it to physics—the laws of physics seem insufficient to predict the behavior of all chemical reactions. (2) Metaphysical emergence refers to when something is “greater than the sum of its parts” or the irreducible existence of a phenomenon that exists because of an underlying state of affairs. For example, some scientists and philosophers think that the mind is an emergent phenomenon that exists because of brain activity, but the mind is not the same thing as brain activity.
emanation – How lower levels of existence, such as physical reality, flow from and depend on an ultimate eternal being. Those who believe in emanation tend to think that the ultimate reality is God or “the Good.” Emanation is the idea that creation is ongoing and eternal rather than out of nothing. In that sense the physical universe has always existed.
emanationism – The view that reality as we know it exists from emanation—all of existence as we know it depends on and constantly flows from an ultimate eternal being. See “emanation” for more information.
emotivism – An anti-realist noncognitive metaethical theory that states that moral judgments are emotional expressions. For example, saying, “The death penalty is immoral” actually expresses one’s preference against the death penalty and it means something like saying, “The death penalty, boo!” Although moral judgments express emotions on this view, the emotions we express when we make moral judgments don’t have to actually be experienced by anyone.
empirical – Evidence based on observation.
empirical apperception – According to Immanuel Kant, this is the consciousness of an actual self with changing states or the “inner sense.” “Empirical apperception” can be contrasted with “transcendental apperception.”
empirical intuition – Intuitive justification that is based on a person’s background knowledge concerning observation (empirical evidence). It can be difficult for a person to explain why she finds various beliefs to be plausible even when they are based on her observations, and she can say that those beliefs are “intuitive” as a result. For example, it was intuitive for many early scientists to expect objects that fall from a moving surface (such as a sailing ship) to continue moving in the same direction they were already moving, and we have confirmed that belief to be true (depending on various other factors). This belief is now a rational expectation based on the law of inertia (Newton’s First Law of motion)—an object at rest stays at rest and an object in motion stays in motion with the same speed and direction unless it is acted upon by an outside force.
empiricism – The philosophical belief that all knowledge about the world is empirical (based on observation). Empiricists believe that we can know what is true by definition without observation, but that beliefs about the world must be based on observation. Empiricists reject innate ideas, noninferential reasoning, and self-evidence as legitimate sources of knowledge.
end in itself – Something that should be valued for its own sake. See “final end.”
endurance theory – See “endurantism.”
endurantism – The view of persistence and identity that states that a persisting thing is entirely present at every moment of its existence. Endurantists believe that things can undergo change and still be the same thing. For example, a single apple can be green and then turn red at a later time. Endurantists believe that persisting things have spatial parts, but they don’t have temporal parts. See “temporal parts” for more information. “Endurantism” is often contrasted with “perdurantism.”
endure – (1) For a single thing to fully exist at any given moment in time, and to continue to exist at different moments in time. Things that endure could undergo various changes, but are not considered to be “different things” as a result. For example, an apple can be green at an earlier point in time, and it can turn red at a later point in time. See “endurantism” for more information. (2) To survive adversity or to continue to exist despite being changed. (3) To tolerate an attack or insult.
entailment – (1) A logical implication that is properly relevant or connected. For example, “if all dogs are mammals, then Socrates is a man” is true, according to classical logic, but it is counterintuitive and could even be considered to be false in ordinary language. “Relevance logic” is an attempt to make better sense out of how implications should be properly connected as ordinary language requires them to be. (2) A valid logical implication. The premises entail the conclusions of valid arguments.
enthymeme – (1) A categorical syllogism with an unstated premise. For example, “all acts of abortion are immoral because all fetuses are persons.” In this case the missing premise could be “all acts of killing people are immoral.” (2) Any argument with an unstated premise or conclusion. For example, “all fetuses are people and all acts of killing people are immoral” has the unstated conclusion “all acts of abortion are immoral.”
entity – A phenomenon, being, part of reality, or thing that exists.
eon – See “æon.”
epicureanism – (1) The philosophy of Epicurus who thought that everything is physical, that pleasure is the only good, pain is the only evil, and that gods don’t care about human affairs. (2) The view or attitude that mindless entertainment and pleasures are more important than intellectual or humanitarian pursuits.
epiphenomenalism – The view that psychological phenomena have no effect on nonpsychological physical phenomena. If epiphenomenalism is true, then our thoughts and decisions could be a byproduct of a brain and be incapable of making any difference to the motions of our body. For example, stopping pain would never be a reason that we actually decide to see the dentist when we have a cavity. Instead, the brain might fully determine that we go to the dentist based on the physical motion of particles.
epistêmê – Greek for “theoretical knowledge.”
epistemic anti-realism – The view that there are no facts relating to rationality or justification (other than what is true based on our mutual interests or collective attitudes). For example, we might say that it’s true that believing “1+1=3” is unjustified and irrational, but anti-realists might say that we merely tend to dislike the concept of some people believing such a thing, and this mutual interest led to talk concerning what we ought not believe.
epistemic certainty – The degree of justified confidence we have in our beliefs. To be certain that something is true could mean (a) that we have a maximal degree of justification for that belief, (b) that we can’t doubt that it’s true, or (c) that it’s impossible for the belief to be false. To be absolutely certain that something is true is to have no chance of being wrong. For example, we are plausibly absolutely certain that “1+1=2.”
epistemic externalism – (1) The view that proper justification (or knowledge) could be determined by factors that are external to the person. For example, reliabilists think that a belief is only justified if it’s formed by a reliable process (e.g. scientific experimentation). (2) The view that a person does not always have access to finding out what makes her beliefs justified. For example, we think we know that induction is reliable, but we struggle to explain how we could justify such a belief with an argument. (3) The view of justification as being something other than the fulfillment of our intellectual duties. For example, beliefs could be justified if they are more likely true than the alternatives. Newton’s theory of physics was unable to predict the motion of Mercury around the Sun, but Einstein’s theory of physics was able to, so that one consideration seems to imply that Einstein’s theory is more likely true or accurate.
epistemic internalism – (1) The view that proper justification (or knowledge) can only be determined by factors that are internal to the person. For example, “mentalism” states that only mental states determine if a belief is justified. (2) The view that a person can become aware of what makes her beliefs justified through reflection. For example, everyone who knows “1+1=2” can reflect on it to find out how their belief is justified, or else they don’t really know it after all. (3) The view that justification concerns the fulfillment of our intellectual duties. For example, justification would require that we fulfill the duty not to contradict ourselves.
epistemic intuitionism – The view that we can justify various beliefs using intuition, and it’s generally a form of rationalism.
epistemic modality – The distinction between what is believed and what is known. Moreover, epistemic modality can involve the degree of confidence a belief warrants. For example, we know that more than three people exist and we are highly confident that this belief is true. We communicate epistemic modality through terms and phrases, such as “probably true,” “rational to believe,” “certain that,” “doubt that,” etc.
epistemic naturalism – The view that natural science (or the methods of natural science) provides the only source of factual knowledge. Knowledge of tautologies or what’s true by definition is not relevant to epistemic naturalism. “Epistemic naturalism” is often only used to refer to one specific field. For example, one could be an epistemic naturalist regarding morality, but not one regarding logic. A moral epistemic naturalist would think that we can learn about morality through natural science (or the same methods used by natural science). See “empiricism” for more information.
epistemic objectivity – Beliefs that are reliably justified (e.g. through observation or the scientific method) or justified via a process that can be verified by others using some agreed-upon process. In this case the existence of laws of nature would be objective, but the existence of a person’s pain might not be. “Epistemic objectivity” can be contrasted with “epistemic subjectivity.”
epistemic randomness – When something happens that is not reliably predictable. For example, when we roll a six-sided die, we don’t know what number will come up. We say that dice are good for attaining random results for this reason. “Epistemic randomness” can be contrasted with “ontological randomness.”
epistemic realism – The view that there is at least one fact of rationality or justification that does not depend on a social construction or convention. Epistemic realists often think there are certain things people should believe and that people are irrational if they disagree. For example, it is plausible that we should agree that “1+1=2” because it’s a rational requirement.
epistemic relativism – Also known as “relativism of truth.” The view that what is true for each person can be different. For example, it might be true for you that murder is wrong, but not for someone else. Relativism seems to imply that philosophy is impossible because philosophers want to discuss reality and what’s true for everyone. Relativism is often said to be self-defeating because it makes a claim about everything that’s true—but that implies that relativism itself is relative.
epistemic state – A psychological state related to epistemology, such as belief, degrees of psychological certainty, perception, or experience.
epistemic subjectivity – Beliefs that are unreliably justified or beliefs that can’t be verified by others through some agreed-upon process. For example, a plausible example is believing something is true because it “feels right.” “Epistemic subjectivity” can be contrasted with “epistemic objectivity.”
epistemic utility theory – The view that we should determine our epistemic states (e.g. beliefs) based on which epistemic states we value the most. Epistemic states could be said to be “rational” when they are states we significantly value more than the alternatives, and “irrational” when they are states we significantly value less than the alternatives. For example, the epistemic state of believing that jumping up and down is the best way to buy doughnuts is significantly worse than the alternatives, so believing such a thing is irrational.
epistemic vigilance – Attributes and mechanisms that help people avoid deception, manipulation, and confusion. For example, we intuitively tend not to trust claims that seem to be “too good to be true” from people who want to sell us something, which helps us stay vigilant against those who want to manipulate us.
epistemology – The philosophical study of knowledge, rationality, and justification. For example, empiricism is a very popular view of justification, and the scientific method is generally a reliable source of knowledge.
equivalence – A rule of replacement that takes two forms: (a) “a if and only if b” means the same thing as “if a, then b; and if b, then a.” (b) “a if and only if b” means the same thing as “(a and b) and/or (not-a and not-b).” (“a” and “b” stand for any two propositions.) For example, “Socrates is a rational animal if and only if Socrates is a person” means the same thing as “Socrates is a rational animal and a person, or Socrates is not a rational animal and not a person.”
equivocation – A fallacy that is committed when an argument requires us to use two different definitions for an ambiguous term. For example, someone could argue that everyone has a family tree and trees are tall woody plants, so everyone has a tall woody plant. See “ambiguity” for more information.
equivocal – Ambiguous words or statements (that could have more than one meaning or interpretation). For example, the word “social” can refer to something “socialistic” (e.g. social programs) or to something that has to do with human interaction (e.g. being social by spending time talking to friends).
error theory – (1) The view that states that all moral statements are literally false because they don’t refer to anything, even though moral statements are meant to refer to facts. Nothing is right or wrong, nothing has intrinsic value, and no one is virtuous or vicious. Error theory has been criticized for being counterintuitive. For example, the error theorist would say that it’s false that “murder is wrong.” However, error theorists can endorse “fictionalism” or continue to make moral statements for some other reason. (2) Any theory that requires us to reject the view that concepts of some domain refer to facts, but require us to agree that statements within that domain are meant to relate to facts. For example, an error theorist could reject all psychological facts and say that all psychological statements that we think refer to facts are false. It would then be false that “some people feel pain.”
essence – The defining characteristics of an entity or category. Aristotle argued that objects and animals have an essence. Aristotle’s understanding of essence is a lot like Platonic Forms except he considers it to be part of the object or animal rather than an eternal and immaterial object outside of space and time. For example, Aristotle says that the essence of human beings is “rational animal,” so human beings wouldn’t be human beings if they lacked one of these defining characteristics.
essential characteristic – Characteristics that are necessary to be what one is. For example, Aristotle argues that an essential characteristic of Socrates is that he is capable of being rational because that is essential to being a human—if Socrates is not capable of being rational, then he is not a person. “Essential characteristics” are the opposite of “accidental characteristics.”
essentialism – The view that types of entities can be defined and distinguished using a finite list of characteristics. See “essence” for more information.
eternal return – The view that events will repeat themselves exactly as they occur now over and over ad infinitum in the future. Every person will live again and they will live the exact same life on and on forever. Sometimes the eternal return is presented as a possibility given that the universe has finite possibilities and infinite time.
ethical egoism – The view that people should only act in their rational self-interest. For example, an ethical egoist might believe that a person shouldn’t give money to the poor if she can’t expect to be benefited by it in any way.
ethical libertarianism – See “political libertarianism.”
ethics – The philosophical study of morality. Ethics concerns when actions are right or wrong, what has value, and what constitutes virtue.
etymological fallacy – A fallacy committed by an argument when a word is equivocated with another word it’s historically derived from. For example, “logic” is derived from “logos,” which literally meant “word.” It would be fallacious to argue that “logic” is the study of words just because it is historically based on “logos.”
eudaimonia – Greek for “happiness” or “flourishing.”
eudaimonism – Ethical theories concerned with happiness or flourishing. Eudaimonist theories of ethics tend to be types of “virtue ethics.” Eudaimonists tend to argue that we should seek our happiness (or flourishing), and that virtue is a necessary condition of being truly happy or flourishing. Socrates, Aristotle, Epicurus, and the Stoics are all examples of “eudaimonists.”
Euthyphro dilemma – A problem concerning whether something is determined by the interest of one or more gods or whether the interest of one or more gods is based on rational criteria. The Euthyphro dilemma was originally found in a Socratic dialogue called the Euthyphro where Socrates asked if what is pious was pious because the gods liked it or if the gods liked pious things because they were worthy of being liked. Now the “Euthyphro dilemma” is generally used to refer to what is right or wrong—is what is right only right because God likes it, or does God like what is right because of some rational criteria? Many people take this dilemma as a good reason to reject “divine command theory” and to think that what is right or wrong is based on rational criteria. If God exists, then perhaps God likes what is right because of the rational criteria.
evaluative – Concerning the value of things. Statements, judgments, or beliefs that refer to values. For example, “human life is intrinsically good” is an evaluative judgment.
evidence – See “justification.”
evidentialism – The view that beliefs are only justified if and when we have evidence for them.
ex nihilo – A Latin phrase meaning “out of nothing.” It is generally used to refer to the idea that something could come into existence from nothing, and such an idea is often said to conflict with the scientific principle known as the “conservation of energy.”
exclusive or – An “or” statement that requires that exactly one of two propositions is true. It would be impossible for both propositions to be true. The form of the exclusive or can be said to be “either a or b, and not both a-and-b.” For example, “either something exists or nothing exists.” It would be impossible for both to be true or for neither to be true. The “exclusive or” is often contrasted with the “inclusive or.”
exclusive premises – A fallacy committed when categorical syllogisms have two negative premises. There are no logically valid categorical syllogisms with two negative premises. For example, “No dogs are fish. Some fish are not lizards. Therefore, no dogs are lizards.”
existential quantifier – A term or symbol used to say that something exists. For example, “some” or “not all” are existential quantifiers in ordinary language. “Some horses are mammals” means that at least one horse exists and “not all horses are male” means that there is at least one horse that is not a male. The existential quantifier in symbolic logic is “∃.” See “quantifier” for more information.
exportation – A rule of replacement that states that “if a and b, then c” means the same thing as “if a, then it’s the case that if b, then c.” (“a”, “b,” and “c” stand for any three propositions.) For example, “if Socrates is both a mammal and an animal, then Socrates is a living organism” means the same thing as “if Socrates is a mammal, then it’s the case that if Socrates is an animal, then Socrates is a living organism.”
extension – What a term refers to. For example, the “morning star” and “evening star” both have the same extension. “Extension” is often contrasted with “intension.”
extensionality – Extensionality is concerned with the reference of words. For example, “the morning star” and “the evening star” both refer to Venus, so they both have the same extensionality. “Extensionality” is often contrasted with “intensionality.” Also see “sense” and “reference” for more information.
existential fallacy – A fallacy that is committed by an argument that concludes that something exists based on the fact that something is true of every member of a set. The form of the existential fallacy is generally “all A are B. Therefore, some A are B.” For example, “All unicorns are horses. Therefore, there is a unicorn and it’s a horse.” Another example is, “All trespassers on this property will be fined. Therefore, there is a trespasser on this property who will be fined.”
existential import – The property of a proposition that implies that something exists. For example, Aristotle thought that the proposition “all animals are mammals” implied that “at least one mammal exists.” However, many logicians now argue that propositions of this type do not have existential import. See the “existential fallacy” for more information.
existentialism – A philosophical domain that focuses on the nature of the human condition. What it’s like to be a human being and what it means to live authentically are also of particular interest. Existentialist philosophers often argue (a) for the meta-philosophical position that philosophy should be a “way of life” as opposed to technical knowledge or essay writing; (b) that each person is ultimately “on their own;” (c) for the view that people should re-examine their values rather than rely on evaluative beliefs passed on by others; (d) that we have no essence, so we need to determine our purpose (or “essence”) through our actions; and (e) that being a human being is characterized by absolute freedom and responsibility.
explaining away – To reveal a phenomenon to not exist after all. To “explain away” a phenomenon is often counterintuitive, inconsistent with our experiences, or insensitive to our experiences. For example, the claim that beliefs and desires don’t actually exist because psychological phenomena are actually just brain activity of some sort would be counterintuitive and would conflict with our experiences. Anyone who makes this claim should tell us why we seem to experience that beliefs and desires exist, and why these concepts are convenient when we want to understand people’s behavior. See “eliminative reductionism” for more information. “Explaining away” can be contrasted with “saving the phenomena.”
explanans – A statement or collection of statements that explain a phenomenon. For example, “people who are hungry generally eat food” is an explanans and can explain why John ate two slices of pizza. “Explanans” is often contrasted with “explanandum.”
explanandum – A phenomenon that’s explained by a statement or series of statements. For example, the fact that John ate two slices of pizza is an explanandum, and we can explain that phenomenon by the fact that “people who are hungry generally eat food.” “Explanandum” is often contrasted with “explanans.”
explicit knowledge – Knowledge that enters into one’s consciousness and can be justified through argumentation by those who hold it. For example, scientists can explain how they know germs cause disease because they have explicit knowledge about it. “Explicit knowledge” is often contrasted with “tacit knowledge.”
expressibility – The ability of a logical system to express the meaning of our statements. For example, consider the argument, “all humans are mammals; all mammals are animals; therefore, all humans are animals.” According to propositional logic, this argument has the form “A; B; therefore, C” and it would determine this argument to be logically invalid as a result. However, predicate logic is better able to capture the meaning of these statements and it can prove that the argument is logically valid after all. Therefore, predicate logic is expressively superior to propositional logic given this one example. See “valid argument” for more information.
expressive completeness – A logical system is expressively complete if and only if it can state everything it is meant to express. For example, a system of propositional logic with connectives for “and” and “not” is expressively complete insofar as it can state everything any other connective could state. You can restate “A and/or B” as “it’s not the case that both not-A and not-B.” (“Hypatia is a mammal and/or a mortal” means the same thing as “it’s not the case that Hypatia is both a non-mammal and a non-mortal.”) See “expressibility” and “logical connective” for more information.
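To illustrate the claim about “and” and “not,” the sketch below defines an inclusive “or” using only those two connectives and confirms that it agrees with the built-in “or” on every assignment; the function name is invented for the example.

```python
from itertools import product

def or_from_not_and(a, b):
    # "a and/or b" restated as "it's not the case that both not-a and not-b".
    return not ((not a) and (not b))

for a, b in product([True, False], repeat=2):
    assert or_from_not_and(a, b) == (a or b)

print("'or' is expressible using only 'not' and 'and'.")
```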
extension – To exist in space and time, and to take up space. To have a body or physical shape.
externalism – See “epistemic externalism,” “motivational externalism,” or “semantic externalism.”
externalities – Unintended positive or negative effects on third parties by business transactions. For example, pollution is a negative externality caused by many business transactions. Many people who oppose having a free market without regulation argue that it would be unfair for third parties who are harmed by externalities to not be compensated, and that compensation might not be feasible without regulations.
extrinsic value – A type of value other than “intrinsic value.” For example, “instrumental value” and “inherent value” are types of extrinsic values. Sometimes we say that an action is extrinsically good insofar as it is instrumental for some intrinsic good. For example, eating food is generally good because it often helps us live happier or more pleasurable lives. However, eating food is extrinsically good rather than intrinsically good because eating food without some relation to pleasure, health, or happiness has no value.
fact – (1) A state of affairs, relation, or part of reality that makes a statement true. For example, it’s true that objects fall and will continue to fall because it’s a fact that “the law of gravity exists” and accurately describes reality. (2) A statement that is known to be true—at least by the experts. Facts of this sort are often contrasted with insufficiently justified “opinions.” (3) According to scientists, facts are observations (empirical evidence).
fact/value gap – Some philosophers believe that facts and values are completely different domains that can’t overlap. The fact/value gap refers to the distance they believe these two domains are from each other. This gap is especially important to people who believe that evaluative statements reflect our desires or preferences rather than facts.
factual truth – Statements are factually true when they properly relate to facts or reality. For example, “something exists” is an uncontroversial example of a factually true statement. “Factual truth” can be contrasted with “nonfactual truth.”
faculty – (1) The ability to do something. For example, people have the faculty for rational thought. (2) The teachers and professors of a school or university.
fallacy – An error in reasoning. Formal fallacies are committed by invalid arguments and informal fallacies are committed by errors in reasoning of some other kind.
fallacy fallacy – (1) See “argumentum ad logicam.” (2) A type of fallacy committed by an argument that falsely claims another argument commits a certain fallacy. For example, Lisa could argue that “Sam is an idiot for thinking that only two people exist. We have met many more people than that.” Sam could then respond, “You have committed the ad hominem fallacy. My belief should not be dismissed, even if I am an idiot.” In this case Lisa’s argument does not require us to believe that Sam is an idiot. It is an insult, but it can be separated from her actual argument.
fallacy of composition – A fallacy committed by an argument that falsely assumes that a whole will have the same property as a part. For example, “molecules are invisible to the naked eye. We are made of molecules. Therefore, we are invisible to the naked eye.” The “fallacy of composition” is often contrasted with the “fallacy of division.”
fallacy of division – A fallacy committed by an argument that falsely assumes that a property that a whole has will also be a property of the parts. For example, “We can see humans with the naked eye. Humans are made of molecules. Therefore, we can see molecules with the naked eye.”
fallacy of the consequent – A synonym for “affirming the consequent.”
fallible – Beliefs or statements that possibly contain errors or inaccuracies.
fallibilism – The view that knowledge does not require absolute certainty or justifications that guarantee the truth of our beliefs. “Fallibilism” is the opposite of “infallibilism.”
false – A proposition that fails to be true, such as “1+1=3.” Propositions, statements, and beliefs can be false. “False” is the opposite of “true.”
false analogy – A synonym for “weak analogy.”
false cause – A synonym for “non causa pro causa.”
false conversion – A synonym for “illicit conversion.”
false dichotomy – A synonym for “false dilemma.”
false dilemma – A fallacious argument that requires us to accept fewer possibilities than there plausibly are. For example, we could argue the following—“All animals are mammals or lizards; sharks are not mammals; therefore, sharks are lizards.” False dilemmas are related to the “one-sidedness” fallacy and generally use the logical argument form known as the “disjunctive syllogism.”
false positive – A positive result that gives misleading information. For example, to test positive for having a disease when you don’t have a disease. Let’s assume that 1 of 1000 people have Disease A. If a test is used to detect Disease A and it’s 99% accurate, then it will probably detect that the one person has the disease, but it will also probably have around ten false-positive results—it will probably state that ten people have Disease A that don’t actually have it.
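The arithmetic behind the example can be made explicit. The sketch below assumes that “99% accurate” means a 1% error rate for both sick and healthy people, a simplifying assumption made for illustration rather than something stated in the entry.

```python
# 1 in 1000 people have Disease A; the test errs 1% of the time (assumed for both groups).
population = 1000
sick = population * (1 / 1000)      # 1 person actually has the disease
healthy = population - sick         # 999 people do not
true_positives = sick * 0.99        # about 1 correct positive result
false_positives = healthy * 0.01    # about 10 positives for people without the disease

print(round(true_positives, 2), round(false_positives, 2))  # 0.99 9.99
```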
false precision – See “overprecision.”
falsifiability – The ability to reject a theory or hypothesis based on rational criteria. For example, Newton’s theory of physics was rejected on the basis of having more anomalies than an alternative—Einstein’s theory of physics.
falsification – To falsify a theory—to prove it false, likely to be false, or worthy of being rejected based on some rational criteria.
falsificationism – The view that scientific theories and hypotheses can be distinguished from pseudoscientific ones insofar as scientific theories and hypotheses can be falsified—they can be rejected on the basis of rational criteria. In particular, hypotheses can be rejected if they conflict with our observations more than the alternatives. For example, we could hypothesize that all swans are white and that hypothesis would be falsifiable because a single non-white swan would prove it to be false.
fast track quasi-realism – An attempt to make sense out of moral language (such language involving moral facts, mind-independence, and moral truth) without endorsing moral realism by explaining how all such moral language can be coherent without moral realism. “Fast track quasi-realism” is often contrasted with “slow track quasi-realism.” See “quasi-realism” for more information.
fate – (1) A fated event is an inevitable event that will occur no matter what we do. For example, every choice we make will lead to our death; so it’s plausible to think we are all fated to die. Sometimes it’s thought that a fated event is inevitable because of a divine influence. Fate can be said to be a separate concept from “determinism” in that a determinist does not necessarily think that everything that happens will happen no matter what choices we make. (2) “Fate” is another term the Stoics used for the concept of “Universal Reason.” (3) In ordinary language, “fate” is often synonymous with “destiny.”
faulty analogy – A synonym for “weak analogy.”
feminism – The view that women should be treated as equals to men, that they have been systematically treated unjustly, that we should demand greater justice for women, and that we should combat sexism.
fictionalism – (1) The meta-ethical view that moral judgments refer to a fictional domain, and moral statements can be true or false depending on whether or not they accurately refer to the fictional domain. According to fictionalism, it would be true that “murder is wrong,” but only insofar as people agree that it’s wrong (perhaps because a social contract says so). (2) Any domain where statements are meant to refer to a fictional domain. Statements within that domain are true or false depending on whether or not they accurately refer to the fictional domain. For example, we find it intuitive to say that it’s true that unicorns are mammals and that Sherlock Holmes is a detective.
final cause – The purpose of a thing, action, or event. The final cause of scissors is to cut, and Aristotle thought that the final cause of human beings is to use (and improve) their capacity to reason.
final end – Something we psychologically accept to be worthy of desire or valuable for its own sake. For example, money is not a final end, but happiness is often said to be one. If someone asks why you need money, you might need to explain what you will do with the money to justify the need, but “happiness” seems to be worthy of desire without an additional justification. Final ends are often said to be important because if there are no final ends, then there seems to be nothing that makes a decision more ethical than another. A person who wants to get money to get food, wants food to live longer, and wants to live longer just to get money seems to be living a meaningless life. None of these goals are in any sense worthy on their own.
first-person point of view – The perspective of a person as having a unified field of experience that brings together various experiences within a single perspective (e.g. one that can be used to experience sight and touch at the same time). The first-person point of view is also often said to be unified in time—we experience things only because there’s a before and after. If we didn’t experience that our experiences are unified in time, then we could not observe objects moving and we couldn’t even experience that an object is “the same object it was earlier.” The “first-person point of view” is often contrasted with the “third-person point of view.”
folk psychology – The everyday or common sense understanding of psychology involving the concepts of “belief” and “desire.” Some philosophers argue that “folk psychology” is false and we should only examine brain activity to know what facts of psychology are really about.
forbidden – A synonym for “impermissible.”
formal cause – The reason that something exists and/or the properties something will have if it perfects itself. For example, a seed’s formal cause is to transform into a plant. Some philosophers argue that “formal causes” are identical to “final causes.”
formal fallacy – An error in reasoning committed by a logically invalid argument. For example, any argument with the form “if a, then b; b; therefore, a” is logically invalid.
formal language – Languages that are devoid of semantics, such as the languages used for formal logic. See “formal logic” and “formal system” for more information. “Formal language” can be contrasted with “natural language.”
formal logic – Logic concerned with logical form, validity, and consistency. “Formal logic” is often contrasted with “informal logic.” See “logical form” for more information.
formal semantics – A domain concerned with the interpretations of formal propositions. For example, “A and B” could be interpreted as “all lizards are reptiles and all dogs are mammals.” (“A” and “B” each represent specific propositions.) See “interpretation,” “translation,” “models,” and “schemes of abbreviation” for more information.
formal system – A syntax-based system generally used for logic or mathematics. Formal systems require people to follow rules and manipulate symbols in order to try to prove something in logic or mathematics, and no subjective understanding of the words are required.
formalism – The view in philosophy of mathematics that mathematics is nothing more than a set of rules and symbols. Mathematics as it exists in computers is an entirely formal system in this way. Some philosophers argue that there is more to mathematics than what computers do.
forms – See “Platonic Forms.”
foundational – The starting point or building blocks that everything else depends on. For example, a foundational belief can be justified without inferential reasoning or argumentation. The axioms of logic are a plausible example of foundational beliefs.
foundationalism – The view that there are privileged or axiomatic foundational beliefs that need not be proven. The source of privileged beliefs could be self-evidence, non-inferential reasoning, or non-empirical intuitive evidence. Foundationalism is one possible solution to the problem of justification requiring an infinite regress or circular reasoning. If everything we know needs to be justified from an argument, then we need to prove our beliefs using arguments on and on forever, or we need to be able to justify beliefs with other beliefs in a circular mutually supportive fashion. However, foundationalism requires us to reject that everything we know must be justified with inferential reasoning.
foundherentism – A view that we should accept a view that combines elements of both foundationalism and coherentism. Foundherentism uses various experiences or observations as a foundational origin of belief, but it (a) allows foundational beliefs to be mutually supportive, and (b) allows us to reject foundational beliefs that are inconsistent with the others. Foundherentism is one way we can try to correct potentially inaccurate beliefs that are based on theory-laden observations (that what we observe is interpreted by us and our assumptions shape how we interpret them). The fact that observations are theory-laden seems to imply that beliefs based on our observations are fallible.
free logic – Formal logical systems that can discuss objects or categories without requiring us to assume that the objects or categories exist.
free rider – Someone who benefits from a collective action between people without doing the work done by the others. For example, a person could benefit from laws against pollution but refuse to abide by those laws to increase the profit of her company.
function – (1) The purpose of something. For example, the function of a knife is to cut things. (2) A synonym for “operation.”
functionalism – Theories of philosophy of mind that state that psychological states are identical with (or “constituted by”) some functional role played within a system (such as a brain). According to many functionalists, both a machine and a human brain might have the same psychological states as long as they both have the same functional activities.
fuzzy logic – (1) Logical systems that use degrees of truth concerning vague concepts. For example, some people are more bald than others. Someone with no hair at all would be truthfully said to be bald, but someone with only a little hair might be accurately described as being bald (with a lesser degree of truth involved). (2) In ordinary language, “fuzzy logic” often refers to poorly developed logical reasoning.
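A minimal Python sketch of this idea (added here for illustration; the function and its numbers are invented, not taken from the glossary) assigns a degree of truth to “this person is bald” based on a hair count:

    def degree_bald(hair_count, max_hair=100_000):
        # 1.0 means "bald" is fully true, 0.0 means it is fully false; the scale is made up.
        return max(0.0, 1.0 - hair_count / max_hair)

    print(degree_bald(0))        # 1.0: someone with no hair at all
    print(degree_bald(5_000))    # 0.95: "bald" with a high degree of truth
    print(degree_bald(100_000))  # 0.0: a full head of hair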
gambler’s fallacy – Fallacious reasoning based on the assumption that the past results of a random game will influence the future results of the game. For example, you commit this fallacy if you toss a coin, get heads twice in a row, and conclude that you are more likely to get tails if you keep playing. Gamblers who lose a lot of money often have this assumption when they make the mistake of thinking that they are more likely to start winning if they keep playing the same game.
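A rough Python simulation (an illustration added here, not part of the original entry; the fair-coin assumption is stipulated) shows that a run of heads does not change the chance of the next flip:

    import random

    random.seed(0)  # fixed seed so the illustration is repeatable
    trials = 100_000
    flips = [random.random() < 0.5 for _ in range(trials)]  # True means heads

    opportunities = 0
    heads_after_two_heads = 0
    for i in range(2, trials):
        if flips[i - 2] and flips[i - 1]:  # the previous two flips were both heads
            opportunities += 1
            if flips[i]:
                heads_after_two_heads += 1

    # The ratio stays close to 0.5: past results don't influence the next flip.
    print(heads_after_two_heads / opportunities)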
genetic fallacy – A fallacy committed by arguments that conclude something solely based on the origin of something else. For example, it would be fallacious to argue that someone will be a Christian just because her parents were Christians; or that someone’s belief in evolution is unjustified just because her belief originated from casual conversations rather than from an expert.
god – A god is a very powerful being. Some people believe gods to be eternal and unchanging beings that created the universe, but others think gods to be part of (or identical to) the universe. Theists believe at least one god exists and atheists believe that no gods exist. Monotheists think that only one god exists, polytheists think more than one god exists, and pantheists think god is identical to the universe. Traditional monotheists often believe that God is all-good, all-powerful, all-knowing, and existing everywhere.
golden mean – Aristotle’s concept of virtues as being somewhere between two extremes. For example, moderation is the character trait of wanting the right amount of each thing, and it’s between the extremes of gluttony and an extreme lack of concern for attaining pleasure.
golden rule – A moral rule that states that we ought to treat other people how we want to be treated. For example, we generally shouldn’t punch other people just because they make us angry, insofar as we wouldn’t want them to do that to us.
The Good – Plato’s term for the Form of all Forms. It is the ultimate being that all other types of reality depend on for their existence, and it is the ultimate ideal that determines how everything should exist. The Good is thought to be nonphysical and eternal. It’s also known by Neoplatonists as the “One” or the “Monad.” See “Plato’s Forms” and “emanation” for more information.
good argument – An argument that’s rationally persuasive. A popular example of a good argument is “Socrates is a man. All men are mortal. Therefore, Socrates is mortal.” Good arguments give us a good reason to believe a conclusion is true. If an argument is sufficiently good, then we should believe the conclusion is true. Ideally good arguments rationally require us to believe the conclusion is true, but some arguments might only be good enough to assure us that a belief is compatible with rationality. The criteria used to determine when an argument is “good” are studied by logicians and philosophers.
good will – (1) To have good intentions. (2) According to Immanuel Kant, good will is being rationally motivated to do the right thing.
grandfather’s axe – A thought experiment about an axe for which all the parts have been replaced over time. The question is whether or not it’s the same axe.
greatest happiness principle – The moral principle that states that we ought to do what will lead to the greatest good for the greatest number. In this case “goodness” is equated with “happiness” and “harm” is equated with “suffering.” So, the greatest happiness principle states that we ought to maximize happiness and minimize suffering for the greatest number of people. We can judge moral actions as right and wrong in terms of how much happiness and suffering the action will cause. Actions are right insofar as they maximize happiness and reduce suffering, and wrong insofar as they maximize suffering and reduce happiness. See “utilitarianism” for more information. John Stuart Mill’s utilitarian notion of the “greatest happiness principle” is meant to be contrasted with Jeremy Bentham’s form of utilitarianism insofar as Mill believes that there are higher and lower qualities of pleasure (unlike Bentham). In particular he believes that intellectual pleasures are of a higher quality and value than bodily pleasures. To emphasize this view, Mill said it is “better to be Socrates dissatisfied than a fool satisfied.”
grouping – In logic, grouping is used to make it clear how logical connectives relate to various propositions, and parentheses are often used. For example, “A or (B and C)” groups “B and C” together. In this case “A” can stand for “George Washington was the first president of the United States,” “B” can stand for “George Washington is a mammal,” and “C” can stand for “George Washington is a lizard.” In this case the statement can be interpreted as “George Washington was the first president of the United States, or he is both a mammal and a lizard.” This can be contrasted with the statement “(A or B) and C,” which would be interpreted as “George Washington is either the first president of the United States or a mammal, and he’s a lizard.”
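A minimal Python sketch (added for illustration; the truth values are the obvious ones for the George Washington example) shows that the two groupings are evaluated differently:

    A = True    # "George Washington was the first president of the United States"
    B = True    # "George Washington is a mammal"
    C = False   # "George Washington is a lizard"

    print(A or (B and C))   # True: A alone makes the whole statement true
    print((A or B) and C)   # False: C is false, so the conjunction is false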
grue – A theoretical color that currently looks green, but will look blue at some later point in time. Grue is used to illustrate a problem of induction—how do we know emeralds will be green in the future when they might actually be grue? They might appear green now and then appear blue at some later point.
guilt by association – A synonym for “association fallacy.”
gunk – Any type of stuff that can be indefinitely split into smaller pieces. Gunk can be made of smaller parts without an indivisible or indestructible “smallest part” (i.e. atom). Philosophers speculate whether or not everything in the physical universe is made of gunk or atoms.
gunky time – The view of time as being infinitely divisible. If time is gunky, then there is no such thing as a shortest moment of time.
halo effect – The cognitive bias defined by our tendency to expect people with positive characteristics to have other positive characteristics, and people with negative characteristics to have other negative characteristics. For example, we are more likely to regard those who agree with us as reasonable than those who disagree with us. The halo effect sometimes causes people to think of outsiders who think differently as inferior or evil, and it makes it more likely for people to dismiss the arguments of outsiders with differing opinions out of hand.
halt – (1) When a mechanical procedure ends. For example, we can use truth tables to know if arguments are valid. The procedure halts as soon as we determine whether or not the argument is valid. See “valid argument” and “truth table” for more information. (2) In ordinary language, ‘halt’ means “stop.”
hard determinism – The view that determinism is incompatible with free will, that the universe is deterministic, and that people lack free will.
hard atheism – The traditional view of atheism as the belief that gods don’t exist. Hard atheism is contrasted with “soft atheism.”
hasty generalization – A fallacious argument that concludes something because of insufficient evidence. Hasty generalizations conclude that something is true based on various observations when the observations are not actually a sufficient reason to believe the conclusion is true. For example, to conclude that all birds use their wings to fly based on seeing crows and swans would be a hasty generalization. Not all generalizations are fallacious. See “induction” for more information.
hedonism – The view that pleasure is the only thing worthy of desire in itself and pain is the only thing worthy of avoidance in itself. Some hedonists might think that pleasure and pain are the only things with intrinsic value, but others might think their value is purely psychological—that pleasure is something people universally desire to attain and pain is something people universally desire to avoid.
Hegelian Dialectic – The view that progress is continually made in history when people find ways of attaining greater freedom. Various systems and institutions are often proposed to improve people’s freedom, but they face various problems that prevent freedom from being perfectly enjoyed by everyone (which leads to revolts and revolutionary wars). New and better systems and institutions are then created and the process continues. The first societies were thought to be based on slavery, greater freedom was found within feudalism, and even greater freedom was found in capitalism. Hegelian Dialectic developed the notions of “class conflict” and “social progress.” See “dialectic” for more information.
hermeneutic circle – Interpreting a text by alternating between considering parts and the whole of the text. For example, we can’t understand the definitions given in a dictionary without considering the definitions it gives of several different words and how they all relate. Some philosophers have suggested that the totality of human knowledge is like a hermeneutic circle insofar as we can’t interpret our experiences without referring to other assumptions and experiences within a worldview.
hermeneutics – (1) The systematic study of interpretation regarding texts. (2) Philosophical hermeneutics is the systematic study of interpretations regarding linguistic and nonlinguistic expressions as a whole. For example, an issue of philosophical hermeneutics is, “Why is communication possible?”
heuristic – Experience-oriented techniques for finding the truth, such as a rule of thumb or intuition. For example, thought experiments are used to bring out an intuitive response.
heuristic device – An entity that exists to increase our knowledge of another entity. For example, models might never perfectly correspond to the reality they represent, but they can make it easier for us to understand aspects of reality. Allegories, analogies, and thought experiments are also heuristic devices.
hidden assumption – A synonym for “unstated assumption.”
hidden conclusion – A synonym for “unstated conclusion.”
hidden premise – A synonym for “unstated premise.”
historical dialectic – The view that history is a process that offers various ways of living that face problems, and new and improved ways of living are introduced to avoid the problems the old ones had. This is one way to understand historical progress or “cultural evolution.” “Hegelian dialectic” and “dialectical materialism” could both be considered to be types of “historical dialectic.”
horned dilemma – An objection that shows why a claim can be interpreted or defended in more than one way, but none of those solutions are acceptable. For example, consider the following statement—“This statement is false.” If the statement is false, then it’s true. If it’s true, then it’s false. Neither of these solutions are acceptable because they both lead to self-contradictions.
hot hand fallacy – An argument commits this fallacy when it requires the false assumption that good or bad luck will last a while. For example, a gambler who wins several games of poker in a row is likely to think she’s on a “winning streak” and is more likely than usual to keep winning as a result.
humanism – An approach to something that focuses on the importance of human concerns and away from other-worldly concerns. For example, a humanist would likely be unsatisfied with having religious rituals that are meant to honor the gods, but don’t benefit human beings in any way. Also see “secular humanism” and “religious humanism.”
hypothesis – A defensible speculative explanation for various phenomena. “Hypotheses” are often contrasted with “theories,” but the term ‘theory’ tends to be used to describe hypotheses that have been systematically defended and tested without facing significant counter-evidence.
hypothetical imperative – Imperatives are commands or requirements. Hypothetical imperatives are things we are required to do in order to fulfill our desires or goals. For example, if you are hungry, then you have a hypothetical imperative to get some food to eat. “Hypothetical imperatives” are often contrasted with “categorical imperatives.”
hypothetical syllogism – A rule of inference that states that we can use “if a, then b” and “if b, then c” to validly conclude “if a, then c.” (“a” and “b” stand for any two propositions.) For example, “if all dogs are mammals, then all dogs are animals. If all dogs are animals, then all dogs are living organisms. Therefore, if all dogs are mammals, then all dogs are living organisms.”
hypothetico-deductive method – To start with a hypothesis, consider what conditions or observations would be incompatible with the hypothesis, then set up an experiment that could cause observations that are incompatible with the hypothesis. For example, we could hypothesize that objects continue to move in the same direction until another force acts on them, we could consider that dropping an object on a moving sailing ship should cause the object to continue to move along the path of the sailing ship, and then we can set up an experiment consisting of dropping objects while on sailing ships. The hypothetico-deductive method is a common form of the scientific method.
I-type proposition – A proposition with the form “some a are b.” For example, “some cats are female.”
idea – (1) See “Platonic Forms.” (2) According to Immanuel Kant, a concept of reason that can’t be fully understood through experience alone. (3) In ordinary language, an “idea” is a concept or thought.
ideal observer – A fully-informed and perfectly rational agent that deliberates about a relevant issue in the appropriate way. An ideal observer would have a superior perspective concerning what we should believe concerning each moral issue. It is often thought that ideal observers would determine the social contract that we should agree with, and perhaps all moral truth depends on such a contract. See “ideal observer theory” and “meta-ethical constructivism” for more information.
ideal observer theory – A form of meta-ethical constructivism that states that moral statements are true if an ideal observer would agree with them and false when an ideal observer wouldn’t agree. For example, an ideal observer would likely agree that it’s true that “it’s morally wrong to kill people whenever they make you angry.” A potential example is John Rawls’s “Justice as Fairness.”
idealism – The view that there is ultimately only one kind of stuff, and it’s not material (it’s not physical). Reality might ultimately be a dream-world or Platonic Forms. See “Platonic Forms” or “subjective idealism” for more information.
identity theory – A theory or hypothesis that states that two things are identical. For example, some people think that psychological states are identical to certain brain states, and scientists agree that water is identical to H2O.
ignosticism – The view that we can’t meaningfully discuss the existence of gods until an adequate and falsifiable definition of “god” is presented. “Ignosticism” is often taken to be synonymous with “theological noncognitivism.”
illicit affirmative – The fallacy committed when categorical syllogisms have positive premises and a negative conclusion. All categorical syllogisms with this form are logically invalid. For example, “Some dogs are mammals. All mammals are animals. Therefore, some dogs are not animals.”
illicit contraposition – In categorical logic, illicit contraposition refers to a fallacy committed by an invalid argument that switches the terms of a categorical statement and negates them both. There are two types of illicit contraposition: (a) No a are b. Therefore, no non-b are non-a. (b) Some a are b. Therefore, some non-b are non-a. For example, “Some horses are non-unicorns. Therefore, some unicorns are non-horses.”
illicit conversion – Invalid forms of conversion—invalid ways to switch the terms of a categorical statement. There are two types of illicit conversion: (a) All a are b. Therefore, all b are a. (b) Some a are not b. Therefore, some b are not a. For example, the following is an invalid argument—“Some mammals are not dogs. Therefore, some dogs are not mammals.”
illicit major – A fallacy committed by an invalid categorical syllogism when the major term is undistributed in the major premise, but it’s distributed in the conclusion. For example, the following argument commits the illicit major fallacy—“All lizards are reptiles; no snakes are lizards; therefore, no snakes are reptiles.” See “distribution” for more information.
illicit minor – A fallacy committed by an invalid categorical syllogism when the minor term is undistributed in the minor premise, but it’s distributed in the conclusion. For example, the following argument commits the illicit minor fallacy—“All dogs are mammals; all dogs are animals; therefore, all animals are mammals.” See “distribution” for more information.
illicit negative – The fallacy committed when categorical syllogisms have one or two negative premises and a positive conclusion. All categorical syllogisms with that form are logically invalid. For example, “No fish are mammals. Some mammals are dogs. Therefore, some fish are dogs.”
illicit process – A fallacy committed when categorical syllogisms have a term distributed in the conclusion without being distributed in a premise. All categorical syllogisms that commit this fallacy are logically invalid. For example, “All lizards are reptiles. Some reptiles are lizards. Therefore, all reptiles are lizards.” See “distribution,” “illicit major” and “illicit minor” for more information.
illicit transposition – A synonym for “improper transposition.”
illocutionary act – The act of communication with some intention. For example, to get people to do something, to persuade, to educate, or to make a promise.
illocutionary force – The intended semantic meaning of a speech act. For example, someone could say, “I can see the morning star” without knowing that the morning star is Venus.
illusory superiority – The cognitive bias defined by people’s tendency to think they have above average characteristics in all areas. People tend to overestimate their abilities and underestimate the abilities of others. For example, people are likely to think they have a higher IQ than they really do. This bias is related to the “self-serving bias.”
immeasurable – The quality of something that can’t be measured or quantified.
immanence – Presence within the physical universe. Some people believe God is immanent. “Immanence” is often contrasted with “transcendence.”
immoral – A synonym for “morally wrong.”
impartial spectator – Someone with a moral point of view who has no bias to grant favoritism to any side of a conflict or competition. The concept of an “impartial spectator” is generally found in ethical systems that lack moral facts and claim that emotion plays an important role in determining right and wrong. It is then said that what is right or wrong depends on what an “impartial spectator” would think is right or wrong in that situation, which could depend on the emotions of the impartial spectator. The “impartial spectator” is often used as a synonym for the “ideal observer.”
impartiality – Without bias, nonrational preference, or favoritism. Decisions are impartial if they’re based on rational principles rather than subjective desires.
imperative – A command or prescription for behavior. See “categorical imperative” and “hypothetical imperative” for more information.
imperfect duty – A duty that can be manifested in a variety of ways and allows for personal choice. For example, Immanuel Kant argues that we have an imperfect duty to develop our talents and help others. It is imperfect because we have to choose how to develop our talents and help others. Additionally, these duties are limited because we would otherwise be required to spend our entire lives relentlessly developing our talents and helping others, but that would be too demanding on us. “Imperfect duties” contrast with “perfect duties.”
impermissible – What is forbidden, or what is not allowed, or what we are obligated not to do. Something is impermissible when it falls short of certain relevant standards. Impermissible beliefs are incompatible with rationality and impermissible actions are incompatible with moral requirements. We are obligated not to believe something that’s epistemically impermissible, and we are obligated not to do something that’s morally impermissible. “Impermissible” beliefs and actions are often contrasted with “permissible” or “obligatory” ones.
impossibility – The property of being not possible. Impossible things are neither possible, contingent, nor necessary. See “physical impossibility,” “metaphysical impossibility,” and “logical impossibility” for more information.
implication – (1) The logical consequences of various beliefs. For example, the implication of “all cats are mammals” and “if all cats are animals, then all cats have DNA” is “all cats have DNA.” The implication could be said to be implied by the other propositions. (2) A conditional proposition or state of affairs. See “material conditional.” (3) A rule of replacement that states that “if a, then b” and “not-a and/or b” both mean the same thing. (“a” and “b” stand for any two propositions.) For example, “if dogs are lizards, then dogs are reptiles” means the same thing as “dogs are not lizards, and/or dogs are reptiles.”
implicit knowledge – A synonym for “tacit knowledge.”
improper transposition – A logically invalid argument with the form “If a, then b. Therefore, if not-a, then not-b.” For example, “If all lizards are mammals, then all lizards are animals. Therefore, if not all lizards are mammals, then not all lizards are animals.” See “transposition” for more information.
inadvisable – See “suberogatory.”
inclusive or – An “or” used to designate that either one proposition is true or another is true, and they might both be true. The logical form of an inclusive or is “either a or b, or a-and-b.” For example, “either Socrates is a man or he has two legs” allows for the possibility that Socrates is both a man and something with two legs, but it doesn’t allow for the possibility that Socrates is neither a man nor something with two legs. We often use the term “and/or” to refer to the “inclusive or.” People often contrast the “inclusive or” with the “exclusive or.”
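As a rough illustration (not from the original text), Python’s “or” behaves like the inclusive or: the whole statement is true when either part is true or both are, and false only when both parts are false:

    socrates_is_a_man = True
    socrates_has_two_legs = True

    print(socrates_is_a_man or socrates_has_two_legs)  # True, even though both parts are true
    print(False or False)                              # False: the only case where an inclusive "or" fails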
incommensurability – A feature of various things that makes it impossible to determine which is superior or overriding. For example, it’s impossible to rationally determine if one value is superior to another if they’re incommensurable; and it could be impossible to rationally determine if one theory is superior to another if they’re incommensurable. We can assume that pleasure and human life both have value, but we might not be able to know for sure if a longer life with less pleasure would be better than a shorter life with more pleasure.
incompatibilism – The view that free will and determinism are not compatible. Incompatibilists that believe in free will are “libertarians” and those who reject free will are “hard determinists.”
inconsistent – Beliefs or statements that form a contradiction. See “contradiction” for more information.
incorrigible – The feature of a proposition that makes the proposition necessarily true simply because it’s believed. A plausible example is Rene Descartes’s argument, “I think therefore I am.” If we think it, then it seems like it has to be true.
indefeasible – An argument that can’t be defeated by additional information. Indefeasible arguments are sufficient reasons to believe a conclusion and no additional information could provide a better reason to reject the conclusion. The opposite of “indefeasible” is “defeasible.”
indeterminism – The view that not everything is causally determined. A rejection of “determinism.” For example, some philosophers and scientists believe that quantum mechanics is evidence for indeterminism. The behavior of subatomic particles seems to be random and unpredictable.
index of points – A set of points. Some philosophers believe that the truth conditions of necessity and possibility are based on an index of points. Aristotle thought that something was necessary if and only if it is true at all times, and possible if and only if it is true at some time. It’s necessary that “1+1=2” because it’s true at all times, and it’s possible for a person to jump over a small rock because it’s true at some time. In this case the index of points refers to points in time. See “truth conditions” for more information.
indexicals – Linguistic expressions that shift their reference depending on the context, such as “here,” “now,” and “you.” Indexical reference points to something and does not rely on describing the reference. Our descriptions of objects are often wrong, but we can still talk about the objects by using indexicals. For example, a person would be wrong to describe water as “the type of stuff that’s always a liquid that we use for hydration” insofar as water is not always a liquid, but we could still talk about water by pointing to it.
indirect proof – A strategy used in natural deduction to prove an argument form is logically valid, consisting of assuming the premises of an argument are true, but the conclusion is false. If this assumption leads to a contradiction, then the argument form has been proven to be logically valid. For example, consider the argument form “If A, then B. A. Therefore, B.” (“A” and “B” are specific propositions.) An indirect proof of this argument is the following:
Assume the premises are true and the conclusion is false (not-B is true).
We know that “if A, then B” is true, and B is false, so A must be false. (See “modus tollens.”)
Now we know that A is true and false.
But that’s a contradiction, so the original argument form is logically valid.
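A minimal Python sketch (an added illustration, not part of the original proof) checks the same argument form by brute force: the form is valid because no assignment of truth values makes both premises true and the conclusion false:

    from itertools import product

    def valid(premises, conclusion):
        # Valid iff no assignment makes every premise true and the conclusion false.
        for A, B in product([True, False], repeat=2):
            if all(p(A, B) for p in premises) and not conclusion(A, B):
                return False
        return True

    # "If A, then B. A. Therefore, B."
    premises = [lambda A, B: (not A) or B, lambda A, B: A]
    conclusion = lambda A, B: B
    print(valid(premises, conclusion))  # True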
induction – To generalize based on a sample, often by assuming that the future will resemble the past in order to arrive at conclusions. For example, a person who only sees white swans could conclude that all swans are white. Also, a person who knows that bread has always been nutritious could conclude that nearly identical types of bread will still be nutritious tomorrow. Not all inductive reasoning is well-reasoned. See “hasty generalization” for more information. “Induction” is often contrasted with “deduction.”
inductive arguments – Arguments that use inductive reasoning to come to conclusions. See “induction” for more information.
inductive reasoning – See “induction.”
inductive validity – A synonym for “strong argument.”
infallible – Free of error and absolutely accurate. The opposite of “infallible” is “fallible.”
infallibilism – The view that to know something is to have a true belief that has been justified in a way that guarantees that the belief is true. This view equates knowledge with absolute certainty. The opposite of “fallibilism.”
inference – Coming to a conclusion from various propositions. For example, a person who knows that “all birds are warm-blooded” and “all crows are birds” could infer that “all crows are warm-blooded.” See “deduction” and “induction” for more information.
inferential reasoning – Reasoning that takes the form of argumentation (premises that give evidence for conclusions). To draw inferences from various beliefs. For example, a person who knows that “all men are mortal” and that “Socrates is a man” can realize that “Socrates is mortal.” Both “deduction” and “induction” are forms of inferential reasoning. Sometimes “inferential reasoning” is contrasted with “noninferential reasoning.”
infinite regress – (1) When a proposition requires the support of another proposition, but the second proposition requires the support of a third proposition, on and on forever. The implication is that infinitely many propositions are required to justify any other proposition. Philosophers often discuss infinite regresses as being an objectionable implication of certain beliefs, but some philosophers argue that not all infinite regresses are “vicious” (a reason to be rejected). For example, the belief that rational beliefs must be proven to be true requires an infinite regress to justify any proposition, but it is likely impossible for a person to actually justify a belief this way insofar as it would require infinite justifications. We would have to justify a proposition with an argument consisting of at least one other proposition, but then we would have to justify the second proposition with another argument, ad infinitum. (2) A process with no beginning or end. For example, it’s possible that the universe always existed and always will exist. Assuming every state of the universe causes the future states of the universe, there is a causal chain consisting of an infinite series of events with no beginning or end.
infinitism – The view that we are never done justifying a belief because every belief should be justified by an argument, but arguments have more premises that must also be justified. That requires us to justify our beliefs on and on forever. Imagine that you justify a belief with an argument, such as “we generally shouldn’t punch people because we generally shouldn’t hurt people.” Someone could then require us to justify our premise (that we generally shouldn’t hurt people). We could then say that “we generally shouldn’t hurt people because it causes suffering.” Someone could then want to know why this premise is justified (that hurting people causes suffering). This can go on and on forever. In order to know something, the infinitist believes that we will have to meet an infinite regress by having infinite justifications. However, infinitists don’t believe that the regress is vicious (a reason to reject their theory). See “vicious regress” for more information.
informal fallacy – An error in reasoning committed by an argument that is not merely a “formal error” (being an invalid argument).
informal logic – The domain of logic concerned with natural language rather than argument form. Informal logic covers critical thinking, argument analysis, informal fallacies, argument identification, identifying unstated assumptions, and the distinction between deductive and inductive reasoning. Informal logic generally excludes controversial issues related to the nature of knowledge, justification, and rationality. “Informal logic” can be contrasted with “formal logic.”
inherent value – Something that could help cause intrinsically good states to exist, but does not necessarily cause anything intrinsically good to exist. For example, a beautiful painting might be inherently good insofar as it can help cause intrinsically good experiences, but it might be hidden away in an attic and never cause any intrinsically good states. “Inherent value” is a type of “extrinsic value.”
innate ideas – Concepts or knowledge that we are born with. For example, Rene Descartes thought that we are born with the concept of perfection and could innately know that existence is a perfection. If innate ideas exist, then we have to reject “empiricism.”
innatism – The view that “innate ideas” exist.
inner sense – Our ability to experience states of the mind as opposed to the external world. “Inner sense” can be contrasted with “outer sense.”
intentional objects – The objects that our thoughts or experiences refer to. For example, seeing another person involves an intentional object outside of our mind—another person. Some intentional objects are thought to be abstract entities, such as numbers or logical concepts.
inverse – An if/then proposition that is inferred from another if/then proposition. It is valid to conclude that one if/then proposition can be inferred from another whenever they both mean the same thing. It is valid to conclude from any proposition with the form “if a, then b” that “if not-b, then not-a” (this inferred proposition is also commonly called the contrapositive). For example, we can infer that “if it is false that all dogs are animals, then it is false that all dogs are mammals” from the fact that “if all dogs are mammals, then all dogs are animals.” “Transposition” is the name given to valid rules of inference using an inverse.
inversion – To infer an if/then proposition from another if/then proposition. See “inverse” for more information.
irrealism – A synonym for “anti-realism.”
institutional fact – Facts that exist because of collective attitudes or acceptance. For example, the value of money is an institutional fact and money would have no value if people didn’t agree that it has value. Institutions, such as the police force, government, and corporations all depend on institutional facts (because they can only exist due to collective attitudes and acceptance).
instrumental value – The usefulness of something. For example, knives have instrumental value for cutting food.
instrumentalism – A form of scientific anti-realism that claims that we should use the concept of unobservable scientific entities if they are useful within a theory or model, and we should not concern ourselves with whether such entities actually exist. For example, electrons are an important part of our scientific theories and hypotheses, so instrumentalists would agree that we should continue to talk about electrons and use them when conceptually thinking about our theories. Even so, instrumentalists would not claim that electrons exist.
intellectual virtues – Positive characteristics that help us reason well, such as open-mindedness, skepticism, perception, and intuition. “Virtue epistemology” is concerned with our intellectual virtues. “Virtue reliabilism” and “virtue responsibilism” are two different views about intellectual virtues, and they require that intellectual virtues cover differing domains.
intentionality – Also called “intentionality with a ‘t.’” The ability of thought to refer to or be about things. Philosophers discuss intentionality when they want to understand what it means to refer to objects or how we can refer to objects. Some philosophers also argue that there are “intentional objects” that are abstract or non-existent. (For example, numbers could be abstract intentional objects.)
intension – What a term means or how a word refers to things, which is often given in terms of a description. For example, the intension of “the morning star” is “the last star that can be seen in the morning” and the intension of “the evening star” is “the first star we can see at night.” Therefore, they both have a different intension, even though they both refer to Venus. “Intension” is often contrasted with “extension.”
intensionality – Also called “intensionality with an ‘s.’” Intensionality refers to the meaning and reference of words. Sometimes what a word means is different from what it refers to. For example, “the morning star” and “the evening star” both refer to Venus, but the meaning of the terms are different, so they both have a different intension. (The “morning star” is the last star we can see in the morning and the “evening star” is the first star we can see at night.) “Intensionality” can be contrasted with “extensionality.” See “sense” and “reference” for more information.
interchange – In categorical logic, interchange is the act of switching the first and second term of a categorical statement. For example, the interchange of “all men are mortal things” is “all mortal things are men.” See “conversion” for more information.
internalism – See “epistemic internalism,” “motivational internalism,” or “semantic internalism.”
interpretation – (1) To attribute meaning to statements of a formal logical system. Formal logical statements are devoid of content, but we can add content to them in order to transform them into statements of natural language. For example, “A or B” is a statement of a formal logical system, and it can be interpreted as stating, “either evolution is true or creationism is true.” In this case “A” stands for “evolution is true” and “B” stands for “creationism is true.” See “formal semantics,” “models,” and “schemes of abbreviation” for more information. (2) To try to understand information when there are multiple ways of doing so. Information can be ambiguous or vague, so interpretation can be necessary to attempt to understand it properly. For example, a person who sees the Sun set could think that they are seeing the Sun go around the Earth or they could think that they are seeing the Earth spin and turn away from the Sun as a result. See “theory-laden observation” and “ambiguity” for more information.
intrinsic value – Something with value just for existing. We might say happiness is “good for its own sake” to reflect that it is good without merely being useful to help us attain some other goal. If something is intrinsically good, then it is something we should try to promote. For example, if human life is intrinsically good, then all things equal, saving lives would plausibly be (a) rational, (b) a good thing to do, and (c) the right thing to do.
introspection – An examination of our first-person experiences. For example, we can reflect about what it’s like to feel pain or what it’s like to see the color green.
intuition – A form of justification that is difficult to fully articulate. A belief is strongly intuitive when rejecting it seems absurd (i.e. leads to counterintuitive implications), and a belief is weakly intuitive when accepting it doesn’t seem to conflict with any of our strongly intuitive beliefs. For example, we intuitively know that “1+1=2,” even if we can’t explain how we know it; and it’s counterintuitive to think it’s always morally wrong to give to charity. Some philosophers think we can know if a proposition is “self-evident” from intuition.
intuition pump – A thought experiment designed to make a certain belief seem more intuitive. For example, Hilary Putnam’s Twin Earth thought experiment asks us to imagine that there’s another world exactly like the Earth except water is replaced by another chemical that seems to be exactly like water except it’s not made of H2O. He argues that it’s intuitive to think that the chemical is not water despite the fact that all our experiences of it could be identical.
intuitionism – See “mathematical intuitionism,” “epistemic intuitionism,” “meta-ethical intuitionism,” and “Ross’s intuitionism.”
invalid – See “invalid argument” or “invalid logical system.”
invalid argument – An argument form that can have true premises and a false conclusion at the same time. An example of an invalid argument is the following—“Socrates is either a man or a mortal. Socrates is a man. Therefore, Socrates is not a mortal.” “Invalid arguments” are the opposite of “valid arguments.” See “logical form” for more information.
invalid logical system – A logical system that has one or more invalid rules of inference. If a logical system is invalid, then it’s possible for true premises to be used with the rules of inference to prove a false conclusion. “Invalid logical systems” are the opposite of “valid logical systems.” See “rules of inference” for more information.
inverse error – A synonym for “denying the antecedent.”
ipso facto – Latin for “by the fact itself.” It refers to something that is a direct consequence of something else. It means something similar to the phrase “in and of itself.” For example, people who lack driver’s licenses ipso facto can’t legally drive.
irreducible – Something is irreducible if it can’t be fully understood in terms of something else, or if it’s greater than the sum of its parts. If something is irreducible, we can’t find out that it was “actually something else” all along. We found out that water could be reduced to H2O, so water was reducible to facts of chemistry. However, some philosophers argue that minds are irreducible to facts of biology, and that morality is irreducible to social constructs. See “emergentism” for more information.
is/ought gap – The difference between what is the case and what ought to be the case. The is/ought gap is discussed by those who believe that morality is a totally different domain from other parts of reality, and/or that we can’t know moral facts from non-moral facts.
jargon – Technical terminology as used by specialists or experts. Jargon terminology is not defined in terms of common usage—how people generally use the words in everyday life. Instead, they are defined in ways that are convenient for specialists. For example, logicians, philosophers, and other specialists define “valid argument” in terms of an argument form that can’t possibly have true premises and a false conclusion at the same time, but most people use the term “valid argument” as a synonym for “good argument.” See “stipulative definition” for more information. “Jargon” can be contrasted with “ordinary language.”
judgment – (1) A belief or an attitude towards something. For example, “moral judgment” generally refers to a moral belief (e.g. that stealing is wrong) or to an attitude towards a state of affairs (e.g. disliking stealing). Philosophers argue about whether moral judgments are actually beliefs or attitudes (or both). (2) The capacity to make decisions. “Good judgment” is the ability of some people to make reasonable or virtuous decisions. (3) A decision. “He made a good judgment” means that the decision someone made was reasonable or virtuous.
justice – An ethical value concerned with fairness, equality, and rights. Theories of justice are meant to determine how we should structure society, how wealth should be distributed, and what each person deserves.
Justice as Fairness – John Rawls’s theory of justice that states that people should have the maximal set of rights including a right to certain goods, and that economic and social inequality is only justified if it benefits those who are least-well-off in the society. See “original position,” “veil of ignorance,” “primary social goods,” and the “difference principle” for more information.
justification – (1) Evidence or reasons to believe something. Observation is one of the strongest forms of justification; but self-evidence, intuition, and appeals to authority could also be legitimate forms of justification. For example, people can justify their belief that they can feel pain by having actual pain experiences. (2) The supporting premises of an argument.
justified belief – Some philosophers believe that justified beliefs are those that are given a sufficiently good justification, but it is possible that justified beliefs are defensible beliefs that one has no sufficient reason to reject. For example, a typical uncontroversial example of a justified belief is the belief that “1+1=2” but few to no people know how to properly justify this belief using argumentation.
Kant’s Categorical Imperative – Immanuel Kant’s moral theory. The first formulation of his Categorical Imperative states that people should only act when the subjective motivation for the act can be rationally universalized for all people. According to Kant, we should only act based on a subjective principle that we can will as a universal law of nature—everyone would act on the same principle. This guarantees that moral acts are not hypocritical. For example, we shouldn’t go around burning people’s houses whenever (and just because) they make us angry because we couldn’t rationally will that anyone else will be motivated to act in that way. See “categorical imperative” and “maxim” for more information.
know how – The ability to do things well, such as playing musical instruments, fighting, building ships, or healing the sick. “Know how” is often contrasted with “theoretical knowledge.”
knowledge – Classically defined as “justified true belief,” but many argue that it must be “justified in the right way” or that there might be a fourth factor. An eyewitness who sees a murderer commit the act knows who the murderer is because the belief is justified through observation and the belief is true. However, consider a situation where Sally believes that cows are on the hillside because she mistakes cardboard cutouts of cows as the real thing, and some real cows are on the hillside hiding behind some trees. The belief is justified and true, but some philosophers argue that Sally doesn’t actually know that cows are on the hillside.
laissez-faire – French for “allow to act.” It generally refers to free market capitalism with little to no government regulation of the market (other than to prevent theft and enforce contracts).
law of excluded middle – The logical principle that states that every proposition is true, or its negation is true. This implies that all tautologies are true—propositions with the form “a or not-a.” This also implies that no propositions can be true and false at the same time (i.e. contradictions are impossible). The “law of excluded middle” is similar to the “principle of bivalence.”
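A two-line Python check (purely illustrative, not from the glossary) shows that “a or not-a” comes out true on every assignment of truth values:

    for a in (True, False):
        print(a or not a)   # prints True in both cases, so "a or not-a" is a tautology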
law of identity – The logical principle that states that every proposition or object is identical to itself (i.e. a=a).
law of nature – (1) A constant predictable element of nature. For example, the law of gravity states that objects will fall when dropped near the surface of the Earth. (2) A synonym for “natural law.”
law of non-contradiction – The logical principle that states that contradictions are impossible. It’s impossible for a statement to be true and false at the same time (i.e. propositions with the form “a and not-a” are always false).
lemma – A proven statement used to prove other statements.
letter – (1) A symbol used from an alphabet in symbolic logic. See “predicate letter” and “propositional letter” for more information. (2) A symbol used in an alphabet, such as “A, B, [and] C.” (3) A message written on a piece of paper for the purposes of communication over a distance.
lex talionis – Latin for “law of retaliation.” It’s often used to refer to the view that a punishment fits the crime if it causes the same injury as the crime, but it can also be used to refer to retributive justifications for punishment in general.
lexical definition – A dictionary definition, or the meaning of a term in “common usage.” Dictionary definitions are often vague or ambiguous because words tend to be used in many different ways by people. “Lexical definitions” can be contrasted with “stipulative definitions.”
liberalism – (1) A presumption that freedom is preferable—that liberty is generally a good, and we shouldn’t restrict people’s freedom unless we have an overriding reason to do so. Liberalism does not require a specific conception of freedom. For example, not all liberals agree that freedom requires a person to be in control of her own desires. (2) The political and ethical positions of liberals. For example, that the government can help solve social problems, and that it is sometimes just to redistribute wealth from the rich to the poor.
libertarian free will – Free will as described by incompatibilists—as being incompatible with determinism. Libertarian free will requires causation that resembles that of Aristotle’s prime mover. People need to be able to cause their actions without being caused to make those actions.
libertarianism – See “metaphysical libertarianism” or “political libertarianism.”
Leibniz’s law – The view that there can’t be two or more different entities that have the exact same properties. For example, two seemingly identical marbles are both made of different atoms and exist at different places. Two entities that have all the same properties would both have to exist at the same place at the same time. Imagine that we find out that Clark Kent has all the same properties as Superman. For example, Clark Kent was at precisely the same place as Superman at exactly the same time. That seems like a good reason to think that Clark Kent is Superman, because Clark Kent and Superman can’t have all the same properties and be two different people.
life-affirmation – To value life no matter what it consists of. Both suffering and death could be considered to be part of life, but a life-affirming attitude would require us to value life as a whole despite these considerations. Life could be considered to be valuable despite death and suffering, or death and suffering could also be considered to have value. Life-affirming morality primarily focuses on goodness and things with value; and badness is primarily understood as things lacking value rather than as having a negative value. According to Friedrich Nietzsche, “master morality” is a type of life-affirming morality. A similar concept to “life-affirmation” is that of “amor fati.”
life-denying – To see the whole of life as primarily having negative value. The negative value associated with pain, suffering, or death are seen as being more important than the positive value associated with pleasure, happiness, or life. Life-denying morality primarily focuses on evil or negative value, and goodness is primarily understood as being not evil or not harmful to people. According to Friedrich Nietzsche, “slave morality” is a type of life-denying morality. The opposite of being “life-affirming.”
literary theory – A systematic attempt to understand and interpret literature in a reasonable way.
loaded question – (1) A fallacy committed when a question implies a question-begging presumption. For example, consider the following question—“Why do liberals want to destroy families?” This question implies that liberals want to destroy families without question, but it is a controversial accusation to make against liberals, and liberals are unlikely to agree with it. Loaded questions are a version of the “begging the question” fallacy. (2) A question that implies a presumption, but is not necessarily fallacious. For example, a police officer might ask a potential shoplifter, “Why did you steal the clothes?” The shoplifter might have already admitted to stealing the clothes. In that case this question would not be fallacious. However, if the police officer does not know that the person stole the clothes, then the question could be fallacious.
loaded words – (1) A fallacy committed when words are used to imply a question-begging presumption or evoke an emotional response. For example, the words ‘weed’ or ‘job-creator’ are often used as loaded words. The word ‘weed’ could be used merely to refer to certain plants that grow quickly and disrupt the equilibrium of a habitat, but it is more often used to imply that a plant is a nuisance and should be destroyed. It would be fallacious to presume that all plants we don’t like ought to be destroyed because such plants could be good for the environment in various ways. The term ‘job-creator’ could be used to merely refer to someone who creates jobs, but it is more often used to refer to wealthy people with the presumption that wealthy people inherently create jobs by their mere existence. It would be fallacious to presume that all wealthy people create jobs merely by existing because it’s a contentious issue. (2) Words used to imply presumptions or evoke an emotional response that are not necessarily fallacious. For example, some political leaders might truthfully be said to be tyrants. The term ‘tyrant’ is used to refer to a political leader, but it is used to imply that there is something wrong about how a political leader behaves—that the political leader abuses her power. However, it could be fallacious to call a political leader a ‘tyrant’ in order to presume she abuses her power when it’s a contentious issue.
locutionary act – A speech act with a surface meaning based on the semantics or language the act is expressed in (as opposed to the intended meaning). For example, a person can sarcastically say, “There is no corruption in the government.” The surface meaning is the literal meaning, but the statement is intended to mean the opposite (that there is corruption in the government).
logic – (1) The study of reliable and consistent reasoning. Logic is divided into “formal logic” and “informal logic.” Logic focuses on argument form, validity, consistency, argument identification, argument analysis, and informal fallacies. Logic tends to exclude controversial issues related to the nature of argumentation, justification, rationality, and knowledge. (2) The underlying form and rules to various types of communication. For example, R.M. Hare argues that there is a noncognitive logic involving imperatives. (3) In ordinary language, “logic” often refers to vague concepts involving “good ways of thinking” or “the reasoning someone uses.”
logical argument – (1) Rational persuasion. (2) A logically valid argument.
logical connective – The words used to connect propositions (or symbols that represent propositions) in formal logic. Various logical connectives are the following: “not” (¬), “and” (∧), “or” (∨), “implies” (→), and “if and only if” (↔). Logical connectives are the only words contained within propositional logic once the content is removed. For example, “Socrates is a man and he is mortal” can be translated into propositional logic as “A ∧ B.” In this case “A” stands for “Socrates is a man” and “B” stands for “Socrates is mortal.”
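As a rough sketch (added for illustration; the symbols follow the list above), the five connectives can be written as functions on truth values in Python:

    def NOT(a):        return not a           # ¬a
    def AND(a, b):     return a and b         # a ∧ b
    def OR(a, b):      return a or b          # a ∨ b
    def IMPLIES(a, b): return (not a) or b    # a → b
    def IFF(a, b):     return a == b          # a ↔ b

    # "Socrates is a man and he is mortal" translated as "A ∧ B":
    print(AND(True, True))  # True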
logical constant – Symbols used in formal logic that always mean the same thing. Logical connectives and quantifiers are examples of logical constants. For example, “∧” is a logical connective that means “and.” See “logical connective” and “quantifier” for more information. “Logical constants” can be contrasted with “predicate constants.”
logical construction – A concept that refers to something other than particular actual objects. Consider the statement “the average car bought by the average American lasts for five years.” This statement refers to “average cars” and no such car actually exists, and it refers to “average American” and no such person actually exists. Both of these concepts are logical constructions.
logical contingence – Propositions that are not determined to be true or false from the rules of formal logic alone. For example, it’s logically contingent that the laws of nature exist. Logically contingent propositions are neither tautologies nor contradictions. See “logical modality” for more information.
logical equivalence – Two sentences that mean the same thing. For example, “no dogs are lizards” is logically equivalent to “no lizards are dogs.”
logical form – The logical form of an argument consists in the truth claims devoid of content. “The sky is blue or red” has the same logical form as “the act of murder is right or wrong.” In both cases we have the form, “a or b.” (“a” and “b” are propositions.) In this case the truth claim is that one proposition is true and/or another proposition is true.
logical impossibility – The logical status of contradictions. Logically impossible statements can’t be true because of the rules of logic (i.e. because they form a contradiction). For example, it’s logically impossible for a person to exist and not exist at the same time. See “logical modality” for more information.
logical modality – The status of a proposition or series of propositions concerning the rules of formal logic—logically contingent propositions could be true or false, logically necessary propositions have to be true (are tautologies), and logically impossible propositions have to be false (because they form a contradiction). For example, it is logically contingent that the Earth exists.
logical necessity – The logical status of tautologies. Logically necessary statements must be true because of the rules of logic. For example, it’s logically necessary that the laws of nature either exist or they don’t exist. See “logical modality” for more information.
logical operator – A synonym for “logical connective.”
logical positivism – A philosophical movement away from speculation and metaphysics, and towards descriptive and conceptual philosophy. Logical positivists accept “verificationism.”
logical possibility – (1) A proposition that’s either logically contingent or logically necessary. We might say that “it’s logically possible that the Earth exists” or we might say that “it’s logically possible that the Earth either exists or it doesn’t.” (2) Sometimes “logical possibility” is a synonym for “logical modality.”
logical structure – A synonym for “logical form.”
logical system – A system with axioms and rules of inference that can be applied to statements in order to determine if propositions are consistent, tautological, or contradictory. Additionally, logical systems are used to determine if arguments are logically valid. See “formal logic,” “axioms,” and “rules of inference” for more information.
logical truth – See “tautology.”
logically valid – See “valid.”
logicism – The view that mathematics is reducible to logic. If logicism is true, we could derive all true mathematical statements from true statements of logic.
logos – Greek for “word” or “language.” It is often used to refer to logical argumentation.
main connective – The logical connective that is inside the fewest parentheses when a statement is put into a formal language (i.e. the connective with the widest scope). For example, consider the statement “all dogs are mammals or reptiles, and all dogs are animals.” This statement has the propositional form “(A or B) and C.” In this case “and” is the main connective. See “formal logic,” “logical connective,” and “grouping” for more information.
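Finding the main connective can be done mechanically by scanning a formula and keeping the connective that sits at the smallest parenthesis depth. A toy Python sketch (not a full parser; it assumes a well-formed formula written with the symbols used above):

```python
def main_connective(formula):
    """Return the connective at the smallest parenthesis depth.
    Works for simple formulas such as "(A ∨ B) ∧ C"."""
    connectives = {"¬", "∧", "∨", "→", "↔"}
    depth = 0
    best, best_depth = None, float("inf")
    for ch in formula:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch in connectives and depth < best_depth:
            best, best_depth = ch, depth
    return best

print(main_connective("(A ∨ B) ∧ C"))  # ∧ — the main connective
```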
major premise – The premise of a categorical syllogism containing the “major term” (the second term found in the conclusion). For example, consider the following categorical syllogism—“All dogs are mammals. All mammals are animals. Therefore, all dogs are animals.” In this case the major premise is “all mammals are animals.”
major term – The second term in the conclusion of a categorical syllogism. If the conclusion is “all dogs are mammals,” then the major term is “mammals.”
mandatory – A synonym for “obligatory.”
master morality – A life-affirming type of moral system primarily focused on goodness, which is primarily understood as superiority, excellence, greatness, strength, and power. Good or superior things are contrasted with “bad things,” which are seen to be inferior, mediocre, and weak. “Master morality” is often contrasted with “slave morality.”
master table – A truth table that defines all logical connectives used by a logical system by stating every combination of truth values, and the truth value of propositions that use the logical connectives. For example, the logical connective “a ∧ b” means “a and b,” so it’s true if and only if both a and b are true. (“Hypatia is a mammal and a person” is true because she is both a mammal and a person. See “logical connective” for more information.) A master table for propositional logic lists the truth values of “¬a,” “a ∧ b,” “a ∨ b,” “a → b,” and “a ↔ b” for every combination of truth values of a and b; a sketch that generates such a table appears below.
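The following sketch (ours, written in Python for illustration) prints that table; the rows it produces are the standard truth-table rows:

```python
from itertools import product

# Print a master table for ¬a, a ∧ b, a ∨ b, a → b, and a ↔ b.
headers = ["a", "b", "¬a", "a∧b", "a∨b", "a→b", "a↔b"]
print("  ".join(f"{h:<5}" for h in headers))
for a, b in product([True, False], repeat=2):
    row = [a, b, not a, a and b, a or b, (not a) or b, a == b]
    print("  ".join(f"{str(v):<5}" for v in row))
```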
material cause – The stuff a thing is made out of. For example, the material cause of a stone statue is the stone it is made out of.
material conditional – A proposition that states that one thing is true if something else is true. It has the logical form “If a, then b.” A material conditional can also be expressed as “b if a.” There are two common symbols used for the material conditional in formal logic: “⊃” and “→.” An example of a statement using one of these symbols is “A → B.”
material equivalence – A proposition that states that one thing is true if and only if something else is true. Either both propositions are true or both are false. The logical form of a material equivalence is “a if and only if b.” Material equivalence can also be expressed as “a-and-b, or not-a-and-not-b” or “if a, then b; and if b, then a.” There are two common symbols used in formal logic for material equivalence: “≡” and “↔.” An example of a statement using one of these symbols is “A ↔ B.”
material implication – A synonym for “material conditional.”
materialism – The view that ultimately only matter and energy exists—that there is only one kind of stuff, and everything is causally connected to particles and energy. “Materialism” is often taken as a synonym for “physicalism.”
mathematical anti-realism – The view that there are no mathematical facts. For example, what we take to be “true mathematical statements” could be based on a social construction or convention.
mathematical intuitionism – The view that mathematics is a construction of the mind, and that mathematicians communicate by producing the same types of mental constructions in one another’s minds.
mathematical platonism – The view that there is at least one mathematical fact and that there are abstract mathematical entities. For example, numbers can be abstract entities.
mathematical realism – The view that there is at least one mathematical fact that is not dependent on a social construction or convention. Many mathematical realists believe that numbers are real (exist as abstract entities) and that it is impossible for the universe to violate mathematical truths.
matters of fact – Empirical statements concerning the physical world. They can be known to be true or false from observation. For example, “all dogs are mammals” is a matter of fact. David Hume believed the only propositions that could be justified were “matters of fact” and “relations of ideas.”
maxim – A subjective motivational justification. For example, Lilith, who punches an enemy who makes her angry, might simultaneously assume the action is justified by assuming that anger can justify acts of violence towards others. Immanuel Kant used this concept of a maxim for his moral theory (Kant’s Categorical Imperative)—he believed that people should act only on a maxim that they can rationally will everyone else to act on as well. In that case Lilith shouldn’t punch people based on her anger because she probably can’t rationally will that everyone else do the same.
maximally complete – See “syntactic completeness.”
maximin rule – When deciding on what system to use, the maximin rule requires that we choose the system that has the least-bad possible outcome. It could be described as a risk-averse rule because some people might want to take a chance at being wealthier, even if they also risk being poorer.
maximize expected utility – (1) The view that states that a person ought to make decisions based on whatever will probably lead to the greatest utility (the most valued or desired state). See “utility theory” and “stochastic dominance” for more information. (2) To make a decision that will probably lead to the most preferable outcome considering all possible outcomes of all possible decisions.
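To see how the maximin rule and maximizing expected utility can recommend different choices, consider this small Python sketch; the options, probabilities, and utility numbers are invented purely for illustration:

```python
# Two options, each with (probability, utility) pairs for its possible outcomes.
# The payoff numbers are hypothetical, chosen only to illustrate the two rules.
options = {
    "safe_job":      [(0.5, 40), (0.5, 50)],
    "risky_startup": [(0.5, 0), (0.5, 200)],
}

# Maximin: pick the option whose worst outcome is best.
maximin_choice = max(options, key=lambda o: min(u for _, u in options[o]))

# Expected utility: pick the option with the highest probability-weighted payoff.
expected_choice = max(options, key=lambda o: sum(p * u for p, u in options[o]))

print(maximin_choice)   # safe_job — best worst case (40 vs 0)
print(expected_choice)  # risky_startup — highest expected utility (100 vs 45)
```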
meaning – (1) The value, importance, or worth of something. For example, the meaning of life could be to make people happier. (2) The definition or semantics of terms, sentences, or symbols. For example, the meaning of water is “the stuff we drink to hydrate our bodies made of H2O.”
meaning of life – What we should do with our life and what “really matters.” If something really matters, then we might have reason to promote it. For example, happiness seems to really matter. If happiness is worthy of being a meaning of life, then we should try to make people happy. Some philosophers believe that the meaning of life is related to what has “intrinsic value.”
means of production – Machines and natural resources used for production of goods. For example, oil and oil refineries.
means to an end – The method, tools, or process used to accomplish a goal. Sometimes people are said to be inappropriately treated as “means to ends” rather than valued or respected (an “end in themselves”). “Means to an end” is often contrasted with “end in itself.”
meme – An idea or practice that has certain qualities that cause it to be spread among several people. For example, religions are said to be memes. Memes are thought to undergo something like natural selection and the most successful memes could be said to survive for being the fittest. The fittest memes tend to have qualities that arouse people’s interest to spread the idea or practice to others, but they need not be beneficial to people.
metalogic – The study of logical systems. Logic is concerned with using logical systems to determine validity, and metalogic is concerned with determining the properties of entire logical systems. For example, a logical system can be “expressively complete.”
mental – See “psychological.”
mentalism – (1) In epistemology, mentalism is the view that justifications for beliefs must be some mental state of the person who has the belief. For example, justifications could take the form of propositions that are understood by a person. We could justify that the Sun will probably rise tomorrow by knowing that “the Sun has risen every day of human history; and if the Sun has risen every day of human history, then the Sun will probably rise tomorrow.” (2) In philosophy of mind, mentalism is the view that the mind is capable of interacting with the body and can cause the body to move in various ways. For example, a person who decides to raise her arm could raise her arm as a result.
mereology – The philosophical study of parts and wholes. Mereology concerns what the parts are of various things and how various parts and wholes relate. One mereological question is whether there are atoms (smallest indivisible parts) of all objects, or whether all objects are ultimately gunky (can be split into smaller pieces indefinitely). Another mereological question is whether or not an object is the same object if we replace all of its parts with functionally equivalent parts, such as if we replaced all the parts of a pirate ship with new but nearly identical parts.
meronomy – A type of hierarchy dealing with part-whole relationships. For example, protons are parts of molecules.
metaethical constructivism – The view that moral right and wrong are determined by what ideally rational agents would agree with (if they deliberated in an ideal fashion). This can be based on a “social contract theory”—we should accept the moral rules that would be provided by a social contract if it’s what rational people would endorse in ideal conditions.
metaethical intuitionism – The view that moral facts are not identical to nonmoral states and that we can know about moral facts through intuition. Moral intuitionists typically think that observation is insufficient to attain moral knowledge, so the intuition involved is a nonempirical form of intuition. Some philosophers object to moral intuitionism because they don’t think intuition is a reliable form of justification. See “intuition” for more information.
metaethics – Philosophical inquiry involving ethical concepts, the potential moral reality behind ethical concepts, how we can know anything about ethics (i.e. moral epistemology), and moral psychology. Metaethical questions include: “Is anything good?” and “What does ‘good’ mean?”
metalanguage – Language or symbols used to discuss language. Formal logical systems are metalanguages. See “formal logic” for more information.
metalinguistic variable – A synonym for “metavariable.”
metaphilosophy – Systematic examination and speculation concerning the nature of philosophy, and what philosophy ought to be. For example, Pierre Hadot argues that the term ‘philosophy’ ought to refer to a way of life involving an attempt to become more wise and virtuous rather than as expertise related to argumentation regarding various topics traditionally debated by philosophers.
metaphysical contingence – What might or might not exist concerning reality itself assuming that the laws of nature could have been different. Metaphysical contingence can be said to refer to “what is true in some possible worlds, but not others.” For example, the existence of water is plausibly metaphysically contingent—water might not exist if the laws of physics were different. See “metaphysical modality” for more information.
metaphysical impossibility – What can’t exist concerning reality itself assuming that the laws of nature could have been different. Metaphysically impossible statements refer to “what is not true in any possible world.” For example, it would be plausible that it’s metaphysically impossible for a person to exist and not exist at the same time. See “metaphysical modality” for more information.
metaphysical libertarianism – The view that we have free will and that free will is incompatible with determinism. Libertarianism requires free will to be something like Aristotle’s notion of a first cause or prime mover. The free decisions people make can cause things to happen, but nothing can cause our decisions.
metaphysical modality – A range of modal categories concerning reality as it exists assuming that the laws of nature could have been different. The range includes metaphysical contingence, possibility, necessity, and impossibility. Metaphysical modality can be described as the status of a statement or series of statements considering all possible worlds—a statement is metaphysically contingent if it’s true in some possible worlds and false in others, metaphysically possible if it is true in some possible worlds, metaphysically necessary if it is true in all possible worlds, and metaphysically impossible if it’s false in all possible worlds. For example, some philosophers argue that “water is H2O” is a metaphysically necessary statement. Assuming they are right, if we found a world with something exactly like water (tastes the same, boils at the same temperature, and nourishes the body) but it is made of some other chemical, then it would not really be water.
metaphysical naturalism – (1) The view that only natural stuff exists, which is a type of “physicalism.” Natural stuff is often taken to be stuff found in the physical world and natural facts are often assumed to be nonmoral and non-psychological. Naturalists reject the existence of non-natural facts (perhaps mathematical facts) as well as supernatural facts (perhaps facts related to gods or ghosts). (2) The view that the only stuff that exists is stuff described by science. Not all philosophers agree that the reality described by science is merely physical reality.
metaphysical necessity – What must be true or exist concerning reality itself assuming that the laws of nature could have been different. Metaphysically necessary statements refer to “what is true in every possible world.” For example, it is plausible that tautologies are metaphysically necessary and are true in every possible world. See “metaphysical modality” for more information.
metaphysical possibility – (1) The status of a statement being metaphysically possible (non-impossible) as opposed to a range of modal categories. This status of possibility refers to what could be contingently true or necessarily true about reality assuming that the laws of nature could have been different. A statement is metaphysically possible if it is “true in at least one possible world.” For example, it is metaphysically possible that H2O exists because there is at least one possible world where it exists—the one we exist in. (2) Sometimes “metaphysical possibility” is used as a synonym for “metaphysical modality.”
metaphysics – Philosophical study of reality. For example, some people think that reality as it’s described by physicists is ultimately the only real part of the universe.
metavariable – A symbol or variable that represents expressions within another language. For example, a logical system could have various either/or statements. “A or B” and “A and B, or C” are two different either/or statements within a logical system. We could then use metavariables to talk about all either/or statements that could be stated within the logical system. For example, “a or b” would represent all either/or statements of our logical language assuming that the lower-case letters are metavariables.
methodological naturalism – See “epistemic naturalism.”
middle term – The term of a categorical syllogism that doesn’t appear in the conclusion, but it appears in both premises. For example, consider the categorical syllogism, “All dogs are mammals; all mammals are animals; therefore, all dogs are animals.” In this case “mammals” is the middle term because it’s not in the conclusion, but it appears in both premises.
mind – The part of a being that has thoughts, qualia, semantics, and intentionality. The mind might not be an object in and of itself, but merely refer to the psychological activity within a being. The mind is often contrasted with the body, but some philosophers argue that the mind could be part of certain living bodies. For example, some philosophers believe that psychological activity could be identical to certain kinds of brain activity.
mind-body dualism – See “dualism.”
mind-body problem – The difficulty of knowing how the body and mind interact. Psychological states seem quite different from physical states, so philosophers speculate about how they both relate. Some philosophers argue that the mind can’t cause the body to do anything at all. Philosophers often think that the mind-body problem is a good reason to reject substance dualism insofar as it seems to imply that the mind and body can’t interact (insofar as they would then be totally different kinds of stuff). One solution to the mind-body problem is “emergentism.”
mind dependent – Something that can only exist if a mind exists (or if psychological phenomena exist). For example, money wouldn’t exist if no psychological phenomena existed.
minor premise – The premise of a categorical syllogism that contains the minor term (the first term found in the conclusion). For example, consider the following categorical syllogism—“All dogs are mammals. All mammals are animals. Therefore, all dogs are animals.” In this case the minor premise is “all dogs are mammals.”
minor term – The first term of the conclusion of a categorical syllogism. For example, if the conclusion is “all dogs are mammals,” then the minor term is “dogs.”
missing conclusion – A synonym for “unstated conclusion.”
missing premise – A synonym for “unstated premise.”
modal antirealism – The view that there are no modal facts—facts concerning necessity and possibility. For example, a modal antirealist would say that it’s not a fact that it’s possible for a person to jump over a small rock.
modal logic – Logic that uses modal quantifiers or quantifiers of some other non-classical type. For example, deontic quantifiers are sometimes used in modal logic. See “modal quantifier” for more information.
modal realism – The view that there are modal facts—facts concerning necessity and possibility. For example, it seems like a fact that it’s possible for a person to jump over a small rock; and it seems like a fact that it’s necessary that contradictions don’t exist. See “modality,” “concretism,” and “abstractism” for more information.
modal quantifier – Modal quantifiers allow us to state when a proposition is possible or necessary. The two main symbols are “□” for necessary and “◊” for possible. For example, “□p” would mean that p is necessary. (“p” is a proposition). “□p” could refer to the proposition, “Necessarily, dogs are mammals.”
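On a simple possible-worlds reading, “□p” is true when p holds in every world under consideration and “◊p” is true when p holds in at least one. The Python sketch below is a toy model of this idea (it ignores accessibility relations and other refinements of full modal semantics):

```python
# Toy possible-worlds model: each world assigns a truth value to proposition p.
worlds = {
    "w1": {"p": True},
    "w2": {"p": True},
    "w3": {"p": False},
}

def necessarily(prop):   # □p: true in every world of the model
    return all(world[prop] for world in worlds.values())

def possibly(prop):      # ◊p: true in at least one world of the model
    return any(world[prop] for world in worlds.values())

print(necessarily("p"))  # False — p fails in w3
print(possibly("p"))     # True  — p holds in w1 and w2
```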
modality – Concerning quantification (modal quantifiers), such as necessity and possibility. See “metaphysical modality,” “physical modality,” or “logical modality” for more information.
mode – (1) A nonessential property of a substance. For example, “spherical.” See “substance” for more information. (2) A form of something. For example, a stone statue can have the form of a human being. (3) A way of doing something. For example, traveling by car is a mode of transportation.
modus ponens – Latin for “the way that affirms by affirming.” It is used to refer to the following valid logical form—“If a, then b; a; therefore, b.” An argument with this form is “If dogs are mammals, then dogs are animals. Dogs are mammals. Therefore, dogs are animals.”
modus tollens – Latin for “the way that denies by denying.” It is used to refer to the valid logical form—“If a, then b; not-b; therefore, not a.” An argument with this form is “If dogs are lizards, then dogs are reptiles. Dogs are not reptiles. Therefore, dogs are not lizards.”
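Both modus ponens and modus tollens can be confirmed valid by brute force: an argument form is valid when no assignment of truth values makes every premise true while the conclusion is false. A minimal Python sketch (the helper `is_valid` is ours, written only for illustration):

```python
from itertools import product

def is_valid(premises, conclusion):
    """An argument form (over two atoms) is valid if no truth-value assignment
    makes all premises true while the conclusion is false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False
    return True

implies = lambda a, b: (not a) or b

# Modus ponens: if a then b; a; therefore b.
print(is_valid([implies, lambda a, b: a], lambda a, b: b))          # True
# Modus tollens: if a then b; not-b; therefore not-a.
print(is_valid([implies, lambda a, b: not b], lambda a, b: not a))  # True
```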
monad – (1) Literally means a “unit.” (2) According to the Pythagoreans, the “Monad” is the divine or the first thing to exist. (3) According to Platonists, the “Monad” is another name for “the Good.” (4) According to Gottfried Wilhelm Leibniz, monads are elementary particles (the building blocks of physical reality) that have no material existence of their own, and move according to an internal principle rather than from a physical interaction or external forces.
monadic predicate – A predicate that applies to only one thing at a time (it takes a single argument). For example, “x is mortal” could be stated as “Mx.” (“M” stands for “is mortal,” and “x” stands for anything.)
monadic predicate logic – A system of predicate logic that can express monadic predicates, but can’t express polyadic predicates. See “monadic predicate” and “predicate logic” for more information.
monarchy – A political system defined by the supreme rule of a king or queen.
monism – (1) The metaphysical position that reality ultimately reduces to one kind of thing. For example, materialists think that physical reality is the ultimate reality; and some idealists think that the mind is the ultimate reality. “Monism” is often contrasted with “dualism.” (2) A view that only one thing is ultimately relevant to a subject or issue.
monopoly – Exclusive power or control over something. For example, the government has a monopoly over violence and no one is generally allowed to use violence without government approval.
monotheism – The view that one god exists. “Monotheism” can be contrasted with “polytheism.”
Monte Carlo fallacy – A synonym for “gambler’s fallacy.”
moral absolutism – (1) The view that morally right and wrong acts do not depend on context. Something is always right or always wrong no matter what situation people are in. (2) In ordinary language, “moral absolutism” often refers to something similar to “moral realism” (as opposed to “moral relativism.”)
moral anti-realism – The rejection of moral realism. The belief that intrinsic values don’t exist, and that moral facts don’t exist. Some moral anti-realists think that there are moral truths, but such truths would not be based on facts about the world. Instead, they could be based on a social contract or convention.
moral atomism – See “moral generalism.”
moral constructivism – The view that moral truths consist in psychological facts, agreements, or some kind of an ideal based upon one or both of them. See “constructivism” or “meta-ethical constructivism” for more information.
moral epistemology – The systematic study of moral knowledge, rationality, and justification. For example, some philosophers argue that we can know if an action is right or wrong by considering intuitive or axiomatic moral principles.
moral externalism – See “motivational externalism.”
moral generalism – The view that there are abstract moral criteria (rules, duties, or values) that can be applied in every relevant situation to determine what we ought to do. Moral generalists often believe that analogies can be used to discover what makes an action right or wrong. For example, kicking and punching are both analogous insofar as we could use either to try to hurt people, and they both tend to be wrong insofar as hurting people is bad. “Moral generalism” is often contrasted with “moral particularism.”
moral holism – A synonym for “moral particularism.”
moral internalism – A synonym for “motivational internalism.”
moral intuitionism – See “meta-ethical intuitionism” or “Ross’s intuitionism.”
moral naturalism – The moral realist meta-ethical view that there are moral facts that are either identical to or emergent from nonmoral facts of some kind. For example, actions could be wrong insofar as they cause states of affairs with greater suffering and less happiness than the alternatives. Some philosophers reject moral naturalism based on the fact that we can easily question any proposed identity relation. Actions that we consider wrong are not necessarily those that cause more suffering and less happiness than the alternatives, and some people don’t think that’s all it means to say that an action is wrong.
moral objectivism – (1) The view that there are moral facts that are mind-independent. Moral objectivism excludes views of moral facts that depend on subjective states or conventions. This form of moral objectivism requires a rejection of “moral subjectivism” and “constructivism.” (2) A synonym for “moral realism.” (3) The view that there are true moral statements that are not true merely due to a convention or subjective state.
moral particularism – The view that there are no abstract moral criteria (rules, duties, or values) that can be applied in every relevant situation to determine what we ought to do. Instead, what we ought to do depends on the circumstance we are in without being determined by such things. Moral particularists sometimes agree that rules of thumb and analogies can be useful, but they don’t think we can discover rational criteria that determines what we ought to do in every situation. For example, kicking and punching are both analogous insofar as we could do either to try to hurt people, but the particularist will argue that it could be morally right to try to hurt people in some situations. Ross’s intuitionism is a plausible example of moral particularism. “Moral particularism” is often contrasted with “moral generalism.”
moral psychology – The philosophical study concerning the intersection between ethics and psychology, and primarily concerned with moral motivation. For example, some philosophers have argued that sympathy or empathy is needed to be consistently motivated to do the right thing.
moral rationalization – Arguments used in an attempt to justify, excuse, or downplay the importance of immoral behavior. Moral rationalizations may superficially appear to be genuinely good arguments, but they fail on close examination. For example, many people deny that they are responsible for the harm they cause when they were one person out of many who were needed to cause harm, such as certain corporate employees. They are likely to say they are like a “cog in a machine” or “just doing my job.” See “rationalization” for more information.
moral realism – The belief that moral facts exist, and that true moral propositions are true because of moral facts—not merely true because of a social contract, convention, popular opinion, or agreement. Many moral realists believe that intrinsic values exist. A moral realist could say, “Murder is wrong because human life has intrinsic value, not merely because you believe that it’s wrong.” Some philosophers argue that moral realism requires a rejection of “constructivism” and “subjectivism,” but that is a contentious issue.
moral responsibility – See “responsibility.”
moral relativism – See “cultural relativism.”
moral sense theory – See “moral sentimentalism.”
moral sentimentalism – A philosophical position that takes reasoning to be less important for moral judgment than our emotions, empathy, or sympathy. Moral sentimentalists tend to think that morality somehow concerns our emotions rather than facts.
moral theories – A synonym for “normative theories of ethics.”
moral worth – (1) The degree an action is morally praiseworthy or blameworthy. For example, a morally responsible person who commits murder has done something “morally blameworthy.” (2) According to Immanuel Kant, an action has moral worth (or perhaps moral relevance) when it is caused by a rational motivation that is guided by ethical principles.
morality – The field concerning values, right and wrong actions, virtue, and what we ought to do.
morally right – (1) Behavior that’s consistent with moral requirements. For example, it could be considered to be morally right to refuse to attack people who make us angry. What is “morally right” is often contrasted with what’s “morally wrong.” (2) Preferable moral behavior. For example, to give to charity.
morally wrong – Immoral. Behavior that’s inconsistent with moral requirements. For example, it is morally wrong to kill people just because they make you angry. What’s “morally wrong” is often contrasted with what’s “morally right.”
motivational externalism – The view that moral judgments are not intrinsically motivating. A person could think something is wrong yet feel no motivation to avoid doing it, even when in a relevant situation. For example, a sociopath might believe that harming other people is wrong but have no motivation against harming others. The opposite of “motivational externalism” is “motivational internalism.”
motivational internalism – The view that moral judgments are intrinsically motivating. A person can’t think something is right without having at least some motivation for doing that thing (when in a relevant situation). For example, we are likely to doubt the sincerity of a person who says that stealing is wrong but absolutely loves stealing and feels no motivation to refuse to steal. The opposite of “motivational internalism” is “motivational externalism.”
multiple realizability – When more than one state constitutes or brings about another state. For example, psychological states seem like they are multiply realizable—two different brain states can correlate with the same psychological state for different people. Perhaps both a sophisticated machine and a human brain could have the same psychological states.
Münchhausen Trilemma – A philosophical problem that presents three possible types of reasoning we could use to justify beliefs: (a) circular reasoning (beliefs must justify one another), (b) regressive reasoning (beliefs must all be justified by other arguments on and on forever) or (c) axiomatic reasoning (some beliefs are self-evident). It is often thought that knowledge consists of justified true beliefs that must be justified by an argument or axiom. The problem is that all three of the possible ways to justify beliefs that constitute knowledge seem to have problems. Circular arguments are fallacious, regressive reasoning can never be completed by people (who are finite beings), and what we think of as axioms can always be questioned and are often proven to be false at some point.
NAND – A synonym for the “Sheffer stroke.”
natural deduction – A method used to prove deductive argument forms to be valid. Natural deduction uses rules of inference and rules of equivalence. For example, consider the argument form “A and (B and C). Therefore, A.” (“A,” “B,” and “C” are three specific propositions.) The rule of inference known as “simplification” says we can take a premise with the form “a and b” to conclude “a.” (“a” and “b” stand for any two propositions.) We can use this rule on the premise “A and (B and C)” to conclude “A.” Therefore, that argument is logically valid.
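As a toy illustration of applying a rule of inference mechanically, the Python sketch below represents formulas as nested tuples (a convention we adopt only for this example) and applies simplification to the premise from the entry above:

```python
# Represent "A and (B and C)" as the nested tuple ("and", "A", ("and", "B", "C")).
def simplification(premise):
    """Rule of inference: from a premise of the form "a and b," conclude "a"."""
    assert premise[0] == "and", "simplification only applies to conjunctions"
    return premise[1]

premise = ("and", "A", ("and", "B", "C"))
print(simplification(premise))  # A — the conclusion of the example argument
```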
natural language – Language as it is spoken. Natural language includes both specialized language used by experts and ordinary language. “Natural language” can be contrasted with “formal languages.”
natural law – (1) A theory of ethics that states that moral standards are determined by facts of nature. Consider the following two examples: One, the fact that human beings need food to live could determine that it’s wrong to prevent people from eating food. Two, the fact that people have a natural desire to care for one another could be a good reason for them to do so. (2) A theory of ethics that states that there are objective moral standards. See “moral objectivism.” (3) A theory of law that states that laws should be created because of moral considerations. For example, murder should be illegal because it’s immoral.
natural theology – The systematic study of gods using secular philosophical argumentation. For example, the argument for God’s existence that states that the universe must have a first cause is part of natural theology. “Natural theology” is often contrasted with “revealed theology.”
naturalism – See “epistemic naturalism” or “metaphysical naturalism.”
naturalistic fallacy – (1) A fallacious form of argument that assumes that the fact that something is the case means it ought to be the case. For example, to argue that people should be selfish because they are selfish. (2) A fallacious form of argument that concludes that goodness is identical with some natural property or state of affairs just because goodness is always accompanied by the natural property or state of affairs. For example, to argue that pleasure and goodness are identical just based on the belief that pleasure always accompanies goodness. Some philosophers—such as moral identity theorists—argue that this type of argument isn’t necessarily fallacious.
necessary condition – Something that must be true for something else to be true is a necessary condition. For example, a necessary condition of being a dog is being a mammal. “Necessary conditions” can be contrasted with “sufficient conditions.”
necessary truth – Statements that have to be true no matter what states of affairs there are or could be. For example, “1+1=2” is a necessary truth. See “physical necessity,” “metaphysical necessity,” and “logical necessity” for more information.
necessity – The property of being unable to be any other way. See “physical necessity,” “metaphysical necessity,” and “logical necessity” for more information.
negation – The denial of a proposition; to negate a proposition is to say that it is not the case. The logical form of a negation is “not-a.” For example, “not all people are scientists” is the negation of “all people are scientists.”
negative argument – See “objection.”
negative categorical proposition – A categorical proposition that has the form “not all a are b” or “some a are not b.” For example, “some mammals are not dogs.”
negative conclusion – A categorical proposition used as a conclusion with the form “no a are b” or “some a are not b.” For example, “no dogs are reptiles.”
negative liberty – Freedom from constraints. To be in chains or imprisoned would be to lack negative liberty. Sometimes negative liberty is related to specific types of freedom. For example, we have the negative freedom to live insofar as others are not allowed to kill us. “Negative liberty” is often contrasted with “positive liberty.”
negative premise – A categorical proposition used as a premise with the form “no a are b” or “some a are not b.” For example, “some animals are not mammals.”
negative rights – Rights to be left alone. For example, freedom of speech is a negative right that means that no one can stop you from saying things (within the bounds of reason). Negative rights can be contrasted with “positive rights.”
neutral monism – The view that reality is ultimately neither mental nor physical although there could be mental and physical properties.
naive realism – The view that we perceive reality as it exists. See “realism” and “thing in itself” for more information.
nihilism – (1) The view that intrinsic values don’t exist or that moral facts don’t exist. See “moral anti-realism.” (2) A position that denies the existence of something. For example, an epistemic nihilist would deny that there are epistemic facts—that there are facts related to being reasonable, to having justifications, or to having knowledge (other than simply what is true by convention). (3) A synonym for “error theory.”
no true Scotsman – A fallacy committed when someone stacks the deck by defining terms in a convenient way in order to win an argument. It’s often used to try to win an argument by definition. For example, a person could say that all religious people are irrational, and we might then mention a religious person who is not irrational (perhaps Marsha). Someone could then claim that Marsha isn’t really religious because religious people are irrational by definition.
noëtic structure – Everything a person believes and the relationship between all of her beliefs. Also, noëtic structure involves how confident a person is that various statements could be true and the strength in which each belief influences other beliefs. For example, finding out that there is no external reality would have a dramatic effect on our noëtic structure insofar as we are very confident that an external reality exists and many of our beliefs depend on that belief. Perhaps hurting “other human beings” would no longer be immoral insofar as they don’t really exist anyway. See “worldview” for more information.
nominalism – (1) The view that there are no universals and only particulars exist. The names of various kinds of entities exist in name only, out of convenience, and our understanding of those kinds is based on generalization or abstraction. See “universal” for more information. (2) The view that Platonic Forms don’t exist.
non causa pro causa – Latin for “non-cause for cause” and also known as the “false cause” fallacy. This is a fallacy that is committed by arguments that conclude that a cause exists when the premises don’t sufficiently justify the conclusion. See “cum hoc ergo propter hoc,” “post hoc ergo propter hoc,” and “hasty generalization” for more information.
non-compound proposition – A sentence that can’t be broken into two or more propositions. For example, “Socrates is a man.” “Non-compound propositions” can be contrasted to “compound propositions.”
non-compound sentence – See “non-compound proposition.”
non-discursive concept – According to Immanuel Kant, it’s a concept known from “pure intuition” (known a priori without depending on experience). For example, space and time. According to Kant, we couldn’t even have experiences of the world without already interpreting our experiences in terms of space and time. “Non-discursive concepts” can be contrasted with “discursive concepts.”
non-discursive reasoning – See “non-inferential reasoning.”
non-inferential reasoning – Intuitive or contemplative reasoning that does not involve argumentation (conclusions derived from premises). Non-inferential reasoning can require contemplation in order to discover what beliefs are self-evident. For example, Aristotle believes that we can know the axioms of logic through non-inferential reasoning. “Non-inferential reasoning” is often contrasted with “inferential reasoning.”
non sequitur – Latin for “it does not follow.” (1) A statement that is made that’s not related to the preceding conversation. (2) A logically invalid argument, i.e. the conclusion doesn’t follow from the premises.
noncognitivism – (1) The view that some domain lacks true and false judgments. The rejection of cognitivism. For example, epistemological non-cognitivism is the view that judgments concerning rationality, justification, and knowledge are neither true nor false. For example, the judgment that we know that there are laws of nature might merely express our approval of such a belief. (2) Metaethical non-cognitivism is the anti-realist view that states that moral judgments are neither true nor false. For example, emotivists believe that moral judgments are expressions of our emotions. Saying, “stealing is wrong,” might be expressing one’s frustration concerning stealing without saying it is literally true.
nonfactual truth – Statements that are true or false, but are not meant to refer to reality or facts. For example, it is true that unicorns are mammals and that Sherlock Holmes is a detective who lives at 221B Baker Street, but it’s only true within a fictional domain—it’s not true about factual reality. It could be argued that “all bachelors are unmarried” by definition, and such a truth could also be nonfactual. Moreover, the existence of money could also be nonfactual insofar as it depends on our attitudes and customs rather than to facts that directly relate to reality.
nonmoral – Something that is neither morally right nor morally wrong. For example, mathematics is nonmoral, and a person who scratches an itch is acting nonmorally. “Nonmoral” can be contrasted with “amoral.”
nonrational evidence – (1) Evidence that is not related to induction or deductive reasoning. For example, intuitive evidence or self-evidence. See “non-inferential reasoning” for more information.
nonrational persuasion – Fallacious and manipulative forms of persuasion. Nonrational persuasion does not always take the form of an argument, and it often appeals to our biases. For example, the news could continually have stories about how our enemies harm innocent people to give us the impression that our enemies are evil. This is similar to the “one-sidedness fallacy,” but no actual argument needs to be presented. People are likely to jump to conclusions on their own.
NOR – A synonym for the “Pierce stroke.”
norm – A principle, imperative, standard, or prescription concerning preferable or required behavior.
normative – A category that is primarily concerned with standards, ideals, or guiding principles. Normative constraints are often thought to be action-guiding or motivational. “Normative” is often equated with “prescriptive.”
normative theories of ethics – Moral theories that tell us how we can determine the difference between right and wrong actions, determine what we ought to do, or determine what we ought to be. Normative theories of ethics are also concerned with ideals, values, and virtues. Normative theories of ethics are central to “applied ethics.”
normalization – Values and behavior are normalized when they become stable within a group of people, generally by excluding the alternatives. Normalization is likely to occur when the interests of various people converge and the values and behavior in question are mutually beneficial for the people. However, normalization can harm other people (especially a minority) that does not mutually benefit along with the others. For example, a minority could be used as a servant class because it benefits the majority, but it would give the minorities a disadvantage insofar as it limits their opportunities.
noumenal world – The world as it really exists in and of itself. Our understanding of reality is often thought to be corrupted by flawed interpretation and perception. The “noumenal world” can be contrasted with the “phenomenal world.”
noumenon – An object or reality that exists separately from experience and can’t be known through the senses. “Plato’s Forms” are a possible example of noumenon.
nous – (1) Greek for “common sense, understanding, or intellect.” It refers to our capacity to reason. (2) According to Neoplatonists, “Nous” is the mind or intellect of “the Good.”
O-type proposition – A proposition with the form “some a are not-b.” For example, “some cats are not female.”
objective morality – See “moral objectivism.”
obligation – A requirement of rationality, ethics, or some other normative domain. A plausible example is that we are obligated not to kill other people just because they make us angry. See “duty” for more information.
obligatory – Beliefs that are rationally required, actions that are morally required, or a requirement of some other normative domain. “Obligatory” requirements are often contrasted with the “supererogatory” and “permissible” categories.
objection – An argument that opposes a belief or another argument. They’re meant to give us a reason to disagree with the belief or argument. For example, we could object to the belief that it’s okay to kill others who make us angry by saying, “You don’t want others to kill you just because you make them angry, so you shouldn’t kill them just because they make you angry either.” “Objections” are often contrasted with “positive arguments.”
objective certainty – A synonym for “epistemic certainty.”
objective ought – What a person should do based on few (or no) constraints on the person’s knowledge. What we objectively ought to do is often thought to be based on the actual effects our behavior has. For example, utilitarians often say that we ought to do whatever maximizes happiness, even if we have no idea what that is. A person might try to help others by sharing food and accidentally give others food poisoning, and utilitarians might say that the person objectively ought not to have done so, even though the person might have done what was likely to help others from her point of view. “Objective ought” can be contrasted with “subjective ought.”
objective reason – A synonym for “agent-neutral reason.”
objective right and wrong – What is right or wrong considering few (or no) constraints of a person’s knowledge. What is considered to be objectively right or wrong is often thought to be based on the actual effects our behavior has. For example, if you win the lottery, then there’s a sense that buying a lottery ticket was the “objectively right” thing to do, even though you had no reason to expect to win. “Objective right and wrong” can be contrasted with “subjective right and wrong.”
objectivity – See “ontological objectivity” or “epistemic objectivity.”
obverse – A categorical proposition is the obverse of another categorical proposition when it has the opposite quality (affirmative or negative) and a negated second term. There are four different forms of obversion: (a) The obverse of “all a are b” is “no a are non-b.” (b) The obverse of “no a are b” is “all a are non-b.” (c) The obverse of “some a are b” is “some a are not non-b.” (d) The obverse of “some a are not b” is “some a are non-b.” It is always valid to infer the obverse of a categorical proposition because the two propositions mean the same thing.
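Because obversion follows the same pattern in all four cases (switch the quality of the proposition and negate the second term), it can be written as a simple lookup. A minimal Python sketch (the form strings are our own shorthand):

```python
# Obversion: change the quality of the proposition and negate the predicate term.
OBVERSE = {
    "all a are b":      "no a are non-b",
    "no a are b":       "all a are non-b",
    "some a are b":     "some a are not non-b",
    "some a are not b": "some a are non-b",
}

def obvert(form):
    return OBVERSE[form]

print(obvert("all a are b"))  # "no a are non-b"
```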
obversion – To infer the obverse of a categorical proposition. See “obverse” for more information.
Occam’s razor – The view that we shouldn’t accept an otherwise equally good explanation if it is more complicated, often stated as “we shouldn’t multiply entities beyond necessity.” Occam’s razor could be taken to be a reason to believe an explanation that’s simpler than the alternatives, but it is not an overriding reason to believe an explanation. For example, sometimes ghosts might be an explanation for why objects move around in a house, but Occam’s razor might be a good reason for us to reject the existence of ghosts anyway.
oligarchy – A political system where the rulers are wealthy people.
omnibenevolent – All-good.
omnipotent – All-powerful.
omnipresent – Existing everywhere.
omniscient – All-knowing.
offensive – A synonym for “suberogation.”
omissible – Not obligatory, but permissible. For example, jumping up and down is generally considered to be permissible and non-obligatory. “Omissible” is not a synonym of “permissible” because all obligatory actions are also taken to be permissible. “Omissible” can be contrasted with “obligatory” and “permissible.”
The One – A Neoplatonist term for “the Good.”
one-sidedness – (1) A fallacy committed by an argument that presents reasons to believe something while ignoring or marginalizing the reasons against believing it. For example, a person selling a vacuum cleaner could tell us how it can pick metal objects off the floor, but omit mentioning that it tends to break after being used a few times. “One-sidedness” is also known as “selective evidence” and highly related to “cherry picking” and “quoting out of context.” (2) To be incapable or unwilling to see things from more than one reasonable point of view.
ontological naturalism – See “metaphysical naturalism.”
ontological objectivity – Refers to existence that does not depend on being experienced. Rocks (and even minds themselves, considered as existing things) are ontologically objective, but what exists only as part of a mind (e.g. thoughts and feelings) is not. In this sense rocks are objective, but pain is not. “Ontological objectivity” can be contrasted with “ontological subjectivity.”
ontological randomness – When something happens that could not possibly be reliably predicted because it could have happened otherwise. If anything ontologically random happens, then determinism is false—there are events that occur that are not sufficiently caused to happen due to the laws of nature and state of affairs. Ontological randomness can be contrasted with “determined” events and the acts of “free will.” It is generally thought that acts of free will are not random (and perhaps they’re not determined either). Imagine that you time travel to the past without changing anything, and all people make the same decisions, but a different person won the lottery as a result. That would indicate that there are elements of randomness that affect reality. “Ontological randomness” can be contrasted with “epistemic randomness.”
ontological subjectivity – Mental existence—anything that exists as part of our mind, such as thoughts and feelings. “Ontological subjectivity” can be contrasted with “ontological objectivity.”
ontology – The study of “being” as such—what is the case or the ultimate part of reality. It’s sometimes used to be a synonym for “metaphysics.”
operands – The input involved with an operation or predicate. For example, “Gxy” is a statement of predicate logic with two operands—two things being predicated. “G” can stand for “jumps over.” In that case “Gxy” means “x jumps over y,” and “x” and “y” are each an operand. See “predicate logic” and “operation” for more information.
operation – Something with variables, input, and output. For example, addition is an operation with two numbers as input, and another number as the output. You can input 1 and 3 and the output is 4. (1+3=4.) Statements of predicate logic are also said to involve an operation insofar as predicates are taken to be operations. For example, “Fj” is a statement of predicate logic that could also be taken to be an operation. In this case “F” could stand for “is intelligent” and “j” can stand for “Jennifer.” In that case the input is “j” and the output is “Jennifer is intelligent.” See “predicate logic” and “operands” for more information.
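The input/output picture described here can be made concrete with a short Python sketch; names such as `is_intelligent` are purely illustrative and not part of any formal system:

```python
# An operation takes operands as input and yields an output.
def add(x, y):                # two operands in, one number out: 1 + 3 = 4
    return x + y

def is_intelligent(name):     # a predicate treated as an operation, like "Fj"
    return f"{name} is intelligent"

print(add(1, 3))                   # 4
print(is_intelligent("Jennifer"))  # Jennifer is intelligent
```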
ordinary language – Language as it is used by people in everyday life. Words in ordinary language are generally defined in terms of “common usage” (i.e. how people tend to use the word). “Ordinary language” can be contrasted with “formal language” and “jargon.”
original position – A situation within John Rawls’s theory of justice that sets ideal conditions for deliberation concerning the production of a social contract. Rawls argues that rational principles for justice are decided within the original position under a “veil of ignorance.” He argues that a result of the people’s risk-aversion would be the adoption of the “maximin rule.” (Not to be confused with a maximum rule.)
ostensible meaning – The surface meaning of a speech act without any involvement of “mind reading” or psychological understanding. For example, someone might say, “I love chocolate” in a sarcastic tone. The ostensible meaning is stated, but the intended meaning is the opposite.
ought – Equivalent with “should.” What ought to exist is what should exist, what ought to be done is what should be done. It’s better to do what ought to be done. Many philosophers argue that if something has intrinsic value, then we ought to promote that value. For example, we ought to give to charity when it will help people who would otherwise suffer. What ought to be the case is often contrasted with what is the case—what state of affairs actually exists. See the “is/ought gap” for more information.
ousia – Greek for “being,” “substance,” or “essence.”
outer sense – Sense perception used to experience the external world, such as through the five senses (touch, taste, sound, smell, and sight). See “perception” for more information. “Outer sense” can be contrasted with “inner sense.”
overconfidence effect – The cognitive bias defined by the tendency of people to systematically have a false sense of certainty. For example, we systematically think our answers on tests are more likely true than they really are. We might think every answer we give on a test has a 90% chance of being correct when we actually only got 50% of the correct answers.
overdetermination – When there’s more than one sufficient cause for a state of affairs. For example, one grenade explosion would be sufficient to kill a person, and two simultaneous grenade explosions could overdetermine someone’s death. Overdetermination is a potential problem with some interactionist theories of the mind-body interaction—If physical reality can sufficiently determine how the body will move, then the idea of the mind also causing the body to move would be an example of overdetermination.
overman – A superior kind of human being. Friedrich Nietzsche argues that we should try to become or create an “overman”—a person who will create new values and be life-affirming to the point of desiring an “eternal return.”
overprecision – A fallacy committed by an argument that requires precise information for the premises in order to reach the conclusion, and it uses misleadingly precise premises in order to do so. For example, a person was told that a frozen mammoth was five thousand years old five years ago, so she might insist that the frozen mammoth is now “5,005 years old.”
pantheism – The view that god is the universe.
panpsychism – The view that all physical things or particles have a psychological element.
paradigm – A comprehensive understanding of a domain or a comprehensive worldview. There could theoretically be two paradigms that proponents claim to be “more justified” than the other because each paradigm could have different principles that determine what counts as good justification. Paradigms are thought to influence how we interpret our experiences and how we will respond to our observations.
paradox – (1) An apparent contradiction that challenges our assumptions. (2) A statement or group of statements that leads to a contradiction or defies logic. A paradox could contain a statement that can’t be true or false because they both lead to a contradiction. Consider the following sentence: “This sentence is false.” If it’s false, then it’s true. If it’s true, then it’s false. There’s a contradiction either way.
parsimony – Metaphysical simplicity, or having few entities in a metaphysical system. See “Occam’s razor” for more information.
particular – Actual concrete objects and things in the world. For example, a rock, a dog, and a person.
partners in crime – A synonym for “partners in guilt.”
partners in guilt – A defense of a theory or belief against an objection that points out that the alternatives face the exact same objection. Sometimes one theory or hypothesis can’t be rejected on some basis because the alternative theories or beliefs have exactly the same flaws. For example, Einstein’s theory of physics faces certain anomalies, such as dark energy; but all alternative theories of physics we know about have even more anomalies.
partonomy – A synonym for “meronomy.”
per se – A Latin phrase meaning “in itself” or “without qualification.” People generally use the phrase “per se” to refer to what something is not. (e.g. “The President is not a communist per se, but he does want to increase taxes.”)
perception – Experiences caused by the five senses—sight, sound, touch, taste, and smell. Perception causes unified experiences that we interpret as giving us information about the world.
perdurance theory – See “perdurantism.”
perdurantism – The view of persistence and identity that states that a persisting thing only partly exists at any given moment, and its entire existence must be understood in terms of its existence at every single moment that it exists. Perdurantism states that each persisting thing has distinct temporal parts throughout its existence in addition to having spatial parts. See “temporal parts” for more information. “Perdurantism” is often contrasted with “endurantism.”
perdure – For a single thing to only partly exist at any given moment in time, and for its full existence to require a description of it at every single moment in time that it exists. How a thing can persist and be the same thing according to “perdurantism.” See “perdurantism” for more information.
perfect duty – An obligation that requires certain behavior with no room for personal choice. For example, we have a perfect duty not to kill people just because they make us angry. “Perfect duties” are contrasted with “imperfect duties.”
permissible – Beliefs that are compatible with epistemic normative requirements, or actions that are compatible with moral requirements. They are allowed or optional, but not required. “Permissible” actions and beliefs are often contrasted with “obligatory” and “impermissible” ones.
perspectivism – The view that there are multiple ways to reasonably interpret our experiences (based on one’s perspective), but some perspectives can be more justified than others.
perlocutionary act – A speech act with an intended consequence or function. For example, the perlocutionary act of asking for salt at a dinner table is to get someone to pass some salt.
permitted – See “permissible.”
person – A rational being similar in key ways to being a human being. For example, Spock from Star Trek would be a person, even though he is not a human being. Some philosophers argue that dolphins and great apes are also persons.
persuasion – Attempts to convince people that something is true.
petitio principii – Latin for “assuming the initial point.” Refers to the “begging the question fallacy.” (See “begging the question.”)
phenomenon – An observation of an object or state of affairs. For example, seeing a light turn on at a neighbor’s house.
phenomenal world – The experience we have of the world, or the way we understand the world based on our experiences.
phenomenalism – The view that statements about external objects can be reduced to statements about actual or possible experiences or sensations, so that objects are not known as anything over and above the experiences they would produce.
phenomenology – The philosophical study of our mental activity and first person experiences. Phenomenology can help us know what it’s like to be a person or have certain experiences.
philodoxer – “A lover of opinion.” Philodoxers love their own opinion more than the truth. They are contrasted with “philosophers” who love the truth more than their own opinion. Philodoxers are more closed-minded than philosophers.
philosopher – (1) “A lover of wisdom.” Used as a contrast to “sophists” who claim they are wise and “philodoxers” who love their own opinion more than the truth. (2) A lover of learning. Someone who spends a great deal of time to learn and correct her beliefs. (3) A professional who is highly competent regarding philosophy, and spends a lot of time teaching philosophy or creating philosophical works.
philosophy – (1) Literally means “love of wisdom.” The quest to attain knowledge and improve ourselves. It generally refers to various domains of study that involve systematic attempts to attain greater understanding, other than those domains that have been designated to mathematicians or scientists. Arguments and theories concerning the proper domain of philosophy are known as “meta-philosophy.” (2) In ordinary language, ‘philosophy’ refers to opinions regarding what’s important in life or how one should conduct oneself. For example, a person might say, “A penny saved is a penny earned—that’s my philosophy.”
philosophical logic – Logical domains with a strong connection to philosophical issues, such as modal logic, epistemic logic, temporal logic, and deontic logic. “Philosophical logic” can be contrasted with “philosophy of logic.”
philosophy of logic – A philosophical domain concerned with issues of logic. For example, questions involving the role of logic, the nature of logic, and the nature of critical thinking. “Philosophy of logic” can be contrasted with “philosophical logic.”
phronesis – Greek for “practical wisdom” or “prudence.” Aristotle uses it to refer to the wisdom involved in knowing how to live and act well. See “practical wisdom” for more information.
physical – Objects that are causally related to reality as it’s described by physicists as consisting of particles and energy. For example, tables, chairs, animal bodies, and rocks.
physical anti-realism – The view that physical reality (i.e. the natural world) does not really exist (or is less real than some ultimate reality), but that our experiences of the world could still be useful to us. See “idealism” for more information. “Physical anti-realism” can be contrasted with “physical realism.”
physical contingence – The status of propositions that describe a physical state of affairs that is compatible with the laws of nature. For example, consider the physically contingent proposition—“Water can boil.” This statement describes something that’s physically contingent because it describes a situation that we know to be compatible with the laws of nature. It sometimes happens, but it does not always happen. See “physical modality” for more information.
physical impossibility – The status of propositions that describe a physical situation or entity that is incompatible with the laws of nature. For example, consider the plausibly physically impossible statement—“Human beings can jump to the moon.” This is a plausible example of a physically impossible statement because we have reason to believe that the laws of nature and physical abilities of human beings are incompatible with a human being jumping to the moon. See “physical modality” for more information.
physical modality – The status of propositions that describe physical situations or entities given the laws of nature—physically contingent statements are true if they are compatible with the laws of nature, physically necessary statements are those that describe situations or entities that always exist because of the laws of nature, and physically impossible statements are those that describe situations or entities that can’t exist because of the laws of nature. For example, scientists say it’s physically impossible to go faster than the speed of light while in a material form.
physical necessity – The status of propositions that describe a physical situation or entity that must happen because of the laws of nature. For example, consider the plausibly physically necessary statement—“Objects will fall when dropped while ten feet from the surface of the Earth.” This is a plausible example of a physically necessary statement because it describes something that seems fully determined to happen given the laws of nature. See “physical modality” for more information.
physical possibility – (1) The status of a proposition that could be physically contingent or physically necessary. For example, it’s physically possible for a human to jump over a small rock or for light to move at 299,792,458 meters per second. (2) Sometimes “physical possibility” is a synonym for “physical modality.”
physical realism – The view that physical reality (i.e. the natural world) exists. Physical realists deny that there is a reality that is more real than physical reality, that physical reality exists in the mind of God, etc. “Physical realism” can be contrasted with “physical anti-realism.”
physicalism – The view that nothing exists other than physical reality, but not necessarily restricted to the reality as described by physicists. Some physicalists think that chemistry, biology, and psychology describe reality as well, even though physicists don’t study these things. “Physicalism” is often taken to be a synonym for “materialism.”
Peirce stroke – A symbol used in formal logic to mean “neither this-nor-that” or “not-a and not-b.” (“a” and “b” are any two propositions.) The symbol used is “↓.” For example, “all dogs are lizards ↓ all dogs are fish” means that “it’s not the case that all dogs are lizards, and it’s not the case that all dogs are fish.” See “formal logic” and “logical connective” for more information.
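As an illustration, here is a minimal Python sketch (the function name is made up for this example and is not from any logic library) showing that the Peirce stroke behaves as the “neither-nor” truth function described above:

```python
# A minimal sketch of the Peirce stroke (NOR) as a truth function.
# The function name is illustrative, not part of any standard library.

def peirce_stroke(a: bool, b: bool) -> bool:
    """True only when neither a nor b is true (i.e., not-a and not-b)."""
    return (not a) and (not b)

# Print the full truth table.
for a in (True, False):
    for b in (True, False):
        print(a, b, peirce_stroke(a, b))
# Only the row where both a and b are False yields True.
```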
Platonic Forms – A non-natural, eternal, unchanging part of reality. Plato viewed this part of reality as consisting of “ideals.” We could find out the ideal right, ideal justice, ideal good, and so on. These ideals are the part of reality we refer to when we make moral assertions. Some philosophers accept that “abstract entities” of some sort exist (perhaps for numbers) without wanting to accept all of the traditional views regarding Platonic Forms. (These philosophers can be called “platonists” with a lower-case “p.”)
platonism – The view that Platonic forms or abstract entities exist. See “Platonic Forms” for more information.
plausible – A statement is plausible if it is likely true or highly intuitive given the current evidence.
pluralism – (1) The view that some topic or issue requires multiple irreducible things. (2) The metaphysical view that reality can’t be ultimately reduced to just one thing. Perhaps mind, matter, and abstract entities are all separate irreducible parts of reality.
political libertarianism – The view that we should have limited government, very limited or no government welfare, very little to no government regulation of the economy, and free market capitalism. Libertarians sometimes say there are ultimately only two moral principles: (a) the principle of non-injury and (b) the right to property. We could then know what is right or wrong in every situation using these two principles.
politics – The domain concerned with laws, power over the public sphere, and governments.
polyadic predicates – Predicates that apply to two or more things. For example, “John is taller than Jen” could be expressed as “Tab.” (In this case “T” stands for “is taller than,” “a” stands for “John,” and “b” stands for “Jen.”)
polytheism – The view that more than one god exists.
positive argument – A series of statements meant to support a conclusion rather than oppose a belief or argument. For example, “We should care about others because they can be happy or suffer” is a positive argument. “Positive arguments” are often contrasted with “objections” (i.e. “negative arguments”).
positive categorical proposition – A categorical proposition with the form “all a are b” or “some a are b.” For example, “some mortals are men.”
positive conclusion – A synonym for “affirmative conclusion.”
positive liberty – The power and resources necessary to have the freedom to do certain things. For example, we have the positive liberty to live if we have the necessary food and medical care. Positive freedom could require internal traits, such as critical thinking skills and absence of addiction. “Positive liberty” is often contrasted with “negative liberty.”
positive premise – A synonym for “affirmative premise.”
positive rights – Rights to various goods or services. For example, some philosophers argue that the right to free education is an example of a positive right. Positive rights are often contrasted with “negative rights.”
possibility – (1) A modal domain involving what is contingent, possible, necessary, and impossible. See “metaphysical modality,” “physical modality,” or “logical modality” for more information. (2) The property of not being impossible. For example, it is physically possible for a human being to jump a foot off the ground, but it’s physically impossible for a human being to jump to the moon.
possible world – A concept that contrasts what actually exists with what could exist, assuming that there’s a sense in which the laws of physics could have been different (or not exist at all). The concept of possible worlds is used in order to help us understand the difference between metaphysically contingent, metaphysically possible, metaphysically necessary, and metaphysically impossible propositions. We can say metaphysically contingent statements are true in some possible worlds and not others, metaphysically possible statements are true in at least one possible world, metaphysically necessary statements are true in all possible worlds, and metaphysically impossible statements are never true in a possible world. For example, many philosophers argue that the laws of logic exist in every possible world, and they would be metaphysically necessary as a result. See “metaphysical modality” for more information.
post hoc ergo propter hoc – Latin for “after this, therefore because of this.” A logical fallacy that is committed when an argument concludes that something causes something else just because the first thing happened before the second thing (or always happens before the other thing). For example, we shouldn’t conclude that breathing causes people to die just because people always breathe before they die. Also related is the “cum hoc ergo propter hoc” fallacy.
post-hoc justification – A justification given for a belief that we already have. Although we often have a hard time explaining why our beliefs are justified, even if we know they clearly are, post-hoc justifications generally do not explain why we actually have a belief. As a result, they often exist to persuade or even manipulate others into sharing our belief. Post-hoc justifications are often motivated by bias rather than a genuine interest in the truth, and they are often rationalizations rather than genuinely good arguments. For example, people have been shown to be generally repulsed by consensual incest and they have an intuition that consensual incest is wrong, but most of the arguments they give against consensual incest are rationalizations. See “rationalization” for more information.
postmodernism – (1) A philosophical domain that is often characterized by the attempt to transcend labels, skepticism towards philosophical argumentation, and caution concerning the potential hazardous effects philosophy can have on everyday life. (2) In ordinary language, ‘postmodernism’ refers to a perspective associated with views that “everyone’s beliefs are equal,” that “effective philosophical reasoning is impossible,” and that “morality is relative.”
poststructuralism – The view that deconstruction is an important way to understand literary works, and that we should be skeptical of the idea that we can fully understand the meaning behind a literary work. Poststructuralists often claim that the “signifier” and “signified” are dependent on a culture or convention, and that understanding language requires a reference to parts of the language. For example, we can define words in terms of other words, so we might have to know hundreds of words of a language before we can use it to communicate well. Poststructuralists sometimes agree that meaning can be best understood as what words don’t refer to and how they differ from other words within a language.
postulate – An axiom or belief assumed for the sake of argument. For example, many arguments about the world require us to assume that an external world exists.
practical – Issues that concern real-life consequence rather than abstractions that can’t make a difference to our lives. Practical philosophy often concerns how we should live and what decisions we should make. Ethics is the most practical philosophical domain.
practical rationality – (1) Proper thinking involving means-end reasoning or ethical reasoning. “Practical rationality” determines how we ought to do “practical reasoning.” For example, a person who jumps up and down to fall asleep is likely being irrational in this sense. However, a person who lays down in a bed and closes her eyes in order to go to sleep is likely being rational. (2) According to some philosophers, such as Immanuel Kant, practical rationality covers ethical reasoning in addition to means-end reasoning. A moral action is more rational than the alternatives, and immoral actions could be said to be “irrational.”
practical reason – (1) Means-end reasoning. Reasoning that we use in order to know how to effectively accomplish our goals. For example, to eat food to alleviate hunger would seem to be an appropriate way to use practical reasoning. (2) Ethical reasoning. Reasoning that determines what actions we should do, all things considered. For example, it could be considered appropriate to decide to give to a charity, but it could not be considered appropriate to decide to murder people.
practical wisdom – Knowledge about how to achieve goals and live a good life. Aristotle contrasted “practical wisdom” with “theoretical wisdom.”
pragmatic – Things that concern what is useful or practical.
pragmatic theory of justification – The view that good justifications are those that are useful to us. What we should consider to be a “justified belief” is based on what helps us make predictions or live a better life.
pragmatic theory of truth – The view that what is true is what is useful to a person. What we should consider to be “true beliefs” is based on what helps us make predictions or helps us live better lives in some other way.
pragmatism – See “pragmatic theory of truth” or “pragmatic theory of justification.”
praiseworthy – Actions done by morally responsible people that are better than what we would reasonably expect, or that achieve more good than is morally required. See “responsibility” and “supererogatory” for more information. “Praiseworthy” actions can be contrasted with “blameworthy” ones.
predestination – (1) The view that a deity determines everything that happens, usually thought to be “for the best.” Predestination in this sense is thought to logically imply that determinism is true, and it has inspired debates over “free will” for that reason. See “divine providence” for more information. (2) In ordinary language, predestination is often used as a synonym for “fate” or “destiny.”
predetermination – See “predestination.”
predicate calculus – A synonym for “predicate logic.”
predicate constants – Constants in predicate logic are symbols that refer to specific things that are predicated. The lower case letters “a, b, [and] c” are commonly used. For example, consider the statement, “George Washington is an animal.” In this case we can write this statement in predicate logic as “Ag” where “A” means “is an animal” and “g” stands for “George Washington.” In this case “g” is a constant because it refers to something specific that’s being predicated. Sometimes variables are used instead of constants. For example, “Ax” means “x is an animal” and “x” can be anything. See “predicate logic,” “predicate variables,” and “predicate letters” for more information. “Predicate constants” can be contrasted with “logical constants.”
predicate letters – Capital letters used in predicate logic as symbols for predicates; the letters generally used are F, G, and H. For example, “F” can stand for “is tall.” In that case “Fx” is a statement that means “x is tall.” See “predicate logic,” “relation letters,” “predicate constants,” and “predicate variables” for more information.
predicate logic – Formal logical systems that include quantification. For example, predicate logic allows us to validly infer that “there is at least one thing that is both a dog and a mammal because all dogs are mammals, and there is at least one thing that is a dog.” This argument could not be validly inferred using propositional logic. See “quantifier” for more information. “Predicate logic” can be contrasted with “propositional logic.”
predicate term – A synonym for “major term.”
predicate variables – Variables in predicate logic are symbols that stand for things that are predicated without anything in particular being mentioned. The lower-case letters “x, y, [and] z” are usually used. For example, consider the statement “it is an animal.” In this case “it” is something, but nothing in particular. It could be Lassie the dog, Socrates, or something else. We could write “it is an animal” in predicate logic with a variable as “Fx.” In this case “F” means “is an animal” and “x” is the variable. See “predicate logic,” “predicate letters” and “predicate constants” for more information. “Predicate variables” can be contrasted with “propositional variables.”
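To make the last few predicate-logic entries concrete, here is a toy Python sketch (the domain and function names are assumed purely for illustration) that treats predicate letters as functions, constants as particular members of a small domain, and the quantifiers “all” and “exists” as checks over that domain, enough to mirror the inference mentioned under “predicate logic”:

```python
# A toy sketch of predicate-logic ideas; every name here is illustrative.
domain = ["Lassie", "Socrates", "a rock"]          # the things we can talk about

def is_dog(x):                                      # predicate letter F: "is a dog"
    return x == "Lassie"

def is_mammal(x):                                   # predicate letter G: "is a mammal"
    return x in ("Lassie", "Socrates")

# "All dogs are mammals" -- roughly: for every x, if Fx then Gx.
all_dogs_are_mammals = all((not is_dog(x)) or is_mammal(x) for x in domain)

# "There is at least one dog" -- roughly: there exists an x such that Fx.
some_dog_exists = any(is_dog(x) for x in domain)

# Then "something is both a dog and a mammal" follows, as the entry says.
dog_and_mammal = any(is_dog(x) and is_mammal(x) for x in domain)

print(all_dogs_are_mammals, some_dog_exists, dog_and_mammal)  # True True True
```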
preference – To have a greater desire for something or to value something more than something else. For example, many people have a preference to live rather than to die.
preferable – (1) What a person would rather have than something else, or what a person desires or values more than something else. For example, many people agree that it’s preferable to experience pleasure than pain. (2) A better or more valuable option. For example, many people believe that pleasure has a positive value and would think it’s preferable for people to experience more rather than less pleasure.
premise – A statement used in an argument that is used in order to give a reason to believe a conclusion. For example, consider the following argument—“We generally shouldn’t hurt people because it’s bad for people to suffer.” In this case the premise is “it’s bad for people to suffer.” Keep in mind that many arguments have more than one premise. “Premises” are often contrasted with “conclusions.”
premise indicator – A term used to help people identify that a premise is being stated. For example, “because” or “considering that.” See “premise” for more information.
prescriptive – What is advisable or preferable. For example, “you shouldn’t steal from people” is a prescriptive statement. “Prescriptive” is often equated with “normative” and it can be contrasted with “descriptive.”
prescriptivism – The view that moral judgments are not true or false (and don’t refer to facts or real-world properties). Instead, they refer to “prescriptions” (imperatives or commands). For example, the judgment that “stealing is wrong” might actually mean “don’t steal!” Prescriptivism is an “anti-realist noncognitivist meta-ethical” theory.
prima facie – Latin for “at first face” or “at first sight.” ‘Prima facie’ refers to something that counts as a consideration in favor of something but can be overridden. For example, prima facie evidence is a reason to believe something is true, but there can be better reasons to believe it’s false. The fact that Copernicus’s theory of the Sun being the center of the solar system was simpler than the alternative of the Earth being at the center was prima facie evidence that his theory better described the world.
primary social goods – Goods that everyone values. John Rawls suggests that liberty, opportunity, income, wealth, and sources of self-respect should be included in this category.
primary qualities – The physical qualities an object has, such as extension, shape, size, and motion. John Locke thought that primary qualities could be known to be the qualities an object actually has rather than the subjective way we experience the object. John McDowell argued that primary qualities can be described in ways other than the way we perceive them. For example, we can describe the shape of an object using mathematics. “Primary qualities” are contrasted with “secondary qualities.”
prime mover – Aristotle’s understanding of a “first cause” or the god that makes motion possible. It’s something that can cause things to happen without being caused to do so. Many people think God is a prime mover that created the universe, and the universe couldn’t exist unless it was created in this way. However, Aristotle actually thought that the universe always existed.
primitive concept – A concept that can’t be properly defined or understood in terms of other concepts. For example, G. E. Moore argued that the concept of “goodness” is primitive and that it could not be defined in terms of other concepts. Primitive concepts might need to be understood in terms of examples. “Primitive concepts” can be contrasted with “definable concepts.”
primitives – The building blocks of thought or reality. Ontological primitives are the building blocks of reality, such as subatomic particles. Concepts that must be presupposed for a theory or logical system are primitives. For example, the law of non-contradiction could be a primitive.
principal attributes – The essence or defining characteristics of substances. According to René Descartes, extension is the principal attribute for matter and thought is the principal attribute for mind.
principle – (1) A law, rule, or brute fact. For example, the law of non-contradiction is a plausible example. (2) A guiding rule or value. What people call “moral principles.” For example, the principle that states that we generally shouldn’t harm others.
principle of charity – See “charity.”
principle of bivalence – The logical principle that states that all propositions have exactly one truth-value, and they are either true or false. The principle of bivalence is similar to “the law of excluded middle,” but that law does not guarantee that all propositions are true or false. Some philosophers reject the principle of bivalence and argue that there can be more than two truth-values. Perhaps some propositions could be indeterminate or have degrees of truth.
principle of parsimony – The principle that states that simplicity is a feature that can count in favor of a theory. We should prefer a simpler theory if it is otherwise just as good at explaining the relevant phenomena as another theory. Copernicus’s theory that the Sun is the center of the solar system was simpler than the alternative and could make predictions just as well, so we had a reason to prefer his theory to the best alternative that was available at the time. See “Occam’s razor.”
principle of sufficient reason – The principle that states that everything that exists has a sufficient explanation as to why it exists rather than something else. For example, we might think that everything that happens has a causal explanation as to why it happens rather than something else. Many philosophers reject this principle and think it’s possible that there are “brute facts” that have no explanation.
principle of utility – The moral principle that states that we ought to seek the greatest good for the greatest number. The principle of utility is generally understood as equating “goodness” with “pleasure” and “harm” with “pain.” Therefore, the greatest good for the greatest number is meant to be the most pleasure for the greatest number, and the least pain for the greatest number. We can try to determine how much pleasure and pain is caused by various choices we can make to determine which choices are best—an action will be right insofar as it causes more happiness than the alternatives. For example, killing people would be taken to be generally wrong because it generally causes people more suffering than alternative courses of action. See “utilitarianism” for more information.
probabilism – The view that the degrees of confidence we have for various beliefs ought to be based on probability calculus. Probabilism states that we often lack certainty, but we should still try to believe whatever is likely true. For example, we ought to be confident that we won’t roll a six when we roll a six-sided-die, but we should not be confident that we will roll a two. See “psychological certainty” and “probability calculus” for more information.
probability calculus – Mathematical rules that determine the odds of various propositions being true. For example, the probability of a tautology being true is 100%, and the probability of a contradiction being true is 0%. Also, the odds of two uncertain propositions both being true can never be higher than the odds of just one of them being true.
probability distribution – A list of possible outcomes and the odds of each outcome of occurring, which is often related to what decision we should make. For example, the odds of a day being sunny could be 40% and the odds of that day being rainy could be 60%.
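As a rough illustration of the last three entries, the sketch below (Python, using a fair six-sided die as an assumed example) checks that a tautology gets probability 1, a contradiction gets probability 0, and a conjunction is never more probable than either conjunct:

```python
# A minimal probability-calculus sketch using a fair six-sided die (illustrative).
from fractions import Fraction

outcomes = range(1, 7)

def prob(event):
    """Probability of an event, represented as a set of outcomes."""
    return Fraction(sum(1 for o in outcomes if o in event), 6)

tautology     = set(outcomes)   # true on every outcome
contradiction = set()           # true on no outcome
not_six       = {1, 2, 3, 4, 5}
even          = {2, 4, 6}

print(prob(tautology))                   # 1
print(prob(contradiction))               # 0
print(prob(not_six))                     # 5/6 -- we ought to be confident in this
print(prob(even & not_six), prob(even))  # 1/3 vs 1/2 -- the conjunction is no
                                         # more probable than either conjunct
```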
problem of evil – The question about how a divinity could exist if evil exists. It is sometimes thought that a divinity exists that’s all-powerful, all-knowing, and all-good, but that would imply that the divinity would make sure that less evil exists than actually exists. For example, it is plausible that such a divinity would not have made lead so convenient to use and yet poisonous in a way that took us thousands of years to discover.
problem of induction – The fact that induction appears to be difficult or impossible to sufficiently justify with argumentation. It is argued that induction can’t be sufficiently justified by the fact that it was reliable in the past because that would require a circular argument (the assumption that induction is reliable); and it also seems implausible to think that induction can be justified as being self-evident. See “induction” for more information.
process metaphysics – A synonym for “process theology.”
process theism – The belief that God evolves as part of a process into a more perfect being.
process theology – The philosophical systematic attempt to understand or speculate about “process theism.”
projectionism – (1) The view that moral judgments are based on our emotions, but we experience those emotions as being objective facts or properties of external reality. For example, to see a small child be tortured could be observed as an immoral act, but the projectionist would argue that the observation actually just reflects the fact that the observer has a negative emotional reaction directed toward the action. (2) The view that we take something to be an objective fact or property of external reality, but it is not actually an objective fact or property. Instead, we are projecting our own attitudes or emotions onto things. For example, some people believe that we project colors onto the world (that aren’t really there) when we talk about red cars and green grass.
proof – (1) An argument that supports a belief. It’s often taken to refer to a sufficient reason to agree with a belief. See “positive argument” for more information. (2) The evidence for a belief. It’s often taken to refer to sufficient evidence for a belief. See “justification” for more information.
proof by absurdity – A synonym for “indirect proof.”
property – An attribute, element, or aspect of something. Examples of properties include green, soft, valid, and good.
property dualism – The view that things have up to two different kinds of basic properties: physical and mental. For example, a single event could be described in terms of physical properties (such as a certain brain state) and psychological properties (such as experiencing pain). Property dualists are not substance dualists because they don’t think the mind and body are made of different kinds of stuff.
proposition – A truth claim or the conceptual meaning behind an assertion. The statement, “Socrates is a man and he is mortal” contains two propositions: (a) Socrates is a man and (b) Socrates is mortal. Propositions are not statements because there can be multiple statements that refer to the same proposition. For example, “Socrates is a man and he is mortal” and “Socrates is mortal and he is a man” are two different statements that refer to the same proposition.
proposition type – Different logical forms categorical propositions can take. There are four proposition types: A, I, O, and E. Each of these refers to a different logical form: (A) all a are b, (I) some a are b, (O) some a are not b, and (E) no a are b.
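For reference, a standard way to formalize the four proposition types in predicate-logic notation (a common convention, not something the definition above depends on; “S” stands for the subject term and “P” for the predicate term) is:

(A) all a are b: ∀x(Sx → Px)
(I) some a are b: ∃x(Sx ∧ Px)
(O) some a are not b: ∃x(Sx ∧ ¬Px)
(E) no a are b: ∀x(Sx → ¬Px)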
propositional calculus – A synonym for “propositional logic.”
propositional connective – A synonym for “logical connective.”
propositional logic – A formal logical system that reduces arguments to propositions and logical connectives. For example, the statement “All dogs are mammals or all dogs are reptiles” is translated as “A or B.” (The logical connective is “or.”) Propositional logic lacks “quantification.” “Propositional logic” can be contrasted with “predicate logic.”
propositional letter – Symbols used to stand for specific propositions in propositional logic. Upper-case letters are often used. For example, “A” can stand for “George Washington was the first President of the United States.” “Propositional letters” can be contrasted with “propositional variables.” See “propositional logic” for more information.
propositional variable – Symbols used in propositional logic to stand for any possible proposition rather than a specific one. Lower-case letters or Greek letters tend to be used. For example, “a” could stand for any possible proposition, whereas a capital letter such as “A” tends to stand for a specific proposition, such as “Socrates is mortal.” “Propositional variables” can be contrasted with “predicate variables” and “propositional letters.” See “propositional logic” for more information.
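Here is a small Python sketch (the scheme of abbreviation is an assumed example) that evaluates the compound statement “A or B” from the “propositional logic” entry under every assignment of truth-values; that is, it builds a truth table:

```python
# A minimal propositional-logic sketch: whole sentences become single letters,
# and we evaluate the compound "A or B" under every truth-value assignment.
from itertools import product

# Scheme of abbreviation (illustrative):
#   A: "All dogs are mammals"
#   B: "All dogs are reptiles"
for A, B in product((True, False), repeat=2):
    print(A, B, A or B)
# "A or B" is false only on the row where both A and B are false.
```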
prove – To give proof for a conclusion. It is often used to refer to sufficient evidence to believe something, but sometimes it only refers to the requirement to support our beliefs within a debate. See “proof” for more information.
providence – See “divine providence.”
psychological – The domain that includes thoughts, feelings, experience, the first-person perspective, semantics, intentionality, and qualia. It is often said that psychological activity occurs in a mind. “Psychological” reality is often contrasted with “physical reality.”
psychological certainty – The feeling of some degree of confidence about a belief. To be psychologically certain that something is true is to feel highly confident that it’s true. For example, a person might feel absolutely confident that trees really exist and later find out that our entire world takes place within a dream.
psychological egoism – The view that people can only act in their perceived self-interest. For example, a person could not give money to the poor unless she believed it would benefit her somehow. (Perhaps it could improve her reputation.)
public reason – John Rawls’s concept of reason as it should exist to justify laws and public policy. Ideally, everyone should be able to rationally accept the laws and policies no matter what their worldview is, so laws and public policies should be justified in secular ways that don’t require acceptance of controversial beliefs. Public reason does not require controversial religious beliefs or a comprehensive worldview precisely so it can help assure us that every reasonable person would find the laws and public policies to be justified—even if they have differing worldviews.
pure intuition – An a priori cognition. According to Immanuel Kant, a pure intuition is the way we know about “non-discursive concepts,” such as space and time.
pyrrhonism – The philosophy of an ancient group of philosophers known as “the skeptics.” They didn’t know if we could know anything and thought that we should suspend judgment as a result—to neither believe nor disbelieve anything. They argued that we should even suspend judgment concerning the belief that “we can’t know anything.”
qualia – The “what it’s like” or qualitative description of subjective experience. For example, the taste of chocolate, feel of pain, the appearance of green, and so on.
QED – See “quod erat demonstrandum.”
quantificational logic – A synonym for “predicate logic.”
quantifier – (1) A symbol used in logic to designate a quantity or modality. Quantifiers are used to make it clear if a proposition concerns all of something, not all of something, something that actually exists, or something that doesn’t actually exist. The two main quantifiers in logic are “∀” that stands for “all” and “∃” that stands for “exists.” For example, “∃x(Fx and Gx)” means that there is an x that is an F and a G. For example, there is something that’s a dog and a mammal. See “modal quantifiers” and “deontic quantifiers” for more information. (2) A word used to designate quantity, such as “all” and “some.” For example, “all people are rational animals” uses the “all” quantifier.
quasi-realism – (1) A view that denies the existence of moral facts but tries to make sense of our moral language and behavior. For example, some quasi-realists are emotivists, but they argue that moral judgments can in a sense be true or false: quasi-realist emotivists could agree that saying, “Stealing is wrong” does not merely express a dislike of stealing insofar as people use it to make assertions. Quasi-realism is meant to be more sensitive to our common sense and intuitions than the alternatives. Quasi-realism can be compatible with multiple anti-realist meta-ethical theories. Also see “fast track quasi-realism” and “slow track quasi-realism.” (2) An anti-realist position (that rejects that some type of facts exist) that attempts to be more sensitive to our common sense beliefs and intuitions by explaining why our language about something seems factual, but is not actually factual.
quater – A chemical that is functionally equivalent to water that quenches thirst, boils at the same temperature, and so on; but it’s not H2O. Quater was part of a thought experiment and was used to argue that it’s intuitive to think water is more than a chemical with certain functions—the chemical composition of water is part of what it is. See “Twin Earth” for more information.
questionable analogy – See “weak analogy.”
quod erat demonstrandum – Latin for “that which was to be demonstrated.” It roughly means “therefore” and is used to refer to a conclusion of a proof. It’s often abbreviated as “QED.”
quoting out of context – When someone uses a quotation to support a belief when the quote put into the proper context wouldn’t support that belief after all. For example, Elizabeth could say, “We know that UFOs exist, but we don’t know that aliens visit the Earth from other planets” and Tony could quote her as saying, “We know UFOs exist” to give others the impression that Elizabeth believes that aliens visit the Earth. See “one-sidedness” for more information.
randomness – See “epistemic randomness” or “ontological randomness.”
rational persuasion – An attempt to persuade others that something is true through well-supported valid argumentation.
rationalism – (1) The view that there are non-tautological forms of knowledge or justification other than observational evidence. For example, the laws of logic are a plausible example of something we can justify without observational evidence that’s not tautological. (2) The view that we should try to reason well and form beliefs based on the best evidence available.
rationality – (1) At minimum, the ability to draw logical conclusions from valid arguments and avoid contradictory beliefs. Rationality could also refer to using effective methods to accomplish our goals or even to effective reasoning in general. (2) The field concerned with what we ought to believe. We ought to believe conclusions that are well justified and disbelieve conclusions that are well refuted. For example, the belief that at least five people exist is well justified, so we ought to believe it (and it would be rational to believe it). We could say that we ought not disbelieve that at least five people exist (and it would be irrational to disbelieve it). (3) “Rationality” is often equated with “reasonableness.”
rationalization – Nonrational arguments given to believe something without a genuine concern for what’s true. Rationalizations are meant to superficially appear to be genuinely good arguments, but they fail on close examination. For example, a person who believes that the Earth is flat, and is told that we have pictures of the Earth from space and we can see that it’s round could rationalize that the pictures are probably fake. A great deal of philosophical writing could be closer to rationalization than to genuinely good argumentation, but rationalization plagues everyday thought and can be difficult to avoid. See “moral rationalization” and “post-hoc justification” for more information.
realism – The view that some domain is factual (part of the real world), and not merely a social construction or convention. For example, “moral realism” is the view that there is at least one moral fact that is not determined by something like a social contract. “Realism” is often contrasted with “anti-realism.”
reasonable pluralism – Disagreement among people who have reasonable yet incompatible beliefs. A plausible example is a person who believes that intelligent life exists on another planet and another person who doesn’t think life exists on another planet. John Rawls coined this phrase because he believed that society should fully embrace cultural diversity involving various worldviews and religious beliefs insofar as such religious beliefs and worldviews can be reasonably believed—the evidence we have for many of our beliefs is inconclusive, but it can be reasonable to have the beliefs until they are falsified (or some other standards of reason are violated).
reasonableness – To hold beliefs that are sufficiently justified and reject beliefs that are insufficiently justified. The ability to reason well and behave in accordance to reasonable beliefs. “Reasonableness” is often equated with “rationality.”
reasoning – The thought process that leads to an inference. For example, a person who knows that all dogs are mammals and that Lassie is a dog can come to the realization that Lassie is a mammal. Reasoning that’s made explicit along with the conclusion is an “argument.” One potential difference between reasoning and arguments is that reasoning does not necessarily include the inference, but arguments must include a conclusion. Everything we say about reasoning or arguments tends to correspond to both. For example, fallacious arguments correspond to fallacious reasoning of the same type, logically valid arguments have corresponding logically valid reasoning, etc. Moreover, inductive and deductive types of arguments correspond to inductive and deductive types of reasoning.
rebuttal – See “objection.”
red herring – A fallacious kind of argument that is meant to distract people from arguments and questions made by the opposing side. These kinds of arguments are meant to derail the conversation or change the subject. For example, a politician might be asked if we should end our wars, and she might reply, “What’s really important right now is that we improve the economy and create jobs. We should do that by lowering taxes.”
redistribution of wealth – To take wealth away from some people and give it to others. It is sometimes thought that it is morally justified to tax the wealthy to provide certain services for the poor. For example, many people insist that Robin Hood is a hero because he risks his well-being to take from the rich to give to the poor (who would otherwise suffer from an unjust system).
redistributionism – The view that we should have “redistribution of wealth” (perhaps to take from the wealthy to help the poor).
reducibility – To be able to express everything from one logical system in another. For example, a propositional logical system with the logical connectives for “and” and “not” can state everything said by other logical connectives. Therefore, a system of propositional logic that has logical connectives for “not,” “and,” “and/or,” “implies,” and “if and only if” can be reduced to a system that only has connectives for “and” and “not.” See “expressibility” and “logical connectives” for more information.
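The reduction mentioned above can be checked directly. The sketch below (Python, illustrative) verifies that “a and/or b” can be expressed using only “not” and “and” as not(not-a and not-b), for every assignment of truth-values:

```python
# Checking that "or" is expressible with only "and" and "not" (De Morgan's law).
from itertools import product

for a, b in product((True, False), repeat=2):
    assert (a or b) == (not ((not a) and (not b)))

print("'a or b' matches 'not(not-a and not-b)' on every assignment")
```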
reductio ad absurdum – Latin for “reduction to the absurd.” Also known as the “argument from absurdity.” It’s a form of argument that justifies why an argument or claim should be rejected insofar as it would have absurd consequences. For example, consider the following argument—“Stars exist; the Sun is a star; therefore stars don’t exist.” This argument leads to an absurd consequence in the form of a logical contradiction (i.e. that stars both exist and don’t exist).
reduction – To conclude or speculate that the parts of something are identical to the whole. For example, water is H2O, and diamonds are carbon molecules with a certain configuration. We often say that something is reducible to something else if it’s “nothing but” that other thing. For example, water is nothing but H2O.
reductionism – (1) Relating to identity theories or identity relations. For example, scientists think that water is identical with H2O. (2) The view that something is nothing but the sum of its parts. Some philosophers think that particles and energy (the reality described by physics) are the only real parts of the universe, and that the universe is actually nothing but physical reality as described by physicists.
redundancy – (1) When something is redundant or not needed. For example, secular ethics attempts to explain right and wrong without appealing to controversial religious entities—those entities would then be redundant to the explanation. This could be seen as an epistemic virtue of practical importance. It could be said that we should often hedge our bets by not having moral beliefs that depend on controversial entities that might not exist. Sometimes redundancy might help give us a reason to reject entities. (2) To have backup plans for our beliefs and arguments. For example, a conclusion could be redundant if we have several reasons to think it’s true. To refute one argument in favor of the conclusion would then be insufficient to give us a reason to stop believing the conclusion is true.
reference – (1) The objects that terms refer to. The terms “morning star” and “evening star” have different meanings—the “morning star” is the last star you can see in the morning and the “evening star” is the first star you can see at night. However, they both have the same reference (i.e. Venus). Gottlob Frege contrasted “reference” with “sense.” (2) A source of information used for citations. (3) Someone who can vouch for your qualifications.
reference fixing – An initial moment when someone names a thing (or type of thing). Reference fixing could involve a person pointing to an object that others perceive or by describing an object in order to make it clear what exactly is being given a name. For example, germs could have been described as whatever microscopic living organisms were causing people to get sick in certain ways. Reference fixing is often part of a “causal theory of reference.”
reference borrowing – Continuing a historical tradition of using a term to refer to something. Merely using a term for the same thing as someone else is to engage in reference borrowing. Reference borrowing is often part of a “causal theory of reference.”
reflective equilibrium – An ideal state when our beliefs and intuitions are consistent after deliberation and debate. Reflective equilibrium requires that we form beliefs based on our experiences and intuitions, and that we reject certain intuitive beliefs when they are incompatible with other more important intuitive beliefs or observations until we reach perfect coherence—when our beliefs and observations are all logically compatible and no longer contain contradictions. For example, some utilitarians could come to believe that we should sometimes kill innocent people to use their organs to save other lives because such a counterintuitive position is plausibly implied by utilitarian principles, and they judge those principles to be better supported than the conflicting intuition. A related concept to “reflective equilibrium” is “coherentism.”
refutation – An argument that opposes another argument or belief. It often refers to arguments that provide us with sufficient reason to reject a belief or argument. See “objection.”
refute – To disprove a belief or oppose an argument using another argument. It often refers to giving a sufficient reason to reject a belief or argument. For example, we can refute the belief that all crows are black by finding an albino crow.
regress – A solution that has the same problem it’s supposed to solve. For example, we might assume that everything needs to be created, and conclude that God created the universe; but our assumption will lead us to think that something needed to create God as well. Another example is to say that we only know something when we can justify it using an argument, but the argument will require that the premises of the argument also be known, and therefore we will need arguments for those premises as well. That can lead to an “infinite regress.”
reification – (1) Inappropriately treating something as an object, such as treating human beings as means to an end. For example, paying factory workers as little as possible and having them work in unsafe conditions just to make more profit. (2) To inappropriately think of abstract entities or concepts as concretely existing entities. For example, “courage” should not be thought of as a person.
relation letters – Predicate letters used in predicate logic that involve two or more things that are predicated. Capital letters are used to represent predicates, and “F, G, [and] H” are most commonly used. For example, “F” can stand for “attacks.” In that case “Fxy” is a statement that means “x attacks y.” See “predicate logic,” “predicate letters,” “predicate constants,” and “predicate variables” for more information.
relational predicate logic – A system of predicate logic that can express both monadic and polyadic predicates. See “monadic predicate” and “polyadic predicate” for more information.
relations of ideas – Statements that can be justified by (or true in virtue of) the definitions of words. For example, “all bachelors are unmarried” is a relation of ideas, and we can justify the fact that it’s true by appealing to the definitions of words. Relations of ideas are said to be tautological and non-substantive. David Hume thought the only statements that could be justified are “relations of ideas” and “matters of fact.”
relativism – See “epistemic relativism” or “cultural relativism.”
relevance – To be appropriately related. What is said in a philosophical discussion or debate should be relevant in that it should appropriately relate to the primary topic of conversation. Certain arguments and certain beliefs are related to the topic of conversation, and are worth talking about in order to understand the topic or to know what we should believe regarding the topic. Extreme forms of irrelevance are off-topic. Additionally, objections must be properly related to the arguments and beliefs they oppose, and giving objections that are somewhat irrelevant could change the subject or be a fallacious “red herring.”
relevance logic – A logical system that requires more than the simple truth table for conditional statements. According to classical logic, “If all dogs are mammals, then gold is a metal” is true. However, this doesn’t seem to be true using ordinary language and relevance logic attempts to explain why. According to relevance logic, both parts of the conditional must be related in the right way.
reliabilism – The view that better justifications for beliefs are more reliable than other ones, and that justified beliefs are justified because they were formed by a reliable belief-forming process. Reliabilism generally stresses that justified beliefs are more likely true than the alternatives precisely because justifications assure us that our beliefs are more likely true than they would be otherwise.
religious humanism – The view that human interests are of primary importance rather than those of gods or supernatural beings, but while still endorsing a religion. Religious humanism states that the main importance of religion is in serving humans rather than in serving supernatural beings.
res cogitans – Latin for “thinking thing.”
res extensa – Latin for “extended thing.”
responsibility – (1) Being in control of one’s moral decisions. A person who is morally responsible can be legitimately praised or blamed for her moral actions. Moral responsibility requires a certain level of sanity, competence, and perhaps free will. It is plausible that small children and nonhuman animals lack responsibility because they might lack the competence required. Additionally, there are excuses that can temporarily invalidate a person’s moral responsibility, such as when people are harmed by accident or when a person is coerced into harming others. (2) To be morally required to act a certain way. For example, parents are responsible for caring for their children.
retribution – The justification for punishment that considers a criminal to deserve to be harmed. For example, we could say that a murderer deserves to die in order to justify using the death penalty against the murderer. Retribution is sometimes criticized as a form of vengeance.
retributive justice – A principle of justice that states that punishment as some form of harm is the appropriate response to crime insofar as criminals deserve to be harmed. Retributive justice often states that the punishment should be proportionate to (or the same as) the crime itself. For example, murder would be appropriately punished with the death penalty.
revealed theology – The systematic study of gods using information attained by revelation—direct communication with one or more gods or supernatural beings. “Revealed theology” is often contrasted with “natural theology.”
revisionary – Definitions or concepts that depart from the usual or intuitive associations we have with certain terms. For example, people who say that knowledge requires beliefs we can justify well using argumentation might contradict the ordinary understanding of knowledge in that we seem to know that “1+1=2” and yet we might not know how to justify it well using argumentation.
rhetoric – (1) Persuasion using ordinary language. In this sense both rational argumentation and fallacious argumentation could be considered to be forms of rhetoric. Rhetoric is the specialization of public speaking, persuasion used by lawyers, and oratory used by politicians. (2) Argumentation used for the purpose of persuasion. This type of rhetoric can involve technical terminology used by specialists. This type of rhetoric is compatible with both public speaking and essays written by philosophers. (3) Persuasion that uses nonrational forms of persuasion through language. Fallacious arguments, propaganda, and various forms of manipulation could be considered to be rhetoric in this sense. This type of rhetoric is thought to be a source of power for sophists, pseudoscience advocates, snake oil salesmen, and cult leaders.
rhetorical arguments – Arguments used for persuasion. Rhetorical arguments are thought to be very important in politics and in the court room. See “rhetoric” for more information.
right – (1) Correct or appropriate as opposed to “wrong.” (2) “Morally right” as opposed to “morally wrong.” (3) To be on the other end of an obligation. For example, to have the right to life means that other people are obligated not to kill you without an overriding reason to do so.
rigid designator – Something that refers to the same thing in all possible worlds and never refers to anything else. For example, some philosophers argue that water refers to H2O in every possible world.
Ross’s intuitionism – William David Ross’s ethical theory that requires us to accept meta-ethical intuitionism. He argues that there are intrinsic values and prima facie duties, but such values and duties can conflict. Additionally, we can’t rationally determine what we should do using moral theories in all circumstances precisely because values and duties can conflict.
rule utilitarianism – A form of consequentialism that states that we should rely on simplified rules in order to maximize goodness (positive value) and minimize harm (negative value). Rule utilitarians often equate “goodness” with “happiness” and “harm” with “suffering.” Rule utilitarianism is sometimes inspired by skepticism regarding our ability to know how to maximize goodness given our situation. Many people who are willing to harm others “for a greater good” don’t get the expected results they hoped for. “Rule utilitarianism” is contrasted with “act utilitarianism.”
rules of equivalence – A synonym for “rules of replacement.”
rules of implication – Rules that state when we can have a valid inference. The rules state that certain argument forms are valid, such as “modus ponens,” which states that the propositions “if a, then b” and “a” can be used to validly infer “b.” (“a” and “b” stand for any two propositions.) See “valid,” “rules of inference,” and “rules of equivalence” for more information.
rules of inference – Rules that state when we can have a valid inference. The rules state that certain argument forms are valid, such as “modus tollens,” which states that the propositions “if a, then b” and “not-b” can be used to validly infer “not-a” (“a” and “b” stand for any two propositions.) The rules also state when two statements are logically equivalent, such as “a” and “not-not-a.” See “valid,” “rules of implication,” and “rules of equivalence” for more information.
rules of replacement – Rules that tell us when two propositions mean the same thing. We can replace a proposition in an argument with any equivalent proposition. For example, we know that “all dogs are animals and all cats are animals” means the same thing as “all cats are animals and all dogs are animals” because of the rule known as commutation—“a and b” and “b and a” both mean the same thing. (“a” and “b” stand for any two propositions.)
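As a rough check of the last few entries, the sketch below (Python, illustrative) confirms that modus ponens and modus tollens never lead from true premises to a false conclusion, and that double negation and commutation are genuine equivalences:

```python
# Verifying some rules of inference and rules of replacement by brute force.
from itertools import product

def implies(a, b):
    return (not a) or b

for a, b in product((True, False), repeat=2):
    # Modus ponens: from "if a, then b" and "a", infer "b".
    if implies(a, b) and a:
        assert b
    # Modus tollens: from "if a, then b" and "not-b", infer "not-a".
    if implies(a, b) and not b:
        assert not a
    # Rules of replacement: double negation and commutation.
    assert a == (not (not a))
    assert (a and b) == (b and a)

print("The rules hold under every truth-value assignment")
```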
salva veritate – Latin for “with unharmed truth.” It refers to potential changes to statements that will not alter the truth of the statement. For example, we could generally talk about “trilaterals” instead of “triangles” without changing the truth of our claims about them because they both refer to the same kind of mathematical object.
satisfiability – The ability to interpret a set of statements of formal logic in a way that would make them true. Statements that can all be simultaneously interpreted as true in this way are satisfiable. Consider the statement “a → b.” In this case “a” and “b” stand for any propositions and “→” stands for “implies.” We can interpret “a” as being “all bats are mammals” and “b” as being “all bats are animals.” In that case we can interpret the whole statement as saying “if all bats are mammals, then all bats are animals,” which is a true statement. Therefore, “a → b” is satisfiable. See “formal logic” and “interpretation” for more information.
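To illustrate the idea (with the simplification that an interpretation is reduced to an assignment of truth-values), the sketch below searches for an assignment that makes “a → b” true; because at least one exists, the statement is satisfiable:

```python
# A brute-force satisfiability sketch for "a -> b" (illustrative simplification).
from itertools import product

def implies(a, b):
    return (not a) or b

satisfying = [(a, b) for a, b in product((True, False), repeat=2) if implies(a, b)]
print(satisfying)   # non-empty, so "a -> b" is satisfiable
```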
saving the phenomena – Explanations that are consistent with our experiences or account for our experiences. For example, someone who claims that beliefs and desires don’t exist should tell us why they seem to exist as part of our experiences. Explanations that fail to save the phenomena are likely to be counterintuitive and inconsistent with our experiences.
schema – A synonym for “scheme of abbreviation.”
scheme of abbreviation – A guide used to explain what various symbols refer to for a set of symbolic logical propositions, which can be used to translate a proposition of symbolic logic into natural language. For example, consider the logical proposition, “A ∨ B.” A scheme of abbreviation for this proposition is “A: The President of the USA is a man; B: The President of the USA is a woman.” “∨” is used to mean “and/or.” We can then use this scheme of abbreviation to state the following proposition in natural language—“The President of the USA is a man or a woman.”
scientific anti-realism – The view that we shouldn’t believe in unobservable scientific entities, even if they are part of an important theory. For example, we do not observe electrons, but we know what effect they seem to have on things when we accept certain theories and models. Scientific anti-realists would claim that we don’t know if electrons really exist or not. One type of scientific anti-realism is “instrumentalism.”
scientific method – The way science makes discoveries, which involves hypotheses, observations, experiments, and mathematical models. It is often thought to follow the “hypothetico-deductive method.”
scientific realism – The view that unobservable scientific entities are likely real as long as their existence is properly supported by the effects they have. For example, germs were originally unobservable, but scientists hypothesized that various diseases were caused by them. “Scientific realism” can be contrasted with “scientific anti-realism.”
scientism – The view that science is the best source of knowledge for everything. For example, anyone who agrees with scientism would likely think that morality is either nonfactual or that science is the best way for us to attain moral knowledge. The word ‘scientism’ is generally used as a pejorative for the view that science is the best way to attain knowledge even in domains where better non-scientific methods are available. The non-pejorative term that is often used in philosophy rather than ‘scientism’ is ‘epistemic naturalism.’
secondary qualities – Certain qualities an object has, such as color, smell, and taste. John Locke argued that secondary qualities only exist because we perceive them and they are not actually part of the object itself. Many people think this implies that secondary qualities are illusions, but John McDowell argues that they exist as part of our experience and are not illusory because illusions are deceptive, but there is nothing deceptive about secondary qualities. According to McDowell, experiences of secondary qualities don’t trick us into having false beliefs about reality. “Secondary qualities” can be contrasted with “primary qualities.”
secular – Without any religious requirement or assumptions. For example, the argument that “God dislikes homosexuality, so homosexuality is immoral” would require us to accept religious assumptions. Secular arguments are meant to be persuasive to people of every religion and to those who lack a religion.
secular humanism – A view that reason and ethics are of great importance, and that we should reject all supernatural beliefs. Secular humanism also states that human beings are of supreme moral importance, so ethical systems should primarily concern human welfare. “Secular humanism” can be contrasted with “religious humanism.”
secularism – (1) The separation of church and state. To remove religious domination or requirements from politics. (2) To separate religious requirements or domination from any institution or practice.
secundum quid – Latin for “according to the particular case.” It’s generally used to refer to the “hasty generalization” fallacy.
selective evidence – See “one-sidedness.”
selective perception – A cognitive bias defined by people’s tendency to interpret their experiences in a way consistent with (and perhaps as confirming) their beliefs and expectations. For example, liberals who experience conservatives giving bad arguments could take that experience as confirmation that conservatives don’t argue well in general. This bias is related to “theory-laden observation” and the “confirmation bias.”
self-contradiction – A statement is a self-contradiction when it can’t be true because of the logical form. For example, “Socrates is a man and he is not a man.” This statement can’t be true because it is impossible to be something and not that thing. It has the logical form, “a and not-a.” (“a” is any proposition.)
self-evidence – A form of justification based on noninferential reasoning or intuition. It is often thought that self-evidence is based on the meaning of concepts. Mathematical beliefs, such as “1+1=2,” are plausible examples. If something is self-evident, then justification has come to an end. Self-evidence can help assure us that we can justify beliefs without leading to an infinite regress or vicious circularity. According to Robert Audi, understanding that a proposition is true because it’s self-evident can require background knowledge and maturity, it could take time to realize that a proposition is self-evident, and beliefs that are justified through self-evidence could be fallible. This opposes the common understanding that self-evident propositions can be known by anyone, are known immediately, and are infallible.
self-defeating – (1) The property of a belief (or theory) that gives us a reason to reject itself (the belief or theory). For example, the verification principle states that statements are meaningless unless they can be verified, but it seems impossible to verify the verification principle itself. That would suggest that the verification principle is self-defeating because it’s meaningless. (2) The property of an action that undermines itself. A self-defeating prophecy makes it impossible for itself to come true. For example, a person who hears a prophecy that she will die while driving a car could decide never to drive a car again (and will therefore avoid dying that way). In that case the prophecy failed to predict the future after all.
self-serving bias – The bias defined by people’s tendency to attribute their successes to positive characteristics they possess, and their tendency to attribute their failures to external factors that are outside of their control. For example, a person could think she does well on a test because of knowing the material, but that same person could say she failed another test because the test was too hard. This could be related to the “illusory superiority” bias.
semantic completeness – A logical system is semantically complete if and only if every logical truth (tautology) that can be expressed in the system can also be proved within it. For example, propositional logic is semantically complete because every tautology of propositional logic can be derived using its axioms and rules of inference.
semantic externalism – The view that the meaning of terms can be partially based on things external to our minds. For example, a semantic externalist would likely agree that no chemicals other than H2O could be water, even if they are functionally equivalent and cause the same experiences—quench thirst, boils when hot, etc.
semantic internalism – The view that the meaning of terms can be entirely based on things in our minds. For example, a semantic internalist would likely agree that we could find out that some chemical other than H2O is also water if it is functionally equivalent and causes the same experiences—it quenches thirst, boils when hot, etc.
semantics – The meaning of words or propositions. Some people are said to be “debating semantics rather than substance” when they argue about what terms mean rather than what is true or false regardless of how we define terms. “Semantics” is often contrasted with “syntax.”
sensation – (1) An experience caused by one or more of the five senses (sight, sound, touch, taste, and smell). (2) To experience qualia or a feeling.
sense – (1) What Gottlob Frege called “sinn” to refer to the meaning or description of a word. For example, “the morning star” and “the evening star” both have different senses, but refer to the same thing (i.e. the reference is the planet Venus). The sense of “the morning star” is “the last star we can see in the morning,” and the sense of “the evening star” is “the first star we can see in the evening.” Gottlob Frege contrasted “sense” with “reference.” (2) The ability to understand. For example, we might talk about someone’s good sense. (3) To perceive. For example, we might say that we sense people in the room when we can see them. (4) An ability of perception; such as sight, sound, touch, taste, and smell. These are said to be “the five senses.”
sense data – The experiences caused by sense perception (the five senses). Sense data can be understood without interpretation and exist exactly as they are experienced. The visual sense data of a green apple includes a visual experience consisting of a blotch of green.
sense perception – See “perception.”
sensible intuition – According to Immanuel Kant, sensible intuition refers to the concepts required for experience, such as space and time. Without those concepts it would be impossible to experience the phenomenal world. For example, visual experience would just involve blotches of color that would be impossible to interpret as being information about an external world.
sentential – The property of being related to sentences or propositions. For example, sentential logic is a synonym for “propositional logic.”
sentential logic – A synonym for “propositional logic.”
sentimentalism – See “moral sentimentalism.”
set – (1) A group of things that all share some characteristic. For example, the set of cats includes every single cat that exists. (2) In Egyptian mythology, Set is the god of deserts, storms, and foreigners. Set has the head of an animal similar to a jackal, and he is known as “Sēth” in Ancient Greek.
Sheffer stroke – A symbol used in propositional logic to mean “not both” or “it’s not the case that a and b are both true.” (“a” and “b” are any two propositions.) The symbol used is generally “|” or “↑.” For example, “Dogs are mammals ↑ dogs are lizards” means that “it is not the case that dogs are both mammals and lizards.”
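As an illustrative sketch, the Sheffer stroke can be written as a small Python function, and (as is well known) other connectives such as “not” and “and” can be defined entirely in terms of it:

```python
# A small sketch of the Sheffer stroke (NAND): true unless both inputs are true.
# It is functionally complete, so other connectives can be built from it.
def sheffer(a, b):
    return not (a and b)

def not_(a):
    return sheffer(a, a)                           # a | a  is equivalent to  not-a

def and_(a, b):
    return sheffer(sheffer(a, b), sheffer(a, b))   # (a | b) | (a | b)  is equivalent to  a and b

for a in (True, False):
    for b in (True, False):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
```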
Ship of Theseus – A ship used as part of a thought experiment. Imagine a ship is slowly restored and all the parts are eventually replaced. This encourages us to ask the question—Is it the same ship?
signifier – A sign that conveys meaning. For example the word ‘red’ is a signifier for a color. Signifiers are contrasted with the “signified.”
signified – The entity, state of affairs, meaning, or concepts referred to by a sign. For example, the word ‘Socrates’ refers to an actual person (the signified). The “signified” is often contrasted with “signifiers.”
simpliciter – Latin for “simply” or “naturally.” It’s used to describe when something is considered without qualification. For example, torturing a helpless nonhuman animal is morally wrong simpliciter.
simplicity – (1) See “Occam’s razor.” (2) To lack complexity.
simplification – A rule of inference that states that we can use “a and b” as premises to validly conclude “a.” (“a” and “b” stand for any two propositions.) For example, “Socrates is a man and Socrates is mortal; therefore, Socrates is a man.”
sinn – German for “sense.”
skepticism – (1) Disbelief. Skepticism of morality could be the belief that morality doesn’t really exist. (2) A state of uncertainty. A skeptic about gods might not believe or disbelieve in gods. (3) An attitude defined by a healthy level of doubt. (4) See “Pyrrhonism.”
slave morality – A type of morality that is life-denying and views the world primarily in terms of evil. Slave morality tends to define “goodness” in terms of not being evil (i.e. not harming others).
slippery slope – (1) An argument that requires us to believe that incremental causal changes will likely happen given that we make certain decisions. For example, having violence on television might desensitize people to violence and lead to even greater violence on television in the future by an ever-increasing demand for more thrilling forms of entertainment. (2) An informal fallacy committed by arguments that require us to believe that some decision will likely lead to incremental changes for the worse without sufficient evidence for us to accept that the changes are likely to actually happen. For example, some people argue that we shouldn’t legalize same-sex marriage because that would likely lead to marriages between brothers and sisters, and eventually it would lead to marriages between humans and nonhuman animals.
slow track quasi-realism – An attempt to make sense out of moral language (such as language involving moral facts, moral arguments, and moral truth) without endorsing moral realism by explaining how various particular moral statements can be coherent without moral realism. Even so, slow track quasi-realism does not require us to make the assertion that all moral language is perfectly consistent. “Slow track quasi-realism” is often contrasted with “fast track quasi-realism.” See “quasi-realism” for more information.
social construct – Something that exists because of collective attitudes and agreement. For example, money is a social construct that would not exist without people having certain collective attitudes.
social construction – The ability of collective attitudes and actions to create something. For example, our collective attitudes and actions create money, language, and the Presidency of the USA. These things would stop existing if we no longer believed in them.
social contract – The implicit mutual agreement or common acceptance of laws or ethical principles. A social contract does not have to be consciously agreed-upon. It’s an explanation for the legitimacy of laws or ethical principles insofar as we prefer to have them given the choice (and we could rationally agree to them).
social contract theory – (1) A theory that explains the legitimacy of laws or ethical principles in terms of a “social contract.” (2) The view that ethics originates from a social contract. Perhaps what’s morally right and wrong is based on the social contract (what people agree is right or wrong within their society).
social convention – See “convention.”
social darwinism – The view that people should fight for survival through competition, such as through free-market capitalism. It is thought that those who do well in society (e.g. by making lots of money) are superior and deserve to do well, but that those who don’t do well deserve not to do well. Social darwinists believe that helping failing groups (e.g. poor people) will help keep those groups around when it would be better to let the groups die out.
social progress – For a culture to be improved through changes in political institutions, economic systems, education, technology, or some other cultural factor. Technological improvement is perhaps the least controversial form of social progress. Also see “sociocultural evolution” for more information.
social reality – Reality that exists because of the collective attitudes and actions of many people. For example, money, language, and the Presidency of the USA only exist because of the attitudes and actions of people. These things would stop existing if our attitudes and actions were changed in certain ways. See “institutional fact” for more information.
socialism – (1) An economy where the means of production (factories and natural resources) are publicly owned rather than privately owned. See “communism” for more information. (2) An economy that resembles communism to some degree, or that has more social programs than usual. This is also often called a “mixed system” (that has elements of both communism and capitalism).
sociocultural evolution – The view that people continue to find ways to adapt to their environment using technology, political systems, laws, improved education, and other cultural factors. See “social progress” for more information.
socratic dialectic – See “dialectic.”
soft atheism – To not believe in gods without believing gods don’t exist. Soft atheism is a form of indecision—to neither believe in gods nor to believe they don’t exist. “Soft atheism” is contrasted with “hard atheism.”
soft determinism – The view that the universe is deterministic and that people have free will. Soft determinists are compatibilists, but not all compatibilists are soft determinists.
solipsism – The view that one’s mind is the only thing that exists. All other people and external objects are merely illusions or part of a dream.
sophism – A fallacious argument, generally used to manipulate or deceive others. Generally refers to “informal fallacies.”
sophist – (1) “Wise person.” (2) A rhetoric teacher from ancient Greece. Some of those teachers traveled to other countries, and questioned the taboos and cultural beliefs of the Greeks because those taboos and cultural beliefs were not shared by everyone in other countries. (3) Someone who is willing to use fallacious reasoning to manipulate the beliefs of other people. This sense of “sophist” is often contrasted with “philosopher.”
sophistry – Using nonrational argumentation that generally contains errors (flaws of reasoning). Sophistry generally refers to the manipulative use of “informal fallacies.”
sound argument – An argument that’s valid and has true premises. For example, consider the following sound argument—“If all dogs are mammals, then all dogs are animals. All dogs are mammals. Therefore, all dogs are animals.”
sound logical system – A logical system is sound if every statement it can prove using the axioms and rules of inference is a tautology (a logical truth). See “tautology” for more information.
soundness – See “sound argument” or “sound logical system.”
spatial parts – Physical parts of an object, such as molecules, hairs, or teeth. “Spatial parts” can be contrasted with “temporal parts.”
speech act – A communicative action using language. For example, commanding, requesting, or asking.
speciesism – Prejudice or bias against other species—humans would be speciesists for being prejudiced or biased against nonhuman animals. According to Peter Singer, speciesism refers to the bias of those who think that their own species is superior without any characteristic being the reason for that superiority (e.g. higher intelligence). If Singer is correct, then it could be morally right for one species to generally be treated better than other species, and it could also be morally right to treat some members of that species worse because they lack certain positive characteristics.
spillover – A synonym for “externalities.”
spooky – Something is spooky if it is mysterious, supernatural, or other-worldly. We have a view of the world full of atoms and energy, and anything that isn’t explained by physical science is going to be regarded with a skeptical attitude by many philosophers.
spurious accuracy – A synonym for “overprecision.”
square of opposition – A visual aid used in logic to derive various logical relations between categories and quantifiers. For example, the square shows that if we can know that not all x are y, then we know that there is at least one x that is not a y. If we know that not all animals are dogs, then we know that there is at least one animal that is not a dog.
state-by-state dominance – A synonym for “statewise dominance.”
state of affairs – A situation or state of reality. For example, the state of affairs of dropping an object while standing on the Earth will lead to a state of affairs consisting of the object falling to the ground.
statement – Classically defined as a sentence that’s true or false. However, some philosophers argue that a statement could have some other truth value, such as indeterminate (i.e. neither true nor false). For example, “this sentence is false” might be indeterminate.
statement letter – See “propositional letter.”
statement variable – See “propositional variable.”
statewise dominance – The property of a decision that can be said to be “superior” to another based on the decision-maker’s preferences: in every possible state of the world, its outcome is at least as preferable as the other decision’s outcome, and in at least one state its outcome is strictly more preferable. See “stochastic dominance” for more information.
stipulative definition – A definition used to clarify what is meant by a term in a specific context. Stipulative definitions are often given to avoid the ambiguity or vagueness words have in common usage that would make communication more difficult. “Stipulative definitions” can be contrasted with “lexical definitions.”
stochastic – Regarding probability calculus. Stochastic systems have predictable and unpredictable elements that can be taken to be part of a probability distribution. For example, a sequence of coin flips can be modeled as a stochastic process. See “probability calculus” and “probability distribution” for more information.
stochastic dominance – The property of a decision that can be said to be “superior” based on the decision-maker’s preferences and the odds of leading to a preferable outcome. For example, a decision to eat food rather than starve to death has stochastic dominance assuming that the decision-maker would prefer to live and avoid pain.
stoicism – The philosophy of the Stoics. They thought virtue was the only good, that the virtuous are happy, that it’s virtuous to embrace reality rather than condemn it, that the universe is entirely physical, and that a pantheistic deity assures us that everything that happens is part of a divine plan.
straw man – A fallacious form of argument committed when we misrepresent another person’s arguments or beliefs in order to convince people that the arguments or beliefs are less reasonable than they really are. For example, Andrea might claim that “stealing is generally wrong,” and Charles might then reply, “No. Andrea wants us to believe that stealing is always wrong, but sometimes stealing might be necessary for survival.” Straw man arguments are not “charitable” to another person’s arguments and beliefs—they fail to present them as being as rationally defensible as they really are.
strong A.I. (artificial intelligence) – A computer that has a mind of its own that is similar to the mind of a person.
strong argument – An inductive argument that is unlikely to have true premises and a false conclusion at the same time. Such an argument is thought to be a good reason to believe the conclusion to be true as long as we assume the premises are true. For example, “Half the people who had skin cancer over the last one-hundred years were given Drug X and their cancer was cured, and no one was cured who wasn’t given Drug X. Therefore, Drug X is likely a cure for skin cancer.” “Strong arguments” are sometimes contrasted with “cogent arguments.”
strong atheism – A synonym for “hard atheism.”
strong conclusion – An ambitious conclusion. Strong conclusions require more evidence than weak conclusions. For example, consider the following argument—“If objects fall whenever we drop them, then the best explanation is that invisible fairies move objects in a downward direction. Objects fall whenever we drop them. Therefore, the best explanation is that invisible fairies move objects in a downward direction.” In this case the conclusion is too strong and we should present better evidence for it or not argue for it at all.
structuralism – (1) In philosophy of mathematics, structuralism refers to the view that the meaning of mathematical objects is exhausted by the place each object has within a mathematical system. For example, the number one can be defined as the natural number after zero. Structuralism is a form of “mathematical realism.” (2) In literary theory, structuralism refers to an attempt to introduce rational criteria for literary analysis. Structuralism also refers to the view that there is a formal meta-language that can help us understand all languages. (3) In philosophy of science, structuralism refers to the view that we should translate theories of physics into formal systems.
suberogatory – Actions or beliefs that are inferior to alternatives (or somewhat harmful), but are permissible. Suberogatory beliefs are compatible with rational requirements or normative epistemic constraints; and suberogatory actions are inferior to alternatives (or somewhat bad), but are compatible with moral requirements. For example, being rude is not generally serious enough to be “morally wrong,” but it is suberogatory. “Suberogatory” can be contrasted with “supererogatory.”
subject term – A synonym for “minor term.”
subjective certainty – A synonym for “psychological certainty.”
subjective idealism – The view that there is no material substance and that external reality exists only in our mind(s). For example, George Berkeley argued that only minds exist and that all of our experiences of the external world are caused by God. We all live in a shared dream world with predictable laws of nature.
subjective ought – What we ought to do with consideration of the knowledge of the person who will make a moral decision. What we subjectively ought to do is based on what is reasonable for us to do given our limited understanding of what will happen. For example, some utilitarians say we ought to do whatever we have reason to think will likely maximize happiness. We might say that a person who gives food to a charity is doing what she ought to do as long as it was very likely to help people and very unlikely to harm them, even if many of the people who eat the food have an unexpected allergic reaction. “Subjective ought” can be contrasted with “objective ought.”
subjective reason – See “agent-relative reason.”
subjective right and wrong – What is right or wrong with consideration of the knowledge of the person who will make a moral decision. Subjective right and wrong involves proper and improper moral reasoning. For example, we might say that a person ought not buy a lottery ticket because that person has no reason to expect to win, even if she really does buy one and win. We might say, “You won, but you had no reason to think you’d win. You just got lucky.” “Subjective right and wrong” can be contrasted with “objective right and wrong.”
subjectivism – A view that moral judgments refer to subjective states. For example, “stealing is wrong” would be true if the person who says it hates stealing, but it would be false if the person who states it likes stealing. Subjectivism has been criticized for being counterintuitive insofar as people who disagree about what’s morally right or wrong do not think they are merely stating their subjective states. When we give arguments for a moral position, we often think other people should agree with us because morality is “not just a matter of taste.” If subjectivism is true, then moral disagreement would be impossible, and moral justification would plausibly be irrational.
subjectivity – See “ontological subjectivity” and “epistemic subjectivity.”
subsentential – Something relating to parts of sentences, such as predicate logic.
subsentential logic – A synonym for “predicate logic.”
substance – (1) A type of foundational domain of reality. For example, materialists think matter is the only substance, and dualists think that both matter and mind are substances. (2) The most basic kinds of stuff that don’t require anything else to exist. For example, according to Rene Descartes, there are two different substances: matter and mind.
substance dualism – See “dualism.”
substantive – Non-tautological. For example, saying that rocks fall because of gravity is a substantive claim about reality. See “tautology” for more information.
sufficient condition – A condition that assures us that something else will happen or exist. For example, hitting a fly with a hammer is sufficient to kill the fly. Sufficient conditions can be contrasted with “necessary conditions.”
sufficient reason – (1) A justification that is good enough to make a conclusion reasonably believed. (2) A state of affairs that has a sufficient cause or explanation for existing and being the way it is.
sui generis – Latin for “of its own kind” or “unique in its characteristics.” A separate category that is different from all other categories. For example, some philosophers think minds are sui generis and can’t be reduced to brain activity. This is related to the concept of being “irreducible” because anything unique in this sense can’t be fully understood in terms of its parts.
summum bonum – Latin for “the supreme or highest good.”
super man – See “overman.”
supererogatory – Actions that are good or praiseworthy, but are not morally required. They are “above the call of duty.” “Supererogatory” actions can be contrasted with “obligatory” and “suberogatory” actions.
supervenience – Something supervenes on something else if underlying conditions perfectly correlate with it. A state of affairs (A) supervenes on another state of affairs (B) if any change in (A) requires a change in (B). The mind seems to supervene on the brain, and morality seems to supervene on physical and psychological facts. Any change in the mind seems to require a change in the brain, and any change in the moral status of a situation (what one ought to do) seems to require different circumstances involving physical and psychological facts.
supporting argument – A synonym for “positive argument.”
suppressed conclusion – A synonym for “unstated conclusion.”
suppressed premise – A synonym for “unstated premise.”
syllogism – (1) An argument consisting of two premises and a conclusion. (2) A synonym for “deductive argument.”
symbolic logic – A formal logical system devoid of content. Symbols are used to replace content and logical connectives. For example, “if all men are mortal, then Socrates is mortal” could be written as “A → B.” In this case “A” stands for “all men are mortal,” “B” stands for “Socrates is mortal,” and “→” stands for “implies.” See “formal logic” and “logical connectives” for more information.
syntactic completeness – A logical system is syntactically complete if and only if adding any formula that the system cannot prove to it as an axiom would produce at least one contradiction.
syntactical variable – A synonym for “metavariable.”
syntax – The arrangement of words or symbols. Syntax can involve rules and symbol manipulation. For example, “Like chocolate do I” would be an improper way to say “I like chocolate” in the English language. Logical form has syntax, but lacks semantics. “Syntax” is often contrasted with “semantics.”
synthesis – (1) A combination of things that become something new. For example, the synthesis of copper and tin creates bronze. (2) In dialectic, it refers to a new thesis (hypothesis or mode of being) proposed to avoid the pitfalls of the old thesis. It is considered to be a “synthesis” as long as the new thesis has similarities to the old thesis because it’s then a new and improved theory based on both the old thesis and its antithesis.
synthetic – Statements that cannot be true by definition. Instead, they can be true because of how they relate to something other than their meaning, such as how they relate to the world. For example, “humans are mammals” is synthetic and can be justified through empirical science. “Synthetic” is the opposite of “analytic.”
synthetic a priori – Statements that are not true by definition or entirely justified by observation. David Hume seemed to overlook this category when he divided all knowledge into matters of facts and relations of ideas. Immanuel Kant coined this term and thought that “space and time exist” is an example of a synthetic a priori statement insofar as we can know that human beings require concepts of space and time in order to observe anything. However, Kant did not think we could know if space and time refers to anything other than as something that’s part of our experience.
tabula rasa – Latin for “blank slate.” Refers to the hypothesis that people are born not knowing anything.
tacit knowledge – Knowledge that is not stated or consciously understood, but is unconsciously understood, intuitively held, or implied by one’s other knowledge. Tacit knowledge is often attained or held without the awareness of the person who has it. “Tacit knowledge” can be contrasted with “explicit knowledge.”
tautology – (1) A statement with a logical form that guarantees that it is true. The statement “Socrates was a man or he wasn’t a man” is true no matter what because it has the logical form “a or not-a.” (“a” is any proposition.) (2) A rule of replacement that has two forms: (a) “a” and “a and a” both mean the same thing. (b) “a” and “a and/or a” both mean the same thing. (“a” stands for any proposition.) For example, “Socrates is a man” means the same thing as “Socrates is either a man or a man.”
technê – Greek for “know-how, craft, or skill.”
teleology – A system or view that’s goal-oriented. Aristotelian teleology is the view that things in nature have a purpose and that they are good if they achieve their purpose well. Utilitarianism is also considered by many to be “teleological” because it posits that maximizing happiness is the appropriate goal for people.
temporal interpretation of modality – The view that “necessity” and “possibility” are based on time. For something to be necessary is for it to be true of all times, and for something to be possible is for it to be true at some time. It’s necessary that dogs are mammals because dogs are mammals at all times, and it’s possible for it to rain because there is at least one time that it rains. See “modality” and “truth conditions” for more information. The “temporal interpretation of modality” can be contrasted with the “world’s interpretation of modality.”
temporal parts – Time-dependent parts of a persisting thing often thought of as time-slices. The view that persisting things have temporal parts is based on the assumption that a persisting thing only exists in part at any given time-slice. We can talk about the temporal parts of a person in terms of the person yesterday, the person today, and the person tomorrow; and the person is thought to only exist in her entirety given every moment of her existence. We can talk about a person in any given time slice (such as August 3, 10:30 am), but that is not the entirety of the person. One reason some philosophers believe in temporal parts is because it can explain how an object can have two conflicting properties, such as how a single apple can be both green (while growing) and red (when ripe). If it has temporal parts, then we can say it is green in an earlier time-slice, and red in a later time-slice.
temporal modality – What makes a proposition true or false based on whether it is being applied to the past, present, or future. For example, dinosaurs existed in the past, but they do not presently exist.
term – See “terminology.”
terminology – (1) A word or phrase used to refer to something. For example, “critical thinking” is a term consisting of two words used to refer to a single concept. (2) A collection of words or phrases used to refer to something. For example, the technical concepts philosophers discuss involves a lot of terminology.
testability – The property of a hypothesis or theory that makes it possible to produce experiments that can reliably provide counterevidence against the theory. It can be said that something is testable in this sense if (a) there are certain events that could occur that would be incompatible with the theory and (b) the incompatible events could be produced in repeatable experiments.
testimonial evidence – The experience of a person used as evidence for something. Testimonial evidence is often fallacious, but it can count for something and be used when we use inductive reasoning. For example, if hundreds of people all find that a drug effectively works for them and no one found that the drug was ineffective, then that would be evidence that the drug is really effective. See “anecdotal evidence.”
theism – The view that one or more personal gods exist.
theocracy – A government dominated or ruled by a theistic religious group. The rulers of theocracies claim to know the mind and will of one or more gods to legitimize their power.
theological noncognitivism – The view that judgments concerning gods are neither true nor false. Theological noncognitivists might think that there is no meaningful concept of “gods.” In that case statements about gods would be nonsense. For example, some philosophers have argued that it’s impossible to prove gods exist and that only theories we can prove to be true are meaningful. (See “verificationism” for more information.)
theology – The systematic study of gods. Also see “natural theology” and “revealed theology.”
theorem – A statement that we can know is true because of the axioms and rules of inference of a logical system. For example, consider a logical system with “a or not-a” as an axiom and the rule of inference that “a implies a or b.” The following is a proof in that system—“A or not-A. Therefore, A or not-A, or B.” In this case “A or not-A, or B” is a theorem. See “derivation,” “axioms,” “rules of inference,” and “logical system” for more information.
theoretical knowledge – See “theoretical wisdom.”
theoretical wisdom – The attainment of knowledge concerning the nature of reality and logic. Aristotle contrasts “theoretical wisdom” with “practical wisdom.”
theoretical virtues – The positive characteristics that help justify hypotheses or theories, such as simplicity and comprehensiveness. Theories that have greater theoretical virtues than alternatives are “more justified” than the alternatives.
theory – A comprehensive explanation for various phenomena. A hypothesis is not necessarily different from a “theory,” but the term ‘theory’ tends to be used to refer to hypotheses that have been systematically defended and tested without facing strong counter-evidence. In science, theories are taken as being our best explanations that should be believed and relied upon for practical everyday life. However, philosophers are often unable to say when a philosophical theory is the most justified and generally don’t insist that a philosophical theory should be accepted by everyone.
theory-laden observation – Observations are theory-laden when they are influenced by assumptions or interpretation. For example, visual experience is a collection of color blotches, but we interpret the experience as a world extended in space and time.
thesis – (1) An argumentative essay. For example, a doctoral thesis. (2) The conclusions made within an argumentative essay. For example, Henry David Thoreau concluded that people should stop paying their taxes in certain situations in his essay “Civil Disobedience.” (3) The claim or solution made within a dialectic. For example, capitalism could be considered to be a thesis used to live our lives with greater freedom, according to a Hegelian dialectic. See “dialectic” for more information.
thick concepts – Concepts that are fleshed out and involve a detailed understanding, such as deception and the veil of ignorance. These concepts are not as indispensable to us as general concepts that are less fleshed out, such as belief and desire. “Thick concepts” can be contrasted with “thin concepts.”
thin concepts – Concepts that are very general and perhaps even indispensable, such as right, wrong, belief, and desire. These concepts relate to our experiences and can be explained in further detail by competing interpretations or theories. “Thin concepts” can be contrasted with “thick concepts.”
thing in itself – Reality or a part of reality as it actually exists apart from flawed interpretation or perception.
third-person point of view – What it’s like to understand or experience the behavior and thoughts of other people as an observer. For example, to view another person eating breakfast is done from the third-person point of view. The “third-person point of view” is often contrasted with the “first-person point of view.”
token – (1) A particular concrete instance or manifestation. For example, some philosophers argued that every token mental event (such as a particular pain) is identical to a token physical event (such as something happening in the brain), but pain in general is not necessarily caused by any generalized type of physical event. For example, the same experience of pain might also be identical to some mechanical activity of a sentient robot. “Tokens” are often contrasted with “types.” (2) A symbolic object or action. For example, coins can be given out to be used as money at a carnival. (3) To have members of a different group just to give the impression of inclusiveness. For example, to have a black man play the part of an expendable character in a horror movie. The black man is often the first of the main characters to die in a horror movie.
totalitarianism – A political system where people have little to no freedom, and the government micromanages many details of the lives of citizens.
thought experiment – An imaginary situation used to clarify a hypothesis or support a particular belief. For example, someone might say, “Hurting people is never wrong.” Another person might reply with a thought experiment—“Imagine someone decided to beat you up just because you made her angry. Don’t you think that would be wrong?”
trans-world identity – For something to exist in multiple worlds. Some philosophers talk about there being multiple possible worlds. For example, they might say that it was possible for George Washington to become the King of the United States because there’s a possible world where he became the king. In this case we could say that George Washington has trans-world identity because he exists in multiple possible worlds. “Trans-world identity” can be contrasted to “world-bound individuals.” See “modality” and “possible world” for more information.
transcendence – (1) The property of being beyond or outside. (2) Being beyond and independent of the physical universe. Some people believe God is transcendent. “Transcendence” of this type can be contrasted with “immanence.”
transcendental apperception – According to Immanuel Kant, this is what makes experience possible. It is the unity and identity of the mind—our ability to have a single point of view or field of experience. Without a transcendental apperception, our experiences would be free-floating, lack continuity, and lack unification. “Transcendental apperception” can be contrasted with “empirical apperception.”
transcendental argument – An argument concerning what is required (or might be required) for a state of affairs. Transcendental arguments are often arguments made involving the necessary condition for the possibility of something else. For example, visual experience seems to require the assumption that an external world exists that can be seen (or we would only see blotches of colors).
transcendentalism – A literary, political, and philosophical movement that grew out of the Unitarian church. Transcendentalists often criticized conformity, criticized slavery, and encouraged solitude in the wild outdoors.
translation – (1) The restatement of a statement in natural language to a statement of formal logic. For example, “either George Washington was a dog or a mammal” can be translated into propositional logic as “P ∨ Q,” where “P” stands for “George Washington was a dog” and “Q” stands for “George Washington was a mammal” and “∨” is a logical connective meaning “and/or.” See “logical form” for more information. (2) In ordinary language, translation refers to a restatement of a sentence (or set of sentences). For example, the sentence “Il pleut” can be translated from French to English as “It’s raining.”
transparency – (1) The property of an epistemic state that guarantees that we know when the epistemic state exists. Weakly transparent epistemic states guarantee that we know we have them, and strongly transparent epistemic states guarantee that we know when we have them and when we don’t. For example, pain is a plausible example of a strongly transparent epistemic state. (2) The property of being in the open without pretense. (3) The property of being see-through. For example, glass windows are usually see-through.
transposition – A rule of replacement that states that “if a, then b” means the same thing as “if not-b, then not-a.” (“a” and “b” stand for any two propositions.) For example, “if Socrates is a dog, then Socrates is a mammal” means the same thing as “if Socrates is not a mammal, then Socrates is not a dog.”
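A brief, illustrative Python check that a conditional and its transposition agree under every assignment of truth values (the names are invented for the example):

```python
# A small check that "if a, then b" and "if not-b, then not-a" have the same
# truth value under every assignment -- i.e. they are logically equivalent.
from itertools import product

conditional = lambda a, b: (not a) or b               # "if a, then b"
transposed  = lambda a, b: (not (not b)) or (not a)   # "if not-b, then not-a"

print(all(conditional(a, b) == transposed(a, b)
          for a, b in product([True, False], repeat=2)))  # True
```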
Trigger’s Broom – A broom used in a thought experiment in which all the parts of the broom have been replaced. This encourages us to ask the question, “Is it still the same broom?”
true – The property that some propositions have that makes them based on reality. According to Aristotle, a statement is true if it corresponds with reality. For example, “Socrates was a man” is true. However, there might be other uses of the word ‘true,’ such as, “it is true that the pawn can move two spaces forward when it is first moved in a game of Chess.” Many such “truths” are based on agreements or human constructions and are not factual in the usual sense of the word. See “correspondence theory of truth” and “deflationary theory of truth” for more information. “True” is the opposite of “false.”
truth conditions – The conditions that make a statement true. For example, the truth condition of “the cat is on the mat” is a cat on a mat. The statement is true if and only if a cat is on a mat. Sometimes truth conditions are controversial, such as when we say it’s “necessary that people are rational animals.” It could be true if and only if people are rational animals in all times, or in all possible worlds, or perhaps given some other condition.
truth-functional – Complex propositions using logical connectives that can be determined to be true or false based on the truth-values of the simple propositions involved. For example, “dogs are mammals, and cats aren’t reptiles” is a complex proposition that contains two simple propositions: (a) “Dogs are mammals” and (b) “cats aren’t reptiles.” In this case we can determine the truth by knowing the truth of the simple propositions. Both propositions are true and form a single conjunction, so the complex proposition is also true.
truth preservation – The property of reasoning that can’t have true premises and false conclusions. See “valid” for more information.
truth table – A visual display used in formal logic that has columns and rows that are used to determine which propositions are true or false, what arguments are valid, what propositions are equivalent, etc. For example, the truth table for “p and/or q” is used to show that it’s true unless both p and q are false at the same time. (“George Washington is either a person and/or a dog” because at least one of those options are true.) The table has one row for each combination of truth values: when p and q are both true, “p and/or q” is true; when p is true and q is false, it is true; when p is false and q is true, it is true; and when p and q are both false, it is false.
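A small, illustrative Python sketch that prints this truth table (the formatting is arbitrary and only meant to show the four rows):

```python
# Prints the truth table for "p and/or q" (inclusive or), one row per
# combination of truth values.
from itertools import product

print("p      q      p or q")
for p, q in product([True, False], repeat=2):
    print(f"{p!s:<6} {q!s:<6} {p or q}")
```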
truth tree – A visual-oriented method used in formal logic to determine if a set of propositions are contradictory or consistent; or if an argument is valid or invalid; etc. For example, consider the argument form “if P, then Q; if Q, then R; therefore, if P, then R.” A truth tree for this argument form starts by assuming that the premises are true and the conclusion is false; every branch of the resulting tree closes with a contradiction, which proves that the argument form is valid.
truth-values – Values involving the accuracy or factual nature of a proposition, statement, or belief. For example, true and false. There could be others, but those are the only two non-controversial truth-values.
tu quoque – Latin for “you too.” Often refers to a type of fallacious argument that implies that someone’s argument should be dismissed because the person who made the argument is a hypocrite. For example, a smoker might argue that cigarettes are unhealthy and someone else might reply, “But you smoke, so smoking is obviously not unhealthy!” See “ad hominem” for more information.
Turing machine – A machine that follows rules that cause it to make certain motions based on symbols. Turing machines are capable of using finite formal systems of logic and mathematics. Turing machines were originally hypothetical devices, but computers could be considered to be a type of Turing machine.
Turing test – A test used to examine the ability of a machine to speak natural language within a conversation. Machines that speak natural language within conversations in exactly the same way as real human beings pass the Turing test. Any machine that passes the Turing test could be said to adequately simulate human behavior in regards to its ability to simulate a conversation in natural language. There could be tests similar to the Turing test that tests a machine’s ability to simulate other types of human behavior.
Twin Earth – A hypothetical world or planet exactly like ours in almost every respect. For example, Hilary Putnam invented this concept to introduce a world with a substance exactly like water that has a different chemical composition. See “quater” and “possible world” for more information.
type – A kind of thing or a general category. For example, some philosophers argued that every mental event type (such as pain) is identical to a physical event type (such as brainwaves). This would imply that the same experience of pain could never be identical to some mechanical activity of a sentient robot—it would therefore be impossible to have a robot with the same thoughts and feelings as a human being. “Types” are sometimes contrasted with “tokens.”
Übermensch – German for “overman.”
underdetermination – The status of having insufficient evidence to know what we should believe. For example, taking a pill and being cured of an illness shortly afterward could be evidence that the pill cured the illness, but it’s also possible that the person would be cured of the illness regardless of taking the pill. Fallacies related to underdetermination include “hasty generalization,” “cum hoc ergo propter hoc” and “anecdotal evidence.”
undistributed middle – A fallacious categorical syllogism committed when the middle term is neither distributed in the major premise nor the minor premise. For example, “All dogs are mammals. All animals are mammals. Therefore, all dogs are animals.” See “distribution” and “middle term” for more information.
universal – (1) What’s true for everyone or everything relevant. For example, it’s a universal fact of humans that they are all mammals. (2) A concept can be said to be “a universal” when it refers to a type of thing (e.g. goodness). There were realists who thought universals existed apart from our generalizations, conceptualists who thought universals existed only in the mind, and nominalists who thought that universals were merely convenient “names” we give to describe various particular objects. The opposite of a “universal” is a “particular.”
universal quantifier – A term or symbol used to say something about an entire class. For example, “all” and “every” are universal quantifiers used in ordinary language. “All horses are mammals” means that if a horse exists, then it is a mammal. This statement does not say that any horses actually exist. The universal quantifier in symbolic logic is “∀.” See “quantifier” for more information.
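As a rough illustration over a finite domain, universal quantification behaves like Python’s all() applied to a conditional; the sample data below is invented purely for the example:

```python
# "All horses are mammals" treated as "for every x, if x is a horse then x is a mammal."
animals = [
    {"name": "Epona", "is_horse": True,  "is_mammal": True},
    {"name": "Rex",   "is_horse": False, "is_mammal": True},
    {"name": "Gecko", "is_horse": False, "is_mammal": False},
]

all_horses_are_mammals = all(
    (not x["is_horse"]) or x["is_mammal"]   # the conditional "horse implies mammal"
    for x in animals
)
print(all_horses_are_mammals)  # True -- and it would also be True if the list contained no horses at all
```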
Universal Reason – The mental or intelligent element of the universe conceived as a deity by the Stoics. The Stoics saw the entire universe as a god—matter is the body and Universal Reason is the mind. They believed that Universal Reason has a divine plan and determines that everything that happens in the universe happens for a good reason.
universal truth – A truth that always applies, such as the truth of the law of non-contradiction.
universalizability – Something applicable to everyone. Immanuel Kant thinks that moral principles must be universalizable in that everyone ought only act for a reason that one could will for someone else who is in the same situation. Universalizability seems necessary to avoid hypocrisy. For example, it would be wrong for us to attack another person just because she makes us angry because it would be wrong for other people to attack us just because we make them angry.
univocal – Something that completely lacks ambiguity and only has one possible meaning. “1+1=2” is a plausible example. “Univocal” is often contrasted with “equivocal.”
unstated assumption – An assumption of an argument that is not explicitly stated, but is implied or required for the argument to be rationally persuasive. The assumption could be a premise or conclusion. For example, consider the argument “the death penalty is immoral because it kills people.” This argument requires a hidden assumption—perhaps that “it’s always immoral to kill people.”
unstated conclusion – A conclusion of an argument that is not explicitly stated, but is implied. For example, consider the argument “the death penalty kills people and it’s immoral to kill people.” This argument implies the unstated conclusion—that the death penalty is immoral.
unstated premise – A premise of an argument that is not explicitly stated, but is implied or required for the argument to be rationally persuasive. For example, consider the argument “All humans are mortal because they’re mammals.” This argument requires an unstated premise—that “all mammals are mortal.”
useful fictions – Nonfactual concepts used for thought experiments or philosophical theories. For example, social contracts, the veil of ignorance, quater, grue, or the impartial spectator. These fictions can illuminate various philosophical issues or intuitions. Sometimes they present simplified situations to isolate important distinctions. For example, the concept of quater is of a chemical functionally equivalent to water that’s just like it in every single functional way, but it’s not H2O. Quater illuminates the intuition that the word ‘water’ does not merely refer to a chemical with certain functions because the chemical composition is also important.
usefulness – The importance of something for attaining a goal. See “instrumental value.”
utilitarianism – A moral theory that states that happiness or pleasure is the only thing with positive intrinsic value, and suffering or pain is the only thing with intrinsic disvalue. Right actions are determined by what action produces the greatest good compared to the alternatives, and all actions are wrong to the degree that they fail to produce the greatest good.
utility – (1) Relating to causing happiness. For example, the “principle of utility.” (2) The degree something causes pleasure or happiness, and reduces pain or suffering. An action that has the most utility causes more happiness (or pleasure) than the alternatives. See the “greatest happiness principle” for more information. (3) The property of being useful or of leading to a preferable state of affairs. See “utility function” for more information.
utility function – How much an agent values the outcome of various decisions she has to choose from based on incomplete information concerning how the world is or will be. For example, a person might have to choose whether or not to wear sunscreen depending on how long she expects to spend in the sun while spending time at the beach. Perhaps if she spends at least three hours at the beach, then odds are that she will get a sunburn unless she wears sunscreen. However, wearing sunscreen could be a waste of time and resources if she only spends one hour at the beach.
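A toy sketch of the sunscreen example as an expected-utility calculation, with invented probabilities and utility numbers used purely for illustration:

```python
# All probabilities and utility values below are made up for the example.
p_long_stay = 0.7          # assumed chance of spending three or more hours in the sun

utilities = {
    ("sunscreen",    "long stay"):   8,   # protected, minor hassle
    ("sunscreen",    "short stay"):  5,   # hassle for nothing
    ("no sunscreen", "long stay"): -10,   # sunburn
    ("no sunscreen", "short stay"): 10,   # no hassle, no burn
}

def expected_utility(choice):
    return (p_long_stay * utilities[(choice, "long stay")]
            + (1 - p_long_stay) * utilities[(choice, "short stay")])

for choice in ("sunscreen", "no sunscreen"):
    print(choice, expected_utility(choice))
# With these made-up numbers, wearing sunscreen has the higher expected utility.
```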
utility theory – A view concerning how we should make decisions based on how much we value the outcomes of various decisions we could make. Decisions that are likely to lead to a desirable outcome could be considered to be “rational” and those that are unlikely to could be said to be “irrational.” For example, trying to eat food by going to sleep would generally be irrational because the odds of it leading to a desired outcome are low.
vagueness – Words and phrases are vague when it’s not clear where the boundaries are. For example, it’s not clear how many hairs can be on a person’s head for the person to be considered to be “bald.” Vagueness often makes it hard for us to know where to “draw the line.” “Vagueness” is often contrasted with “ambiguity.”
valid argument – An argument is valid when it has a logical form that assures us that true premises guarantee the truth of the conclusion. It is impossible for a valid argument to have true premises and a false conclusion at the same time. For example, consider the following valid argument—“If Socrates is a dog, then Socrates is a mammal. Socrates is a dog. Therefore, Socrates is a mammal.” The argument has the valid argument form “If A, then B; A; therefore, B.” “Valid” is the opposite of “invalid.” See “logical form” for more information.
valid formula – A statement that is true under all interpretations. For example, “A or not-A” is true no matter what proposition “A” stands for. If “A” stands for “nothing exists,” then the statement is “either nothing exists or it’s not the case that nothing exists.” In propositional logic, “valid formula” is a synonym for “tautology.”
valid logical system – A logical system that has valid rules of inference. If a logical system is valid, then it’s impossible for true premises to be used with the rules of inference to prove a false conclusion. See “rules of inference” for more information.
validity – (1) See “valid argument,” “valid formula,” or “valid logical system.” (2) Sometimes strong inductive arguments are said to be “inductively valid.” See “strong argument” for more information.
value – What we describe as good or bad. Positive value is also known as “goodness.” See “intrinsic value,” “extrinsic value,” “instrumental value,” and “inherent value.”
variable – (1) See “propositional variable” or “predicate variable.” (2) A symbol used to represent something else, or a symbol used to represent a range of possible things. For example, “x + 3 = y” has two variables that can represent a range of possible things (“x” and “y.”)
veil of ignorance – According to John Rawls, the best way to know which principles of justice are the most justified would require people to be in a position with full scientific knowledge but without knowing who they will be in society (rich, poor, women, men, etc.). We are to imagine that they would be risk-averse and would not grant people of any particular group unfair advantages.
Venn Diagram – A visual representation of a categorical syllogism that is generally used as a tool to determine if it’s logically valid or invalid. For example, consider the argument form, “All A are B. All B are C. Therefore, All A are C.” A Venn Diagram shows this argument form to be valid because, once the premises are diagrammed, A is shaded in everywhere other than where A overlaps with C (which is a representation of the conclusion).
verificationism – (1) The “verification theory of meaning.” The view that statements are meaningless unless it is possible to verify that they are true. For example, if we can’t prove that creationism is true, then it would be considered to be meaningless. (2) The “verification theory of justification.” The view that statements are unjustified unless we can somehow verify that they are true. For example, the statement “induction is reliable” would be said to be unjustified because it’s plausible that we can’t verify that it’s true. See the “problem of induction” for more information.
vicious circularity – An objectionable type of circular reasoning or argumentation. For example, “Socrates is a man because Socrates is a man” is a viciously circular argument. However, “coherentism” is the view that circular types of justification are not vicious as long as enough observations and/or beliefs are involved. Perhaps the more observations and beliefs are involved, the less vicious circular reasoning is. For example, an argument of the form “a because b, b because c, and c because a” might be less vicious than an argument with the form “a because b, and b because a.”
vicious regress – An objectionable infinite regress. Consider the view that beliefs are unjustified unless we justify them with an argument (or argument-like reasoning). In that case we will have to justify all our beliefs with arguments that also have premises that must also be justified via argumentation. This view requires that justified beliefs be justified via infinite arguments. This regress could be considered to be “vicious” insofar as the solution of a problem has the same problem it’s supposed to solve. However, “infinitism” is the view that this infinite regress is not vicious.
virtue – A positive characteristic, which is generally discussed in ethics. Courage, moderation, and wisdom are the three most commonly discussed virtues. Some people are also said to be “virtuous” for being “good people.” Virtues can describe traits that make something better. For example, we could talk about “theoretical virtues” that make some theories more justified than others, such as comprehensiveness. Another translation of “virtue” from ancient Greek philosophy is “excellence.”
virtue epistemology – Epistemic theories that (a) focus on normative considerations, such as values, norms, and/or virtues (i.e. positive characteristics) that are appropriately associated with being reasonable; and (b) judge people as the primary bearer of epistemic values (e.g. virtues, rationality, reasonableness, etc.). Virtue epistemologists often talk about “intellectual virtues”—positive characteristics of people that help them reason properly, such as being appropriately open-minded and skeptical. See “virtue responsibilism” and “virtue reliabilism” for more information.
virtue ethics – Ethical theories that primarily focus on virtues, vices, and the type of person we are rather than “right and wrong.” Courage, moderation, and wisdom are virtues that many virtue ethicists discuss.
virtue reliabilism – A type of “virtue epistemology” that views “intellectual virtues” in terms of faculties; such as perception, intuition, and memory.
virtue responsibilism – A type of “virtue epistemology” that views “intellectual virtues” in terms of traits, such as open-mindedness, skepticism, humility, and conscientiousness.
virtuous – Having virtues or exhibiting virtues close to an ideal. See “virtue” for more information.
vice – Negative traits, such as cowardice, addiction, and foolishness. Vices can describe a person’s character traits or objectionable traits that make something else worse. For example, we talk about “vicious circularity” and “vicious regresses.”
vicious – (1) Having vices or exhibiting virtues to a very low degree. See “vice” for more information. (2) Being evil, aggressive, or violent. (3) Severely unpleasant.
will to power – Friedrich Nietzsche’s speculative metaphysics and psychological views that he believes to be compatible with natural science, but using better metaphors. Will to power is an alternative to free will and an alternative to laws of nature. One interpretation of will to power is that instead of claiming that free will can cause our actions in an indeterministic way, we do whatever our internal driving force dictates; and instead of claiming that laws of nature force objects into motion, the internal driving force of each object dictates how it moves. Will to power relates to psychology in that the unifying driving force of people and nonhuman animals is to attain greater power (i.e. personal freedom, self-control, health, strength, and domination over others).
wisdom – The ability to have good judgment. Wisdom is sometimes used to refer to a person’s level of virtue and/or theoretical knowledge. See “practical wisdom” and “theoretical wisdom” for more information.
weak analogy – A fallacy that’s committed when an argument concludes that something is true based on an analogy that is too weak to support the conclusion. For example, swords and smoking can both kill people, but we can’t use that similarity to conclude that cigarette smoke is made of metal (just like swords). That would be a weak analogy. Not all arguments using analogies are fallacious. See “argument from analogy” for more information.
weak argument – Inductive arguments that have conclusions that are too strong given the evidence. The conclusion is not sufficiently supported by the evidence. For example, “three people took a drug last week and didn’t get sick, so the drug probably prevents people from getting sick.” Arguments that are too weak commit the “hasty generalization” fallacy.
weak atheism – A synonym for “soft atheism.”
weak conclusion – A modest conclusion. Weak conclusions require less evidence than strong conclusions. For example, consider the following argument—“If a light goes on at my neighbor’s house, then the best explanation is that a person is in the house. The light went on at my neighbor’s house. Therefore, the best explanation is that a person is in the house.” In this case the conclusion is weak and we would be unreasonable to demand very strong evidence in its favor as a result.
weakness of will – A situation of doing what one knows or believes to be morally wrong (i.e. the wrong thing to do, all things considered). For example, a person might think that stealing one hundred dollars from a friend is the morally wrong thing to do, and do it anyway.
well-formed formula – A properly-stated proposition of formal logic. For example, “a or b” is a well-formed formula, but “a b or” is not. (“a or b” could stand for any either-or statement, such as “either something exists or nothing exists.”)
WFF – See “well-formed formula.”
working hypothesis – A hypothesis that is provisionally accepted and could be rejected when new evidence is presented.
world-bound individuals – For something to only exist in one world. Some philosophers talk about there being multiple possible worlds, but they think each person is world-bound insofar as they can only exist in one world. For example, they might say that it was possible for Thomas Jefferson to become the first President of the United States because there’s a possible world where the person in that world who most resembles Thomas Jefferson became the first President of the United States. That possible world does not contain the actual Thomas Jefferson in it. “Trans-world identity” can be contrasted with “world-bound individuals.” See “modality” and “possible world” for more information.
world of ideas – The realm of the Forms. See “Plato’s Forms” for more information.
worlds interpretation of modality – A view of “necessity” and “possibility” based on worlds. For something to be necessary is for it to be true of all worlds, and for something to be possible is for it to be true at some world. It’s necessary that dogs are mammals because dogs are mammals at all worlds, and it’s possible for it to rain because there’s at least one world where it rains. The “worlds interpretation of modality” can be contrasted with the “temporal interpretation of modality.” See “possible worlds” and “modality” for more information.
worldview – A comprehensive understanding of everything (and how everything relates). Two worldviews could theoretically be equally justified and there might be no way to know which worldview is more accurate. Worldviews are likely influenced by cultures and are often influenced by religions. Worldviews are thought to help us interpret our experiences and influence perception.
worm theory – See “perdurantism.”
wrong – (1) Incorrect or inappropriate as opposed to “right.” For example, people who believe the Earth is flat are wrong. (2) “Morally wrong” as opposed to “morally right.” For example, murder is morally wrong.
youthism – Prejudice against younger people. A form of “ageism.” For example, the view that older people are generally more qualified for a job.
zeitgeist – German for “the spirit of an age.” It’s often used to refer to the biases, expectations, and assumptions of a group of people. Compare “zeitgeist” with “worldview.”
zombie – (1) Something that appears to be a human being or person that behaves exactly as we would expect a thinking person to behave, but actually has no psychological experiences or thoughts whatsoever. For example, a zombie could say, “I love coffee” but can neither taste coffee nor think it loves it. (2) In ordinary language, “zombie” refers to an undead person or walking corpse that either has no mind of its own or has an irresistible impulse to try to eat people. | http://ethicalrealism.wordpress.com/philosophy-dictionary-glossary/ | 13 |
26 | Science Fair Project Encyclopedia
A genetic algorithm (GA) is a heuristic used to find approximate solutions to difficult-to-solve problems through application of the principles of evolutionary biology to computer science. Genetic algorithms use biologically-derived techniques such as inheritance, mutation, natural selection, and recombination (or crossover). Genetic algorithms are a particular class of evolutionary algorithms.
Genetic algorithms are typically implemented as a computer simulation in which a population of abstract representations (called chromosomes) of candidate solutions (called individuals) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but different encodings are also possible. The evolution starts from a population of completely random individuals and happens in generations. In each generation, the fitness of the whole population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), modified (mutated or recombined) to form a new population, which becomes current in the next iteration of the algorithm.
Operation of a GA
The problem to be solved is represented by a list of parameters, called a chromosome or genome, which can be used to drive an evaluation procedure. Chromosomes are typically represented as simple strings of data and instructions, in a manner not unlike instructions for a von Neumann machine, although a wide variety of other data structures for storing chromosomes have also been tested, with varying degrees of success in different problem domains.
Initially several such parameter lists or chromosomes are generated. This may be totally random, or the programmer may seed the gene pool with "hints" to form an initial pool of possible solutions. This is called the first generation pool.
During each successive generation, each organism (or individual) is evaluated, and a value of goodness or fitness is returned by a fitness function. The pool is sorted, with those having better fitness (representing better solutions to the problem) ranked at the top. Notice that "better" in this context is relative, as initial solutions are all likely to be rather poor.
The next step is to generate a second generation pool of organisms, which is done using any or all of the genetic operators: selection, crossover (or recombination), and mutation. A pair of organisms are selected for breeding. Selection is biased towards elements of the initial generation which have better fitness, though it is usually not so biased that poorer elements have no chance to participate, in order to prevent the solution set from converging too early to a sub-optimal or local solution. There are several well-defined organism selection methods; roulette wheel selection and tournament selection are popular methods.
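As an illustration of fitness-biased selection, here is a minimal roulette wheel selection sketch in Python; the function name and the assumption that larger fitness values are better (and non-negative) are ours, not the article's:

    import random

    def roulette_wheel_select(population, fitnesses):
        """Pick one individual with probability proportional to its fitness."""
        total = sum(fitnesses)
        spin = random.uniform(0, total)
        running = 0.0
        for individual, fitness in zip(population, fitnesses):
            running += fitness
            if spin <= running:
                return individual
        return population[-1]  # guard against floating-point round-off

Fitter individuals occupy a larger slice of the "wheel," so they are chosen more often, yet even poor individuals retain some chance of participating.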
Following selection, the crossover (or recombination) operation is performed upon the selected chromosomes. Most genetic algorithms will have a single tweakable probability of crossover (Pc), typically between 0.6 and 1.0, which encodes the probability that two selected organisms will actually breed. A random number between 0 and 1 is generated, and if it falls under the crossover threshold, the organisms are mated; otherwise, they are propagated into the next generation unchanged. Crossover results in two new child chromosomes, which are added to the second generation pool. The chromosomes of the parents are mixed in some way during crossover, typically by simply swapping a portion of the underlying data structure (although other, more complex merging mechanisms have proved useful for certain types of problems.) This process is repeated with different parent organisms until there are an appropriate number of candidate solutions in the second generation pool.
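A sketch of the crossover step just described, using single-point crossover on equal-length bit strings and a tweakable crossover probability Pc (one common variant among many; the details are our choice for illustration):

    import random

    def crossover(parent_a, parent_b, pc=0.7):
        """With probability pc, swap tails at a random cut point; otherwise copy the parents unchanged."""
        if random.random() > pc:
            return parent_a[:], parent_b[:]
        cut = random.randint(1, len(parent_a) - 1)   # cut strictly inside the string
        return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]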
The next step is to mutate the newly created offspring. Typical genetic algorithms have a fixed, very small probability of mutation (Pm) of perhaps 0.01 or less. A random number between 0 and 1 is generated; if it falls within the Pm range, the new child organism's chromosome is randomly mutated in some way, typically by simply randomly altering bits in the chromosome data structure.
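The corresponding mutation step, flipping each bit independently with a small probability Pm (again a sketch; real implementations vary with the chromosome encoding):

    import random

    def mutate(chromosome, pm=0.01):
        """Flip each bit of a 0/1 list independently with probability pm."""
        return [bit ^ 1 if random.random() < pm else bit for bit in chromosome]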
These processes ultimately result in a second generation pool of chromosomes that is different from the initial generation. Generally the average degree of fitness will have increased by this procedure for the second generation pool, since only the best organisms from the first generation are selected for breeding. The entire process is repeated for this second generation: each organism in the second generation pool is then evaluated, the fitness value for each organism is obtained, pairs are selected for breeding, a third generation pool is generated, etc. The process is repeated until an organism is produced which gives a solution that is "good enough".
A slight variant of this method of pool generation is to allow some of the better organisms from the first generation to carry over to the second, unaltered. This form of genetic algorithm is known as an elite selection strategy.
There are several general observations about the generation of solutions via a genetic algorithm:
- Unless the fitness function is handled properly, GAs may have a tendency to converge towards local optima rather than the global optimum of the problem.
- Operating on dynamic data sets is difficult, as genomes begin to converge early on towards solutions which may no longer be valid for later data. Several methods have been proposed to remedy this by increasing genetic diversity somehow and preventing early convergence, either by increasing the probability of mutation when the solution quality drops (called triggered hypermutation), or by occasionally introducing entirely new, randomly generated elements into the gene pool (called random immigrants).
- Selection is clearly an important genetic operator, but opinion is divided over the importance of crossover versus mutation. Some argue that crossover is the most important, while mutation is only necessary to ensure that potential solutions are not lost. Others argue that crossover in a largely uniform population only serves to propagate innovations originally found by mutation, and in a non-uniform population crossover is nearly always equivalent to a very large mutation (which is likely to be catastrophic).
- GAs can rapidly locate good solutions, even for difficult search spaces.
- A number of experts believe that simpler optimization algorithms can find better local optima than GAs (given the same amount of computation time). Practitioners may wish to try other algorithms in addition to GAs, especially random-restart hill climbing.
- GAs can not effectively solve problems in which there is no way to judge the fitness of an answer other than right/wrong. These problems are like finding a needle in a haystack.
- As with all current machine learning problems it is worth tuning the parameters such as mutation probability and recombination probability to find reasonable settings for the problem class you are working on. There are theoretical upper and lower bounds for these parameters that can help guide selection.
The simplest algorithm represents each chromosome as a bit string. Typically, numeric parameters can be represented by integers, though it is possible to use floating point representations. The basic algorithm performs crossover and mutation at the bit level.
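For instance, a numeric parameter can be stored as a fixed-width bit field in the chromosome and decoded whenever the fitness function needs it; the 8-bit width and the target range below are arbitrary choices made only for illustration:

    def decode(bits, lo=0.0, hi=1.0):
        """Read a list of 0/1 bits as an unsigned integer, then scale it into [lo, hi]."""
        as_int = int("".join(str(b) for b in bits), 2)
        return lo + (hi - lo) * as_int / (2 ** len(bits) - 1)

    print(decode([1, 0, 0, 0, 0, 0, 0, 0]))  # 128/255, roughly 0.502

Crossover and mutation then operate on the raw bits, while evaluation works with the decoded numbers.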
Other variants treat the chromosome as a list of numbers which are indexes into an instruction table, nodes in a linked list, hashes, objects, or any other imaginable data structure. Crossover and mutation are performed so as to respect data element boundaries. For most data types, specific variation operators can be designed. Different chromosomal data types seem to work better or worse for different specific problem domains.
There have also been attempts to introduce other evolutionary operations such as movement of genes, in the manner of transposons. These movements change the schema of the chromosome making effects of linkage visible.
There are also parallel implementations of genetic algorithms that use computers as 'islands' and implement migrations of populations from one computer to another over a network.
Some other variants introduce a variable fitness function. In classical genetic algorithms, the fitness function is unchanged over time. In simulated annealing the fitness function is changed over time and in artificial life, the fitness of any individual is affected by all individuals in the population with which it interacts.
Genetic algorithms are known to produce good results for some problems. Their major disadvantage is that they are relatively slow, being very computationally intensive compared to other methods, such as random optimization.
Recent speed improvements have focused on speciation, where crossover can only occur if individuals are closely-enough related. Genetic algorithms are extremely easy to adapt to parallel computing and clustering environments. One method simply treats each node as a parallel population. Organisms are then migrated from one pool to another according to various propagation techniques.
Another method, the farmer/worker architecture, designates one node as the farmer, responsible for organism selection and fitness assignment, and the other nodes as workers, responsible for recombination, mutation, and function evaluation.
Problems which appear to be particularly appropriate for solution by genetic algorithms include timetabling and scheduling problems, and many scheduling software packages are based on GAs. GAs have also been applied to engineering. Genetic algorithms are often applied as an approach to solve global optimization problems. Genetic algorithms have been successfully applied to the study of neurological evolution (see NeuroEvolution of Augmented Topologies).
As a general rule of thumb, genetic algorithms might be useful in problem domains that have a complex fitness landscape, as recombination is designed to move the population away from the local minima that a traditional hill climbing algorithm might get stuck in.
Genetic algorithms or "GAs" originated from the studies of cellular automata, conducted by John Holland and his colleagues at the University of Michigan. Research in GAs remained largely theoretical until the mid-1980s, when The First International Conference on Genetic Algorithms was held at The University of Illinois. As academic interest grew, the dramatic increase in desktop computational power allowed for practical application of the new technique. In 1989, The New York Times writer John Markoff wrote about Evolver, the first commercially available desktop genetic algorithm. Custom computer applications began to emerge in a wide variety of fields, and these algorithms are now used by a majority of Fortune 500 companies to solve difficult scheduling, data fitting, trend spotting, budgeting and virtually any other type of combinatorial optimization.
    Choose initial population
    Repeat
        Evaluate each individual's fitness
        Select best-ranking individuals to reproduce
        Mate pairs at random
        Apply crossover operator
        Apply mutation operator
    Until terminating condition (see below)
Terminating conditions often include:
- Fixed number of generations reached
- Budgeting: allocated computation time/money used up
- An individual is found that satisfies minimum criteria
- The highest ranking individual's fitness is reaching or has reached a plateau such that successive iterations are not producing better results anymore.
- Manual inspection. May require start-and-stop ability
- Combinations of the above
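Putting the pseudocode above together with one of the terminating conditions just listed, a minimal runnable genetic algorithm might look like the following sketch. It maximizes the number of 1s in a bit string (the classic "OneMax" toy problem), uses tournament selection and an elite-of-one for brevity, and stops after a fixed number of generations; all of these choices are ours, made only to keep the example short.

    import random

    def one_max(chromosome):                # fitness: count of 1 bits
        return sum(chromosome)

    def tournament(population, k=3):        # pick the fittest of k randomly chosen individuals
        return max(random.sample(population, k), key=one_max)

    def run_ga(length=20, pop_size=30, generations=50, pc=0.7, pm=0.01):
        population = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
        for _ in range(generations):                   # terminating condition: fixed number of generations
            next_gen = [max(population, key=one_max)]  # elite selection: carry the best over unchanged
            while len(next_gen) < pop_size:
                a, b = tournament(population), tournament(population)
                if random.random() < pc:               # crossover
                    cut = random.randint(1, length - 1)
                    a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
                a = [bit ^ 1 if random.random() < pm else bit for bit in a]  # mutation
                b = [bit ^ 1 if random.random() < pm else bit for bit in b]
                next_gen.extend([a, b])
            population = next_gen[:pop_size]
        return max(population, key=one_max)

    best = run_ga()
    print(one_max(best), best)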
Genetic algorithms have been applied in many domains, including:
- Automated design, including research on composite material design and multi-objective design of automotive components for crashworthiness, weight savings, and other characteristics.
- Automated design of mechatronic systems using bond graphs and genetic programming (NSF).
- Calculation of Bound States and Local Density Approximations.
- Configuration applications, particularly physics applications of optimal molecule configurations for particular systems like C60 (buckyballs).
- Container loading optimization.
- Distributed computer network topologies.
- Electronic circuit design, known as Evolvable hardware.
- File allocation for a distributed system.
- Parallelization of GAs/GPs, including use of hierarchical decomposition of problem domains and design spaces, and nesting of irregular shapes using feature matching and GAs.
- Game Theory Equilibrium Resolution.
- Learning Robot behavior using Genetic Algorithms.
- Mobile communications infrastructure optimization.
- Molecular Structure Optimization (Chemistry).
- Multiple population topologies and interchange methodologies.
- Protein folding and protein/ligand docking.
- Plant floor layout.
- Scheduling applications, including job-shop scheduling. The objective is to schedule jobs in a sequence-dependent or non-sequence-dependent setup environment in order to minimize total tardiness.
- Solving the machine-component grouping problem required for cellular manufacturing systems.
- Tactical asset allocation and international equity strategies.
- Timetabling problems, such as designing a non-conflicting class timetable for a large university.
- Training artificial neural networks when pre-classified training examples are not readily obtainable (neuroevolution).
- Traveling Salesman Problem.
Genetic programming (GP) is a related technique popularized by John Koza, in which computer programs, rather than function parameters, are optimized. Genetic programming often uses tree-based internal data structures to represent the computer programs for adaptation instead of the list, or array, structures typical of genetic algorithms. GP algorithms typically require running time that is orders of magnitude greater than that for genetic algorithms, but they may be suitable for problems that are intractable with genetic algorithms.
Interactive genetic algorithms are genetic algorithms that use human evaluation. They are usually applied to domains where it is hard to design a computational fitness function, for example, evolving images, music, artistic designs and forms to fit users' aesthetic preference.
- Goldberg, David E (1989), Genetic Algorithms in Search, Optimization and Machine Learning, Kluwer Academic Publishers, Boston, MA.
- Goldberg, David E (2002), The Design of Innovation: Lessons from and for Competent Genetic Algorithms, Addison-Wesley, Reading, MA.
- Harvey, Inman (1992), Species Adaptation Genetic Algorithms: A basis for a continuing SAGA, in 'Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life', F.J. Varela and P. Bourgine (eds.), MIT Press/Bradford Books, Cambridge, MA, pp. 346-354.
- Koza, John (1992), Genetic Programming: On the Programming of Computers by Means of Natural Selection
- Michalewicz, Zbigniew (1999), Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag.
- Mitchell, Melanie, (1996), An Introduction to Genetic Algorithms, MIT Press, Cambridge, MA.
- Vose, Michael D (1999), The Simple Genetic Algorithm: Foundations and Theory, MIT Press, Cambridge, MA.
- IlliGAL - Illinois Genetic Algorithms Laboratory - Download technical reports and code
- Golem Project - Automatic Design and Manufacture of Robotic Lifeforms
- Introduction to Genetic Algorithms Using RPL2
- Talk.Origins FAQ on the uses of genetic algorithms, by Adam Marczyk
- Genetic algorithm in search and optimization, by Richard Baker
- Genetic Algorithm and Markov chain Monte Carlo:Differential Evolution Markov chain makes Bayesian Computing easy
- Differential Evolution using Genetic Algorithm
- Introduction to Genetic Algorithms and Neural Networks including an example windows program
- Genetic Algorithm Solves the Toads and Frogs Puzzle (requires Java)
- Not-So-Mad Science: Genetic Algorithms and Web Page Design for Marketers by Matthew Syrett
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Genetic_Algorithm | 13
20 | In our everyday experience, waves are formed by motion within a medium. Waves come in different varieties. Ocean waves and sound waves roll outward from a source through the medium of water and air. A violin string waves back and forth along its length, held in place at the two ends of the medium, which is the violin string. A jerk on a loose rope will send a wave rolling along its length.
In 1802, Thomas Young demonstrated fairly convincingly that light had the properties of a wave. He did this by shining light through two slits, and noting that an interference pattern formed on a projection screen. Interference patterns are one of the signature characteristics of waves: two wave crests meeting will double in size; two troughs meeting will double in depth; a crest and a trough meeting will cancel each other out to flatness. As wave ripples cross, they create a recognizable pattern, exactly matching the pattern on Young's projection screen.
If light were made of particles, they would travel in straight lines from the source and hit the screen in two places.
If light traveled as waves, they would spread out, overlap, and form a distinctive pattern on the screen.
For most of the 19th century, physicists were convinced by Young's experiment that light was a wave. By implication, physicists were convinced that light must be traveling through some medium. The medium was dubbed "luminiferous ether," or just ether. Nobody knew exactly what it was, but the ether had to be there for the unshakably logical reason that without some medium, there could be no wave.
In 1887, Albert Michelson and E.W. Morley demonstrated fairly convincingly that there is no ether. This seemed to imply that there is no medium through which a light "wave" travels, and so there is no medium that can even form a light "wave." If this is true, how can we see evidence of waves at all? Ordinary waves of whatever sort require a medium in order to exist. The Michelson-Morley experiment should have had the effect of draining the bathtub: what kind of waves can you get with an empty bathtub? Yet the light waves still seemed to show up in the Young double slit experiment.
Without the medium, there is no wave. Only a *klunk*.
In 1905, Albert Einstein showed that the mathematics of light, and its observed constancy of speed, allowed one to make all necessary calculations without ever referring to any medium. He therefore did away with the ether as a concept in physics because it had no mathematical significance. He did not, however, explain how a wave can exist without a medium. From that point on, physicists simply put the question on the far back burner. As Michio Kaku puts it, "over the decades we [physicists] have simply gotten used to the idea that light can travel through a vacuum even if there is nothing to wave."
The matter was further complicated in the 1920s when it was shown that objects -- everything from electrons to the chair on which you sit -- exhibit exactly the same wave properties as light, and suffer from exactly the same lack of any medium.
The First Computer Analogy. One way to resolve this seeming paradox of waves without medium is to note that there remains another kind of wave altogether. A wave with which we are all familiar, yet which exists without any medium in the ordinary sense. This is the computer-generated wave. Let us examine a computer-generated sound wave.
Imagine the following setup. A musician in a recording studio plays a synthesizer, controlled by a keyboard. It is a digital synthesizer which uses an algorithm (programming) to create nothing more than a series of numbers representing what a sampling of points along the desired sound wave would look like if it were played by a "real" instrument. The synthesizer's output is routed to a computer and stored as a series of numbers. The numbers are burned into a disk as a series of pits that can be read by a laser -- in other words, a CD recording. The CD is shipped to a store. You buy the CD, bring it home, and put it in your home entertainment system, and press the play button. The "music" has traveled from the recording studio to your living room. Through what medium did the music wave travel? To a degree, you might say that it traveled as electricity through the wires from the keyboard to the computer. But you might just as well say it traveled by truck along the highway to the store. In fact, this "sound wave" never existed as anything more than a digital representation of a hypothetical sound wave which itself never existed. It is, first and last, a string of numbers. Therefore, although it will produce wave-like effects when placed in your stereo, this wave never needed any medium other than the computer memory to spread itself all over the music-loving world. As you can tell from your CD collection, computers are very good at generating, storing, and regenerating waves in this fashion.
Calculations from an equation [here, y = sin (x) + sin (2.5 x)] produce a string of numbers, i.e., 1, 1.5, 0.4, 0, 0.5, 1.1, 0.3, -1.1, -2, -1.1, 0.1, and 0.5.
These numbers can be graphed to create a picture of the wave that would be created by combining (interfering) the two simple sine waves.
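The same string of numbers can be produced by sampling the formula directly; the sample spacing below is our guess rather than the one used for the figure, but the point stands either way — the "wave" is stored as nothing more than a list of numbers:

    import math

    # Sample y = sin(x) + sin(2.5 x) at evenly spaced points.
    samples = [round(math.sin(x) + math.sin(2.5 * x), 2) for x in (i * 0.5 for i in range(12))]
    print(samples)  # a list of numbers that, when plotted, traces the combined (interfered) wave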
By analogizing to the operations of a computer, we can do away with all of the conceptual difficulties that have plagued physicists as they try to describe how a light wave -- or a matter wave -- can travel or even exist in the absence of any medium.
B. Waves of calculation, not otherwise manifest, as though they really were differential equations
The more one examines the waves of quantum mechanics, the less they resemble waves in a medium. In the 1920s, Erwin Schrodinger set out a formula which could "describe" the wave-like behavior of all quantum units, be they light or objects. The formula took the form of an equation not so very different from the equations that describe sound waves or harmonics or any number of things with which Isaac Newton would have been comfortable. For a brief time, physicists sought to visualize these quantum waves as ordinary waves traveling through some kind of a medium (nobody knew what kind) which somehow carried the quantum properties of an object. Then Max Born pointed out something quite astonishing: the simple interference of these quantum waves did not describe the observed behaviors; instead, the waves had to be interfered and the mathematical results of the interference had to be further manipulated (by "squaring" them, i.e., by multiplying the results by themselves) in order to achieve the final probability characteristic of all quantum events. It is a two-step process, the end result of which requires mathematical manipulation. The process cannot be duplicated by waves alone, but only by calculations based on numbers which cycled in the manner of waves.
From Born, the Schrodinger wave became known as a probability wave (although actually it is a cycling of potentialities which, when squared, yield a probability). Richard Feynman developed an elegant model for describing the amplitude (height or depth representing the relative potentiality) of the many waves involved in a quantum event, calculating the interference of all of these amplitudes, and using the final result to calculate a probability. However, Feynman disclaimed any insight into whatever physical process his system might be describing. Although his system achieved a result that was exactly and perfectly in accord with observed natural processes, to him it was nothing more than calculation. The reason was that, as far as Feynman or anybody else could tell, the underlying process itself was nothing more than calculation.
The Second Computer Analogy. A process that produces a result based on nothing more than calculation is an excellent way to describe the operations of a computer program. The two-step procedure of the Schrodinger equation and the Feynman system may be impossible to duplicate with physical systems, but for the computer it is trivial. That is what a computer does -- it manipulates numbers and calculates. (As we will discuss later, the computer must then interpret and display the result to imbue it with meaning that can be conveyed to the user.)
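The two-step recipe is indeed trivial for a computer: add the amplitudes for each way the event can happen, then take the squared magnitude of the sum. The two complex amplitudes below are invented for illustration; the point is only that interference happens at the amplitude stage and the "squaring" happens afterward:

    # Step 1: interfere (add) the amplitudes for the two paths through the slits.
    amp_slit_1 = complex(0.6, 0.2)   # hypothetical amplitude for "went through slit 1"
    amp_slit_2 = complex(-0.3, 0.5)  # hypothetical amplitude for "went through slit 2"
    total_amplitude = amp_slit_1 + amp_slit_2

    # Step 2: "square" the result to get the relative probability.
    relative_probability = abs(total_amplitude) ** 2
    print(relative_probability)      # not equal to |amp1|**2 + |amp2|**2 -- the difference is the interference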
Wave summary. Quantum mechanics involves "waves" which cannot be duplicated or even approximated physically; but which easily can be calculated by mathematical formula and stored in memory, creating in effect a static map of the wave shape. This quality of something having the appearance and effect of a wave, but not the nature of a wave, is pervasive in quantum mechanics, and so is fundamental to all things in our universe. It is also an example of how things which are inexplicable in physical terms turn out to be necessary or convenient qualities of computer operations.
II. The Measurement Effect
A. "Collapse of the wave function" -- consciousness as mediator, as though the sensory universe was a display to the user
During the course of an observation of a quantum event, the wave-like nature of the quantum unit is not observed. The evidence for the existence of quantum waves is entirely inferential, derived from such phenomena as the interference pattern on Mr. Young's projection screen. After analyzing such a phenomenon, the conclusion is that the only thing that could cause such a pattern is a wave. ("It is as if two waves were interfering.") However, actual observation always reveals instead a particle. For example, as instruments were improved, it turned out that the interference pattern observed by Young was created not by a constant sloshing against the projection screen, but by one little hit at a time, randomly appearing at the projection screen in such a way that over time the interference pattern built up. "Particles" of light were being observed as they struck the projection screen; but the eventual pattern appeared to the eye, and from mathematical analysis, to result from a wave.
Particles of Light
This presents conceptual difficulties that are almost insurmountable as we attempt to visualize a light bulb (or laser or electron gun) emitting a particle at the source location, which immediately dissolves into a wave as it travels through the double slits, and which then reconstitutes itself into a particle at the projection screen, usually at a place where the (presumed) overlapping wave fronts radiating from the two slits reinforce each other. What is more, this is only the beginning of the conceptual difficulties with this phenomenon.
Investigating the mechanics of this process turns out to be impossible, for the reason that whenever we try to observe or otherwise detect a wave we obtain, instead, a particle. The very act of observation appears to change the nature of the quantum unit, according to conventional analysis. Variations on the double slit experiment provide the starkest illustration.
If we assume that quantum units are particles, it follows that the particle must travel from the emission source, through one slot or the other, and proceed to the projection screen. Therefore, we should be able to detect the particle mid-journey, i.e., at one slot or the other. The rational possibilities are that the particle would be detected at one slot, the other slot, or both slots.
Experiment shows that the particle in fact is detected always at one slot or the other slot, never at both slots, seeming to confirm that we are indeed dealing with particles.
Placing electron detectors at the slots.
However, a most mysterious thing happens when we detect these particles at the slots: the interference patterns disappears and is replaced by a clumping in line with the source and the slots. Thus, if we thought that some type of wave was traveling through this space in the absence of observation, we find instead a true particle upon observation -- a particle which behaves just like a particle is supposed to behave, to the point even of traveling in straight lines like a billiard ball.
Results if electrons are detected at the slots.
To further increase the mystery, it appears that the change from wave to particle takes place not upon mechanical interaction with the detecting device, but upon a conscious being's acquiring the knowledge of the results of the attempt at detection. Although not entirely free from doubt, experiment seems to indicate that the same experimental set up will yield different results (clumping pattern or interference pattern at the projection screen) depending entirely on whether the experimenter chooses to learn the results of the detection at the slits or not. This inexplicable change in behavior has been called the central mystery of quantum mechanics.
Results if electrons are NOT detected at the slots.
At the scientific level, the question is "how?" The conventional way of describing the discrepancy between analysis and observation is to say that the "wave function" is somehow "collapsed" during observation, yielding a "particle" with measurable properties. The mechanism of this transformation is completely unknown and, because the scientifically indispensable act of observation itself changes the result, it appears to be intrinsically and literally unknowable.
At the philosophical level, the question is "why?" Why should our acquisition of knowledge affect something which, to our way of thinking, should exist in whatever form it exists whether or not it is observed? Is there something special about consciousness that relates directly to the things of which we are conscious? If so, why should that be?
The computer analogy. As John Gribbin puts it, "nature seems to 'make the calculation' and then present us with an observed event." Both the "how" and the "why" of this process can be addressed through the metaphor of a computer which is programmed to project images to create an experience for the user, who is a conscious being.
The "how" is described structurally by a computer which runs a program. The program provides an algorithm for determining the position (in this example) of every part of the image, which is to say, every pixel that will be projected to the user. The mechanism for transforming the programming into the projection is the user interface. This can be analogized to the computer monitor, and the mouse or joystick or other device for viewing one part of the image or another. When the user chooses to view one part of the image, those pixels must be calculated and displayed; all other parts of the image remain stored in the computer as programming. Thus, the pixels being viewed must follow the logic of the projection, which is that they should move like particles across the screen. The programming representing the parts of the image not being displayed need not follow this logic, and may remain as formulas. Calculating and displaying any particular pixel is entirely a function of conveying information to the user, and it necessarily involves a "change" from the inchoate mathematical relationships represented by the formula to the specific pixel generated according to those relationships. The user can never "see" the programming, but by analysis can deduce its mathematical operation by careful observation of the manner in which the pixels are displayed. The algorithm does not collapse into a pixel; rather, the algorithm tells the monitor where and how to produce the pixel for display to the user according to which part of the image the user is viewing.
The "why" is problematical in the cosmic sense, but is easily stated within the limits of our computer metaphor. The programming produces images for the user because the entire set up was designed to do just that: to present images to a user (viewer) as needed by the user. The ultimate "why" depends on the motivation of the designer. In our experience, the maker of a video game seeks to engage the attention of the user to the end that the user will spend money for the product and generate profits for the designer. This seems an unlikely motivation for designing the universe simulation in
which we work and play.
B. Uncertainty and complementary properties, as though variables were being redefined and results calculated and recalculated according to an underlying formula
We have seen one aspect of the measurement effect, which is that measurement (or observation) appears to determine whether a quantum unit is displayed or projected to the user (as a "particle"), or whether instead the phenomenon remains inchoate, unobserved, behaving according to a mathematical algorithm (as a "wave"). There is another aspect of measurement that relates to the observed properties of the particle-like phenomenon as it is detected. This is the famous Heisenberg uncertainty principle.
As with all aspects of quantum mechanics, the uncertainty principle is not a statement of philosophy, but rather a mathematical model which is exacting and precise. That is, we can be certain of many quantum measurements in many situations, and we can be completely certain that our results will conform to quantum mechanical principles. In quantum mechanics, the "uncertainty principle" has a specific meaning, and it describes the relationship between two properties which are "complementary," that is, which are linked in a quantum mechanical sense (they "complement" each other, i.e., they are counterparts, each of which makes the other "complete").
The original example of complementary properties was the relationship between position and momentum. According to classical Newtonian physics and to common sense, if an object simply exists we should be able to measure both where it is and how fast it is moving. Measuring these two properties would allow us to predict where the object will be in the future. In practice, it turns out that both position and momentum cannot be exactly determined at the same moment -- a discovery that threw a monkey wrench into the clockwork predictability of the universe. Put simply, the uncertainty relationship is this: for any two complementary properties, any increase in the certainty of knowledge of one property will necessarily lead to a decrease in the certainty of knowledge of the other property.
The uncertainty principle was originally thought to be more a statement of experimental error than an actual principle of any great importance. When scientists were measuring the location and the speed (or, more precisely, the momentum) of a quantum unit -- two properties which turn out to be complementary -- they found that they could not pin down both at once. That is, after measuring momentum, they would determine position; but then they found that the momentum had changed. The obvious explanation was that, in determining position, they had bumped the quantum unit and thereby changed its momentum. What they needed (so they thought) were better, less intrusive instruments. On closer inspection, however, this did not turn out to be the case. The measurements did not so much change the momentum, as they made the momentum less certain, less predictable. On remeasurement, the momentum might be the same, faster, or slower. What is more, the range of uncertainty of momentum increased in direct proportion to the accuracy of the measurement of location.
In 1927, Werner Heisenberg conducted a mathematical analysis of the position and momentum of quantum units. His results were surprising, in that they showed a mathematical incompatibility between the two properties. Heisenberg was able to state that there was a mathematical relationship between position and momentum, such that the more precise your knowledge of the one, the less precise your knowledge of the other. This "uncertainty" followed a formula which, itself, was quite certain. Heisenberg's mathematical formula accounted for the experimental results far, far more accurately than any notion of needing better equipment in the laboratory. It seems, then, that uncertainty in the knowledge of two complementary properties is more than a laboratory phenomenon -- it is a law of nature which can be expressed mathematically.
A good way to understand the uncertainty principle is to take the extreme cases. As we will discuss later on, a distinguishing feature of quantum units is that many of their properties come in whole units and whole units only. That is, many quantum properties have an either/or quality such that there is no in between: the quantum unit must be either one way or the other. We say that these properties are "quantized," meaning that the property must be one specific value (quantity) or another, but never anything else. When the uncertainty principle is applied to two complementary properties which are themselves quantized, the result is stark. Think about it. If a property is quantized, it can only be one way or the other; therefore, if we know anything about this property, we know everything about this property.
There are few, if any, properties in our day to day lives that can be only one way or the other, never in between. If we leave aside all quibbling, we might suggest the folk wisdom that "you can't be a little bit pregnant." A woman either is pregnant, or she is not pregnant. Therefore, if you know that the results of a reliable pregnancy test are positive, you know everything there is to know about her pregnancy property: she is pregnant. For a "complementary" property to pregnancy, let us use marital status. (In law, you are either married or not married, with important consequences for bigamy prosecutions.)
The logical consequence of knowing everything about one complementary property is that, as a law of nature, we then would know nothing about the other complementary property. For our example, we must imagine that, by learning whether a married woman is pregnant, we thereby no longer know whether she is married. We don't forget what we once knew; we just can no longer be certain that we will get any particular answer the next time we check on her marital status. The mathematical statement is that, by knowing pregnancy, you do not know whether she is married; and, by knowing marital status, you do not know whether she is pregnant. In order to make this statement true, if you once know her marital status, and you then learn her pregnancy status (without having you forget your prior knowledge of marital status), the very fact of her marital status must become random yes or no. A definite maybe.
What is controlling is your state of certainty about one property or the other. In just this way, the experimentalist sees an electron or some other quantum unit whose properties depend on the experimentalist's knowledge or certainty of some other complementary property.
A computer's data. If we cease to think of the quantum unit as a "thing," and begin to imagine it as a pixel, that is, as a display of information in graphic (or other sensory) form, it is far easier to conceive of how the uncertainty principle might work. The "properties" we measure are variables which are computed for the purpose of display, which is to say, for the purpose of giving the user knowledge via the interface. A computed variable will display according to the underlying algorithm each time it is computed, and while the algorithm remains stable, the results of a particular calculation can be made to depend on some other factor, including another variable.
It would be far easier to understand our changing impressions of the hypothetical woman if we knew that, although she appeared to be a person like ourselves, in fact she was a computer projection. As a computer projection, she could be pregnant or not pregnant, married or single, according to whatever rules the computer might be using to create her image.
Complementary properties are simply paired variables, the calculation of which depends on the state of the other. Perhaps they share a memory location, so that when one variable is calculated and stored, it displaces whatever value formerly occupied that location; then the other variable would have to be calculated anew the next time it was called for. In this way, or in some analogous way, we can see that the appearance of a property does not need to be related to the previously displayed value of the property, but only to the underlying algorithm.
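The shared-memory-location suggestion can be made concrete with a toy model in which two complementary variables occupy a single slot: fixing one overwrites the slot, so the next request for the other must be generated fresh (here, at random). This is only a cartoon of the uncertainty relation, not a physical claim:

    import random

    class ComplementaryPair:
        """Two properties sharing one storage slot: pinning down one displaces the other."""
        def __init__(self):
            self._slot = None  # holds ("position", value) or ("momentum", value)

        def _read(self, name):
            if self._slot is None or self._slot[0] != name:
                # The other property occupied the slot, so this one is recomputed afresh,
                # unrelated to any value it displayed before.
                self._slot = (name, random.gauss(0, 1))
            return self._slot[1]

        def position(self):
            return self._read("position")

        def momentum(self):
            return self._read("momentum")

    pair = ComplementaryPair()
    p1 = pair.momentum()
    pair.position()       # measuring position displaces the stored momentum
    p2 = pair.momentum()  # momentum is re-generated; it need not equal p1
    print(p1, p2)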
III. The Identical/Interchangeable Nature of "Particles" and Measured Properties.
As though the "particles" were merely pictures of particles, like computer icons.
Quantum units of the same type are identical. Every electron is exactly the same as every other electron; every photon the same as every other photon; etc. How identical are they? So identical that John Wheeler (in a suggestion Feynman liked to recount) was able seriously to propose that all the electrons and positrons in the universe actually are the same electron/positron, which merely has zipped back and forth in time so often that we observe it once for each of the billions of times it crosses our own time, so it seems like we are seeing billions of electrons. If you were to study an individual quantum unit from a collection, you would find nothing to distinguish it from any other quantum unit of the same type. Nothing whatsoever. Upon regrouping the quantum units, you could not, even in principle, distinguish which was the unit you had been studying and which was another.
The complete and utter sameness of each electron (or other quantum unit) has a number of consequences in physics. If the mathematical formula describing one electron is the same as that describing another electron, then there is no method, even in principle, of telling which is which. This means, for example, that if you begin with two quantum electrons at positions A and B, and move them to positions C and D, you cannot state whether they traveled the paths A to C and B to D, or A to D and B to C. In such a situation, there is no way to identify the electron at an end position with one or the other of the electrons at a beginning position; therefore, you must allow for the possibility that each electron at A and B arrived at either C or D. This impacts on the math predicting what will happen in any given quantum situation and, as it turns out, the final probabilities agree with this interchangeable state of affairs.
The computer analogy. Roger Penrose has likened this sameness to the images produced by a computer. Imagine the letter "t." On the page you are viewing, the letter "t" appears many times. Every letter t is exactly like every other letter t. That is because on a computer, the letter t is produced by displaying a particular set of pixels on the screen. You could not, even in principle, tell one from the other because each is the identical image of a letter t. The formula for this image is buried in many layers of subroutines for displaying pixels, and the image does not change regardless of whether it is called upon to form part of the word "mathematical" or "marital".
Similarly, an electron does not change regardless of whether it is one of the two electrons associated with the helium atom, or one of the ninety-two electrons associated with the uranium atom. You could not, even in principle, tell one from another. The only way in this world to create such identical images is to use the same formula to produce the same image, over and over again whenever a display of the image is called for.
IV. Continuity and Discontinuity in Observed Behaviors
A. "Quantum leaps," as though there was
no time or space between quantum
In our experience, things move from one end to the other by going through the middle; they get from cold to hot by going through warm; they get from slow to fast by going through medium; and so on. Phenomena move from a lower state to a higher state in a ramp-like fashion -- continuously increasing until they reach the higher state. Even if the transition is quick, it still goes through all of the intermediate states before reaching the new, higher state.
In quantum mechanics, however, there is no transition at all. Electrons are in a low energy state on one observation, and in a higher energy state on the next; they spin one way at first, and in the opposite direction next. The processes proceed step-wise; but more than step-wise, there is no time or space in which the process exists in any intermediate state.
It is a difficult intellectual challenge to imagine a physical object that can change from one form into another form, or move from one place to another place, without going through any transition between the two states. Zeno's paradoxes offer a rigorously logical examination of this concept, with results that have frustrated analysts for millennia. In brief, Zeno appears to have "proved" that motion is not possible, because continuity (smooth transitions) between one state and the next implies an infinite number of transitions to accomplish any change whatsoever. Zeno's paradoxes imply that space and time are discontinuous -- discrete points and discrete instants with nothing in between, not even nothing. Yet the mind reels to imagine space and time as disconnected, always seeking to understand what lies between two points or two instants which are said to be separate.
The pre-computer analogy. Before computer animation there was the motion picture. Imagine that you are watching a movie. The motion on the screen appears to be smooth and continuous. Now, the projectionist begins to slow the projection rate. At some point, you begin to notice a certain jerkiness in the picture. As the projection rate slows, the jerkiness increases, and you are able to focus on one frame of the movie, followed by a blanking of the screen, followed by the next frame of the movie. Eventually, you see that the motion which seemed so smooth and continuous when projected at 30 frames per second or so is really only a series of still shots. There is no motion in any of the pictures, yet by rapidly flashing a series of pictures depicting intermediate positions of an actor or object, the effective illusion is one of motion.
The computer analogy. Computers create images in the same manner. First, they compose a still image and project it; then they compose the next still image and project that one. If the computer is quick enough, you do not notice any transition. Nevertheless, the computer's "time" is completely discrete, discontinuous, and digital. One step at a time.
Similarly, the computer's "space" is discrete, discontinuous, and digital. If you look closely at a computer monitor, you notice that it consists of millions of tiny dots, nothing more. A beautifully rendered image is made up of these dots.
The theory and architecture of computers lend themselves to a step-by-step approach to any and all problems. It appears that there is no presently conceived computer architecture that would allow anything but such a discrete, digitized time and space, controlled by the computer's internal clock ticking one operation at a time. Accordingly, it seems that this lack of continuity, so bizarre and puzzling as a feature of our natural world, is an inherent characteristic of a computer simulation.
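To make the point concrete, here is a minimal sketch (my own illustration, not taken from the essay; the grid size and motion rule are arbitrary) of a toy simulation whose time and space are discrete by construction: the clock advances one indivisible tick at a time, and the simulated object only ever occupies whole grid cells, never anything in between.

```python
# A toy "world" with discrete time and space, in the spirit of the analogy above.
WIDTH, HEIGHT = 80, 24                  # "space": an integer grid of pixels

def render(x, y):
    """Compose one complete still frame with the object at integer cell (x, y)."""
    frame = [["." for _ in range(WIDTH)] for _ in range(HEIGHT)]
    frame[y][x] = "#"
    return frame

def run(ticks=5):
    x, y = 0, HEIGHT // 2
    for t in range(ticks):              # "time": one indivisible clock tick at a time
        yield t, render(x, y)           # the object is never between two cells,
        x = (x + 3) % WIDTH             # and no state exists between two ticks

for t, frame in run():
    print(f"tick {t}: object at column {frame[HEIGHT // 2].index('#')}")
```

Projected quickly enough, frames like these would look like smooth motion, exactly as in the movie analogy above, even though nothing in the program ever occupies an in-between position or an in-between moment.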
B. The breakdown at zero, yielding infinities, as though the universe was being run by a computer clock on a coordinate grid
Quantum theory assumes that space and time are continuous. This is simply an assumption, not a necessary part of the theory. However, this assumption has raised some difficulties when performing calculations of quantum mechanical phenomena. Chief among these is the recurring problem of infinities.
In quantum theory, all quantum units which appear for the purpose of measurement are conceived of as dimensionless points. These are assigned a place on the coordinate grid, described by the three numbers of height, depth, and width as we have seen, but they are assigned only these three numbers. By contrast, if you consider any physical object, it will have some size, which is to say it will have its own height, width, and depth. If you were to exactly place such a physical object, you would have to take into account its own size, and to do so you would have to assign coordinates to each edge of the object.
When physicists consider quantum units as particles, there does not seem to be any easy way to determine their outer edges, if, in fact, they have any outer edges. Accordingly, quantum "particles" are designated as simple points, without size and, therefore, without edges. The three coordinate numbers are then sufficient to locate such a pointlike particle at a single point in space.
The difficulty arises when the highly precise quantum calculations are carried out all the way down to an actual zero distance (which is the size of a dimensionless point -- zero height, zero width, zero depth). At that point [sic], the quantum equations return a result of infinity, which is as meaningless to the physicist as it is to the philosopher. This result gave physicists fits for some twenty years (which is not really so long when you consider that the same problem had been giving philosophers fits for some twenty-odd centuries). The quantum mechanical solution was made possible when it was discovered that the infinities disappeared if one stopped at some arbitrarily small distance -- say, a billionth-of-a-billionth-of-a-billionth of an inch -- instead of proceeding all the way to an actual zero. One problem remained, however: there was no principled way to determine where one should stop. One physicist might stop at a billionth-of-a-billionth-of-a-billionth of an inch, and another might stop at only a thousandth-of-a-billionth-of-a-billionth of an inch. The infinities disappeared either way. The only requirement was to stop somewhere short of the actual zero point. It seemed much too arbitrary. Nevertheless, this mathematical quirk eventually gave physicists a method for doing their calculations according to a process called "renormalization," which allowed them to keep their assumption that an actual zero point exists, while balancing one positive infinity with another negative infinity in such a way that all of the infinities cancel each other out, leaving a definite, useful number.
In a strictly philosophical mode, we might suggest that all of this is nothing more than a revisitation of Zeno's Achilles paradox of dividing space down to infinity. The philosophers couldn't do it, and neither can the physicists. For the philosopher, the solution of an arbitrarily small unit of distance -- any arbitrarily small unit of distance -- is sufficient for the resolution of the paradox. For the physicist, however, there should appear some reason for choosing one small distance over another. None of the theoretical models have presented any compelling reason for choosing any particular model as the "quantum of length." Because no such reason appears, the physicist resorts to the "renormalization" process, which is profoundly dissatisfying to both philosopher and physicist. Richard Feynman, who won a Nobel prize for developing the renormalization process, himself describes the procedure as "dippy" and "hocus-pocus." The need to resort to such a mathematical sleight-of-hand to obtain meaningful results in quantum calculations is frequently cited as the most convincing piece of evidence that quantum theory -- for all its precision and ubiquitous application -- is somehow lacking, somehow missing something. It may be that one missing element is quantized space -- a shortest distance below which there is no space, and below which one need not calculate. The arbitrariness of choosing the distance would be no more of a theoretical problem than the arbitrariness of the other fundamental constants of nature -- the speed of light, the quantum of action, and the gravitational constant. None of these can be derived from theory, but are simply observed to be constant values. Alas, this argument will not be settled until we can make far more accurate measurements than are possible today.
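A schematic illustration of the difficulty (my own, not drawn from the essay): a quantity computed by integrating all the way down to zero distance can diverge, while cutting the calculation off at any small but nonzero distance a leaves a finite answer that depends on the arbitrary choice of a. In LaTeX notation,

\int_a^{L} \frac{dr}{r^{2}} \;=\; \frac{1}{a} - \frac{1}{L},

which is finite for every a > 0, no matter how tiny, but grows without bound as a \to 0. This is the pattern described above: any cutoff works, none is privileged, and only the true zero point causes trouble.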
Quantum time. If space is quantized, then time almost surely must be quantized also. This relationship is implied by the theory of relativity, which supposes that time and space are so interrelated as to be practically the same thing. Thus, relativity is most commonly understood to imply that space and time cannot be thought of in isolation from each other; rather, we must analyze our world in terms of a single concept -- "space-time." Although the theory of relativity is largely outside the scope of this essay, the reader can see from Zeno's paradoxes how space and time are intimately related in the analysis of motion. For the moment, I will only note that the theory of relativity significantly extends this view, to the point where space and time may be considered two sides of the same coin.
The idea of "quantized" time has the intellectual virtue of consistency within the framework of quantum mechanics. That is, if the energies of electron units are quantized, and the wavelengths of light are quantized, and so many other phenomena are quantized, why not space and time? Isn't it easier to imagine how the "spin" of an electron unit can change from up to down without going through anything in the middle if we assume a quantized time? With quantized time, we may imagine that the change in such an either/or property takes place in one unit of time, and that, therefore, there is no "time" at which the spin is anywhere in the middle. Without quantized time, it is far more difficult to eliminate the intervening spin directions.
Nevertheless, the idea that time (as well as space) is "quantized," i.e., that time comes in individual units, is still controversial. The concept has been seriously proposed on many occasions, but most current scientific theories do not depend on the nature of time in this sense. About all scientists can say is that if time is not continuous, then the changes are taking place too rapidly to measure, and too rapidly to make any detectable difference in any experiment that they have dreamed up. The theoretical work that has been done on the assumption that time may consist of discontinuous jumps often focuses on the most plausible scale, which is related to the three fundamental constants of nature -- the speed of light, the quantum of action, and the gravitational constant. This is sometimes called the "Planck scale," involving the "Planck time," after the German physicist Max Planck, who laid much of the foundation of quantum mechanics through his study of minimum units in nature. On this theoretical basis, the pace of time would be around 10⁻⁴⁴ seconds. That is far less than a billionth-of-a-billionth-of-a-billionth-of-a-billionth of a second. And that is much too quick to measure by today's methods, or by any method that today's scientists are able to conceive of, or even hope for.
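For reference, the standard formulas (not given in the essay itself) that combine the three constants the author lists, the speed of light c, the quantum of action \hbar, and the gravitational constant G, into a time and a length are

t_P = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4 \times 10^{-44}\ \mathrm{s}, \qquad \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m},

the Planck time and the Planck length.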
Mixing philosophy, science, time, and space. We see that the branch of physics known as relativity has been remarkably successful in its conclusion that space and time are two sides of the same coin, and should properly be thought of as a single entity: space-time. We see also that the philosophical logic of Zeno's paradoxes has always strongly implied that both space and time are quantized at some smallest, irreducible level, but that this conclusion has long been resisted because it did not seem to agree with human experience in the "real world." Further, we see that quantum mechanics has both discovered the ancient paradoxes anew in its mathematics, and provided some evidence of quantized space and time in its essential experimental results showing that "physical" processes jump from one state to the next without transition. The most plausible conclusion to be drawn from all of this is that space and time are, indeed, quantized. That is, there is some unit of distance or length which can be called "1," and which admits no fractions; and, similarly, there is some unit of time which can be called "1," and which also admits no fractions.
Although most of the foregoing is mere argument, it is compelling in its totality, and it is elegant in its power to resolve riddles both ancient and modern. Moreover, if we accept the quantization of space and time as a basic fact of the structure of our universe, then we may go on to consider how both of these properties happen to be intrinsic to the operations of a computer, as discussed above at Point IV(A).
V. Non-locality
As though all calculations were in the CPU, regardless of the location of the pixels on the screen.
A second key issue in quantum mechanics is the phenomenon of connectedness -- the ancient concept that all things are one -- because science has come increasingly to espouse theories that are uncannily related to this notion. In physics, this phenomenon is referred to as non-locality.
The essence of a local interaction is direct contact -- as basic as a punch in the nose. Body A affects body B locally when it either touches B or touches something else that touches B. A gear train is a typical local mechanism. Motion passes from one gear wheel to another in an unbroken chain. Break the chain by taking out a single gear and the movement cannot continue. Without something there to mediate it, a local interaction cannot cross a gap.
On the other hand, the essence of non-locality is unmediated action-at-a-distance. A non-local interaction jumps from body A to body B without touching anything in between. Voodoo injury is an example of a non-local interaction. When a voodoo practitioner sticks a pin in her doll, the distant target is (supposedly) instantly wounded, although nothing actually travels from doll to victim. Believers in voodoo claim that an action here causes an effect there; that's all there is to it. Without benefit of mediation, a non-local interaction effortlessly flashes across the void.
Even "flashes across the void" is a bit misleading, because "flashing" implies movement, however quick, and "across" implies distance traveled, however empty. In fact, non-locality simply does away with speed and distance, so that the cause and effect simply happen. Contrary to common sense or scientific sensibility, it appears that under certain circumstances an action here on earth can have immediate consequences across the world, or on another star, or clear across the universe. There is no apparent transfer of energy at any speed, only an action here and a consequence there.
Non-locality for certain quantum events was theorized in the 1930s as a result of the math. Many years were wasted (by Einstein, among others) arguing that such a result was absurd and could not happen regardless of what the math said. In the 1960s, the theory was given a rigorous mathematical treatment by John S. Bell, who showed that if quantum effects were "local" they would result in one statistical distribution, and if "non-local" in another distribution. In the 1970s and '80s, the phenomenon was demonstrated, based on Bell's theorem, by the actual statistical distribution of experiments. For those die-hard skeptics who distrust statistical proofs, the phenomenon appears recently to have been demonstrated directly at the University of Innsbruck.
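For readers who want the flavor of the "two distributions" without the full treatment, the most commonly tested form of Bell's result is the CHSH inequality (a standard textbook statement, summarized here rather than quoted from the essay). For measurement settings a, a' on one side and b, b' on the other, with E denoting the correlation of the paired outcomes, any local hidden-variable account requires

|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2,

while quantum mechanics predicts, and the experiments find, values up to 2\sqrt{2} \approx 2.83 for suitably chosen settings.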
More than any of the bizarre quantum phenomena observed since 1900, the phenomenon of non-locality caused some serious thought to be given to the question, "What is reality?" The question had been nagging since the 1920s, when the Copenhagen school asserted, essentially, that our conception of reality had to stop with what we could observe; deeper than that we could not delve and, therefore, we could never determine experimentally why we observe what we observe. The experimental proof of non-locality added nothing to this strange statement, but seemed to force the issue. The feeling was that if our side of the universe could affect the other side of the universe, then those two widely separated places must somehow be connected. Alternative explanations necessarily involved signals traveling backward in time so that the effect "causes the cause," which seemed far too contrived for most scientists' tastes. Accordingly, it was fair to ask whether apparent separations in space and time -- I'm in the living room, you're in the den -- are fundamentally "real"; or whether, instead, they are somehow an illusion, masking a deeper reality in which all things are one, sitting right on top of each other, always connected one to another and to all. This sounds suspiciously like mysticism, and the similarity of scientific and mystical concepts led to some attempts to import Eastern philosophy into Western science. Zukav, in particular, wants desperately to find a direct connection between science and Buddhism, but he would concede that the link remains to be discovered.
Note that the experimental results had been predicted on the basis of the mathematical formalism of quantum mechanics, and not from any prior experiments. That is, the formal mathematical description of two quantum units in certain circumstances implied that their properties thereafter would be connected regardless of separation in space or time (just as x + 2 = 4 implies that x = 2). It then turned out that these properties are connected regardless of separation in space or time. The experimentalists in the laboratory had confirmed that where the math can be manipulated to produce an absurd result, the matter and energy all around us obligingly will be found to behave in exactly that absurd manner. In the case of non-locality, the behavior is uncomfortably close to magic.
The computer analogy. The non-locality which appears to be a basic feature of our world also finds an analogy in the same metaphor of a computer simulation. In terms of cosmology, the scientific question is, "How can two particles separated by half a universe be understood as connected such that they interact as though they were right on top of each other?" If we analogize to a computer simulation, the question would be, "How can two pictures at the far corners of the screen be understood as connected such that the distance between them is irrelevant?"
In fact, the measured distance between any two pixels (dots) on the monitor's display turns out to be entirely irrelevant, since both are merely the products of calculations carried out in the bowels of the computer as directed by the programming. The pixels may be as widely separated as you like, but the programming generating them is forever embedded in the computer's memory in such a way that -- again speaking quite literally -- the very concept of separation in space and time of the pixels has no meaning whatsoever for the stored information.
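A minimal sketch of that point (my own illustration; the names and values are made up): two pixels at opposite corners of the screen are both rendered from a single value stored in the program's memory, so one update to that stored value changes both on the next frame, and their on-screen separation never enters into it.

```python
# Two far-apart "pixels" driven by one shared entry in memory. Nothing travels
# across the screen when the shared value changes; both corners simply read the
# same stored information the next time they are drawn.
WIDTH, HEIGHT = 1920, 1080
shared_state = {"color": "red"}     # lives in the "bowels of the computer",
                                    # with no screen position of its own

def draw_pixel(x, y):
    return (x, y, shared_state["color"])

print(draw_pixel(0, 0), draw_pixel(WIDTH - 1, HEIGHT - 1))   # both red

shared_state["color"] = "blue"      # one change to the stored information...
print(draw_pixel(0, 0), draw_pixel(WIDTH - 1, HEIGHT - 1))   # ...and both are blue
```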
VI. The Relationship of Observed Phenomena to the Mathematical Formalism
As though physical manifestations themselves were being produced by a mathematical formula.
Perhaps the most striking aspect of quantum theory is the relationship of all things to the math, as with the phenomenon of non-locality discussed above, which occurs in nature, so it seems, because that is the way the equations calculate. Even though the mathematical formulas were initially developed to describe the behavior of the universe, these formulas turn out to govern the behavior of the universe with an exactitude that defies our concept of mathematics. As Nick Herbert puts it, "Whatever the math does on paper, the quantumstuff does in the outside world." That is, if the math can be manipulated to produce some absurd result, it will always turn out that the matter and energy around us actually behave in exactly that absurd manner when we look closely enough. It is as though our universe is being produced by the mathematical formulas. The backwards logic implied by quantum mechanics, where the mathematical formalism seems to be more "real" than the things and objects of nature, is unavoidable. In any conceptual conflict between what a mathematical equation can obtain for a result, and what a real object actually could do, the quantum mechanical experimental results always will conform to the mathematical prediction.
Quantum theory is rooted in statistics, and such reality conflicts often arise in statistics. For example, the math might show that a "statistically average" American family has 2.13 children, even though we know that a family of real human beings must have a whole number of children. In our experience, we would never find such a statistically average family regardless of the math, because there simply is no such thing as 13/100ths of a child. The math is entirely valid, but it must yield to the census-taker's whole-child count when we get down to examining individual families. In quantum mechanics, however, the math will prevail -- as though the statistics were drawn up in advance and all American families were created equally with exactly 2.13 children, never mind that we cannot begin to conceive of such a family. To the mathematician, these two situations are equivalent, because either way the average American family ends up with 2.13 children. But the quantum mechanical relationship of the math to the observation does not make any sense to us because in our world view, numbers are just symbols representing something with independent existence.
Mr. Herbert states that, "Quantum theory is a method of representing quantumstuff mathematically: a model of the world executed in symbols." Since quantum theory describes the world perfectly -- so perfectly that its symbolic, mathematical predictions always prevail over physical insight -- the equivalence between quantum symbolism and universal reality must be more than an oddity: it must be the very nature of reality.
This is the point at which we lose our nerve; yet the task for the Western rationalist is to find a mechanical model from our experience corresponding to a "world executed in symbols."
The final computer analogy. An example which literally fits this description is the computer simulation, which is a graphic representation created by executing programming code. The programming code itself consists of nothing but symbols, such as 0 and 1. Numbers, text, graphics and anything else you please are coded by unique series of numbers. These symbolic codes have no meaning in themselves, but arbitrarily are assigned values which have significance according to the operations of the computer. The symbols are manipulated according to the various step-by-step sequences (algorithms) by which the programming instructs the computer how to create the graphic representation. The picture presented on-screen to the user is a world executed in colored dots; the computer's programming is a world (the same world) executed in symbols. Anyone who has experienced a computer crash knows that the programming (good or bad) governs the picture, and not vice versa. All of this forms a remarkably tight analogy to the relationship between the quantum math on paper, and the behavior of the "quantumstuff" in the outside world.
Great Neck, New York
May 2, 1999
1. M. Kaku, Hyperspace, at 8n.
2. J. Gribbin, In Search of Schrodinger's Cat, 111.
3. J. Gleick, Genius, 122.
4. R. Penrose, The Emperor's New Mind, 25-26. See also D. Eck, The Most Complex
5. N. Herbert, Quantum Reality, 212-13.
6. "Entangled Trio to Put Nonlocality to the Test," Science 283, 1429 (Mar. 5, 1999).
7. N. Herbert at 41.
8. N. Herbert at 41.
| http://www.bottomlayer.com/bottom/argument/Argument4.html | 13
15 | The structure of the logical syllogism involves premise statements and a conclusion. Generally, syllogisms are made up of three statements (two premises and a conclusion). In those three statements, there are three concepts that are related (A, B, and C). The Syllogism then becomes:
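The original page presents the pattern as a graphic, which is not reproduced here; a standard rendering of the three-statement form it describes (a reconstruction from the surrounding text, not a quotation of the missing figure) is:

- Premise 1: All A are B.
- Premise 2: All B are C.
- Conclusion: Therefore, all A are C.

For example: all whales are mammals; all mammals are animals; therefore, all whales are animals.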
There is also a way to introduce a negative. (Note how this changes the order of the C relationship to A and B.)
Generally speaking, what is the academic definition of deductive reasoning? There seems to be multiple platforms to use deductive reasoning (ex. solving puzzles) or to analyze and create logical arguments, but I don't think I'm fully understanding what deductive reasoning is in a broader, general sense. Furthermore, what are some examples of deductive reasoning techniques that can be applied to any situation?
Deductive reasoning is the act of using clues to find an answer that might not otherwise be apparent. There's a line Spock delivers in Star Trek VI that pretty much sums up what deductive reasoning is all about: "if you eliminate the impossible, whatever remains, however improbable, must be the truth" (which is also very close to what Sherlock Holmes says in the Sir Arthur Conan Doyle stories).
In a practical sense, we use deductive reasoning to problem solve. We try something; if it doesn't work, we try something else. Every time we try something, we either confirm it as the answer we are looking for or eliminate it from consideration. Theoretically, we could systematically assemble a jigsaw puzzle by trying each piece with every other piece. We will eventually get the finished puzzle in this manner, without ever looking at the picture. That might be the ultimate use of deductive reasoning.
Tying this concept to IT Support and Troubleshooting, if we have a problem like no picture on a computer monitor, we can identify possible solutions and then try each one. Is the power switch on? Are all the cables plugged in? Does another monitor work with this computer? Does this monitor work on another computer? Answers to each of these questions can help us narrow down to the eventual solution. That's deductive reasoning.
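A minimal sketch of that elimination process (the checks and names here are hypothetical, not taken from the tutorial): each test either clears a suspected cause or leaves it standing, and whatever survives every check is the likely culprit.

```python
# Deductive troubleshooting as elimination: rule out suspects one check at a time.
def diagnose(checks):
    """checks maps each suspected cause to True if that check ruled the cause out."""
    suspects = [
        "power switch is off",
        "a cable is unplugged",
        "the monitor itself is faulty",
        "the computer's video output is faulty",
    ]
    remaining = [cause for cause in suspects if not checks.get(cause, False)]
    return remaining or ["every suspect eliminated; widen the search"]

# Example run: power and cables check out, and a different monitor works on this
# computer, so only the monitor itself is left standing.
print(diagnose({
    "power switch is off": True,                    # the switch was on
    "a cable is unplugged": True,                   # the cables were seated
    "the computer's video output is faulty": True,  # another monitor works here
}))  # -> ['the monitor itself is faulty']
```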
In my class, we use the logic puzzles and games to help hone deductive reasoning. It doesn't come easy for everybody, and the more you practice it, the more intuitive it becomes.
| http://www.sophia.org/logic-introduction-tutorial?subject=computer-science | 13
24 | Instructional Design (also called Instructional Systems Design (ISD)) is the practice of creating "instructional experiences which make the acquisition of knowledge and skill more efficient, effective, and appealing." The process consists broadly of determining the current state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. Ideally the process is informed by pedagogically (process of teaching) and andragogically (adult learning) tested theories of learning and may take place in student-only, teacher-led or community-based settings. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models but many are based on the ADDIE model with the five phases: analysis, design, development, implementation, and evaluation. As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology, though recently Constructivism (learning theory) has influenced thinking in the field.
History of the System Approach to Instructional Design
1940’s - The Origins of Instructional Design, World War II
- During the war a considerable amount of training materials for the military were developed based on the principles of instruction, learning, and human behavior. Tests for assessing a learner’s abilities were used to screen candidates for the training programs. After the success of military training, psychologists began to view training as a system, and developed various analysis, design, and evaluation procedures.
1946 – Edgar Dale’s Cone of Experience
- In 1946, Dale outlined a hierarchy of instructional methods and their effectiveness.
Editorial Note: The graphic associated with this section has been discredited, and the figures shown in it have no basis in research, and Dale's original model made no such claims. Further information on this can be found here: http://www.brainfriendlytrainer.com/theory/dale%E2%80%99s-cone-of-learning-figures-debunked
Mid-1950s through mid-1960s - The Programmed Instruction Movement
- In B. F. Skinner’s 1954 article “The Science of Learning and the Art of Teaching”, he stated that effective instructional materials, called programmed instructional materials, should include small steps, frequent questions, immediate feedback, and allow self-pacing.
- The Popularization of Behavioral Objectives - Robert Mager popularized the use of learning objectives with his 1962 article “Preparing Objectives for Programmed Instruction”. In the article, he describes how to write objectives including desired behavior, learning condition, and assessment.
- In 1956, a committee led by Benjamin Bloom published an influential taxonomy of what he termed the three domains of learning: Cognitive (what one knows or thinks), Psychomotor (what one does, physically) and Affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.
Early 1960s - The Criterion-Referenced Testing Movement
- Robert Glaser first used the term “criterion-referenced measures” in 1962. In contrast to norm-referenced tests in which an individual's performance is compared to group performance, a criterion-referenced test is designed to test an individual's behavior in relation to an objective standard. It can be used to assess the learners’ entry level behavior, and to what extent learners have developed mastery through an instructional program.
1965 - Domains of Learning, Events of Instruction, and Hierarchical Analysis
- In 1965, Robert Gagné (see below for more information) described five domains of learning outcomes and nine events of instruction in “The Conditions of Learning”, which remain foundations of instructional design practices.
- Gagné’s work in learning hierarchies and hierarchical analysis led to an important notion in instruction – to ensure that learners acquire prerequisite skills before attempting superordinate ones.
1967 - Formative Evaluation
- In 1967, after analyzing the failure of training material, Michael Scriven suggested the need for formative evaluation – e.g., to try out instructional materials with learners (and revise accordingly) before declaring them finalized.
The 1970s - Growing of Interest in the Systems Approach
- During 1970s, the number of instructional design models greatly increased and prospered in different sectors in military, academia, and industry. Many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill for instance developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).
The 1980s - Introduction of Personal Computers into the Design Process
- During this decade, while interest in instructional design continued to be strong in business and the military, there was little evolution of ID in schools or higher education.
- This was the era, however, where educators and researchers began to consider how the personal computer could be used in an educational environment and efforts began to design instruction that utilized this new tool. PLATO (Programmed Logic for Automatic Teaching Operation) is one example of how computers began to be integrated into instruction. Many of the first uses of computers in the classroom were for “drill and skill” exercises. Computer-based educational games and simulations also became popular.
- This is also the time where there is a growing interest in how cognitive psychology can be applied to instructional design. In the late 1980s and throughout the 1990s cognitive load theory began to find empirical support for a variety of presentation techniques.
The 1990s - A Growing Interest in Constructivist Theory and the Importance of Performance
- As constructivist theory began to gain traction, its influence on instructional design became more prominent as a counterpoint to the more traditional cognitive learning theory. Constructivists believe that learning experiences should be “authentic” and produce real-world learning environments that allow the learner to construct their own knowledge. This emphasis on the learner was a significant departure away from traditional forms of instructional design.
- Another trend that surfaced during this period was the recognition of performance improvement as being an important outcome of learning that needed to be considered during the design process.
- The World Wide Web is developed and begins to surface as a potential online learning tool with hypertext and hypermedia being recognized as good tools for e-learning.
- As technology advanced and constructivist theory gained popularity, technology’s use in the classroom began to evolve from mostly drill and skill exercises to more interactive activities that required more complex thinking on the part of the learner.
- Rapid prototyping was first seen during the 1990s. In this process, an instructional design project is prototyped quickly and then vetted through a series of try-and-revise cycles. This is a big departure from traditional methods of instructional design that took far longer to complete.
The 2000s - Rise of the Internet and Online Learning
- The Internet, with its social media tools and multitudes of information resources, became a very popular tool for online learning, and instructional designers recognized the need to integrate e-learning into the creation of learning objects and curricula.
- There is a great increase in the number of online courses offered by higher education institutions.
- Technology advanced to the point that sophisticated simulations were now readily available to learners, thus providing more authentic and realistic learning experiences.
2010 and forward
- The influence of e-tools continues to grow and has seemingly encouraged the growth of informal learning throughout a person’s lifetime. The challenge for instructional designers is how to create learning opportunities that now may occur anywhere and anytime.
Instructional Media History
| Period | Media / technology | Key developments | Impact on education |
|---|---|---|---|
| 1900s | Visual media | School museum as supplementary material (the first school museum opened in St. Louis in 1905) | Materials are viewed as supplementary curriculum materials. The district-wide media center is the modern equivalent. |
| 1914-1923 | Visual media: films, slides, photographs | Visual Instruction Movement | The impact of visual instruction was limited because of teacher resistance to change, the quality of the films, cost, etc. |
| Mid 1920s to 1930s | Radio broadcasting, sound recordings, sound motion pictures | Radio Audiovisual Instruction Movement | Education at large was not impacted. |
| World War II | Training films, overhead projector, slide projector, audio equipment, simulators and training devices | Military and industry at this time had a strong demand for training. | Growth of the audiovisual instruction movement in schools was slow, but audiovisual devices were used extensively in military services and industry. |
| Post World War II | Communication media | Suggested considering all aspects of a communication process (influenced by communication theories). | This viewpoint was at first ignored, but eventually helped to expand the focus of the audiovisual movement. |
| 1950s to mid-1960s | Television | Growth of instructional television | Instructional television was not adopted to a greater extent. |
| 1950s-1990s | Computer | Computer-assisted instruction (CAI) research started in the 1950s and became popular in the 1980s, a few years after computers became available to the general public. | The impact of CAI was rather small and the use of the computer was far from innovative. |
| 1990s-2000s | Internet, simulation | The internet offered opportunities to train many people across long distances. Desktop simulation gave advent to levels of Interactive Multimedia Instruction (IMI). | Online training increased rapidly, to the point where entire curriculums were delivered through web-based training. Simulations are valuable but expensive, with the highest levels used primarily by the military and medical community. |
| 2000s-2010s | Mobile devices, social media | On-demand training moved to people's personal devices; social media allowed for collaborative learning. | The impact of both is too new to be measured. |
Cognitive load theory and the design of instruction
Cognitive load theory developed out of several empirical studies of learners, as they interacted with instructional materials. Sweller and his associates began to measure the effects of working memory load, and found that the format of instructional materials has a direct effect on the performance of the learners using those materials.
While the media debates of the 1990s focused on the influences of media on learning, cognitive load effects were being documented in several journals. Rather than attempting to substantiate the use of media, these cognitive load learning effects provided an empirical basis for the use of instructional strategies. Mayer asked the instructional design community to reassess the media debate, to refocus their attention on what was most important: learning.
By the mid- to late-1990s, Sweller and his associates had discovered several learning effects related to cognitive load and the design of instruction (e.g. the split attention effect, redundancy effect, and the worked-example effect). Later, other researchers like Richard Mayer began to attribute learning effects to cognitive load. Mayer and his associates soon developed a Cognitive Theory of Multimedia Learning.
In the past decade, cognitive load theory has begun to be internationally accepted and begun to revolutionize how practitioners of instructional design view instruction. Recently, human performance experts have even taken notice of cognitive load theory, and have begun to promote this theory base as the science of instruction, with instructional designers as the practitioners of this field. Finally Clark, Nguyen and Sweller published a textbook describing how Instructional Designers can promote efficient learning using evidence-based guidelines of cognitive load theory.
Instructional Designers use various instructional strategies to reduce cognitive load. For example, a common guideline is that onscreen text should not run much beyond 150 words, or that text should be presented in small, meaningful chunks. The designers also use auditory and visual methods to communicate information to the learner.
Gagné's Theory of Instruction
Gagné's instructional theory is widely used in the design of instruction by instructional designers in many settings, and its continuing influence in the field of educational technology can be seen in the more than 130 times that Gagné has been cited in prominent journals in the field during the period from 1985 through 1990. Synthesizing ideas from behaviorism and cognitivism, he provides a clear template, which is easy to follow for designing instructional events. Instructional designers who follow Gagné's theory will likely have tightly focused, efficient instruction.
Overview of Gagné’s instructional theory
A taxonomy of Learning Outcomes
Robert Gagné classified the types of learning outcomes. To identify the types of learning, Gagné asked how learning might be demonstrated. These can be related to the domains of learning, as follows:
- Cognitive Domain
- Verbal information - is stated
- Intellectual skills - label or classify the concepts
- Intellectual skills - to apply the rules and principles
- Intellectual skills - problem solving allows generating solutions or procedures
- Cognitive strategies - are used for learning
- Affective Domain
- Attitudes - are demonstrated by preferring options
- Psychomotor Domain
- Motor skills - enable physical performance
Types of Learning Outcomes
Gagné and Driscoll elaborated on the types of learning outcomes with a set of corresponding standard verbs:
- Verbal Information: state, recite, tell, declare
- Intellectual Skills
- Discrimination: discriminate, distinguish, differentiate
- Concrete Concept: identify, name, specify, label
- Defined Concept: classify, categorize, type, sort (by definition)
- Rule: demonstrate, show, solve (using one rule)
- Higher Order Rule: generate, develop, solve (using two or more rules)
- Cognitive Strategy: adopt, create, originate
- Attitude: choose, prefer, elect, favor
- Motor Skill: execute, perform, carry out
The Nine Events of Instruction (as Conditions of Learning)
According to Gagné, learning occurs in a series of learning events. Each learning event must be accomplished before the next in order for learning to take place. Similarly, instructional events should mirror the learning events:
- Gaining attention: To ensure reception of coming instruction, the teacher gives the learners a stimulus. Before the learners can start to process any new information, the instructor must gain the attention of the learners. This might entail using abrupt changes in the instruction.
- Informing learners of objectives: The teacher tells the learner what they will be able to do because of the instruction. The teacher communicates the desired outcome to the group.
- Stimulating recall of prior learning: The teacher asks for recall of existing relevant knowledge.
- Presenting the stimulus: The teacher gives emphasis to distinctive features.
- Providing learning guidance: The teacher helps the students in understanding (semantic encoding) by providing organization and relevance.
- Eliciting performance: The teacher asks the learners to respond, demonstrating learning.
- Providing feedback: The teacher gives informative feedback on the learners' performance.
- Assessing performance: The teacher requires more learner performance, and gives feedback, to reinforce learning.
- Enhancing retention and transfer: The teacher provides varied practice to generalize the capability.
Some educators believe that Gagné's taxonomy of learning outcomes and events of instruction oversimplify the learning process by over-prescribing. However, using them as part of a complete instructional package can assist many educators in becoming more organized and staying focused on the instructional goals.
Gagné's Influence on Instructional Design Theorists
Robert Gagné’s work has been the foundation of instructional design since the beginning of the 1960s when he conducted research and developed training materials for the military. Among the first to use the term “instructional design”, Gagné developed some of the earliest instructional design models and ideas. These models laid the groundwork for more present-day instructional design models from theorists such as Dick, Carey, and Carey (the Dick and Carey Systems Approach Model), Jerrold Kemp (the Kemp Instructional Design Model), and David Merrill (Merrill’s First Principles of Instruction). Each of these models is based on a core set of learning phases that includes (1) activation of prior experience, (2) demonstration of skills, (3) application of skills, and (4) integration of these skills into real-world activities.
Gagné's main focus for instructional design was how instruction and learning could be systematically connected to the design of instruction. He emphasized the design principles and procedures that need to take place for effective teaching and learning. His initial ideas, along with the ideas of other early instructional designers, can be summed up in Psychological Principles in Systematic Development, which was written by Robert B. Miller and edited by Gagné. Gagné believed in internal learning and motivation, which paved the way for theorists like Merrill, Li, and Jones, who designed the Instructional Transaction Theory; Reigeluth and Stein's Elaboration Theory; and, most notably, Keller's ARCS Model of Motivational Design (see below).
Gagné's Influence on Education Today
Prior to Robert Gagné, learning was often thought of as a single, uniform process. There was little to no distinction between “learning to load a rifle and learning to solve a complex mathematical problem”. Gagné offered an alternative view which developed the idea that different learners require different learning strategies. Understanding and designing instruction based on a learning style defined by the individual brought about new theories and approaches to teaching. Gagné's understanding and theories of human learning added significantly to understanding the stages of cognitive processing and instruction. For example, Gagné argued that instructional designers must understand the characteristics and functions of short-term and long-term memory to facilitate meaningful learning. This idea encouraged instructional designers to include cognitive needs in a top-down instructional approach.
Gagné’s continuing influence on education has been best developed in The Legacy of Robert M. Gagné.
Gagné (1966) defines curriculum as a sequence of content units arranged in such a way that the learning of each unit may be accomplished as a single act, provided the capabilities described by specified prior units (in the sequence) have already been mastered by the learner.
His definition of curriculum has been the basis of many important initiatives in schools and other educational environments. In the late 1950s and early 1960s, Gagné had expressed and established an interest in applying theory to practice with particular interest in applications for teaching, training and learning. Increasing the effectiveness and efficiency of practice was of particular concern. His ongoing attention to practice while developing theory continues to have an impact on education and training.
Gagné's work has had a significant influence on American education, military and industrial training. Gagné was one of the early developers of the concept of instructional systems design which suggests the components of a lesson can be analyzed and should be designed to operate together as an integrated plan for instruction. In "Educational Technology and the Learning Process" (Educational Researcher, 1974), Gagné defined instruction as "the set of planned external events which influence the process of learning and thus promote learning."
Learning design
The concept of learning design arrived in the literature of technology for education in the late nineties and early 2000s with the idea that "designers and instructors need to choose for themselves the best mixture of behaviourist and constructivist learning experiences for their online courses". But the concept of learning design is probably as old as the concept of teaching. Learning design might be defined as "the description of the teaching-learning process that takes place in a unit of learning (e.g., a course, a lesson or any other designed learning event)".
As summarized by Britain, learning design may be associated with:
- The concept of learning design
- The implementation of the concept made by learning design specifications like PALO, IMS Learning Design, LDL, SLD 2.0, etc...
- The technical realisations around the implementation of the concept like TELOS, RELOAD LD-Author, etc...
Difference between Learning Design and Instructional Design
Instructional design models
ADDIE process
Perhaps the most common model used for creating instructional materials is the ADDIE Model. This acronym stands for the 5 phases contained in the model (Analyze, Design, Develop, Implement, and Evaluate).
Brief History of ADDIE’s Development – The ADDIE model was initially developed by Florida State University to explain “the processes involved in the formulation of an instructional systems development (ISD) program for military interservice training that will adequately train individuals to do a particular job and which can also be applied to any interservice curriculum development activity.” The model originally contained several steps under its five original phases (Analyze, Design, Develop, Implement, and [Evaluation and] Control), whose completion was expected before movement to the next phase could occur. Over the years, the steps were revised and eventually the model itself became more dynamic and interactive than its original hierarchical rendition, until its most popular version appeared in the mid-80s, as we understand it today.
The five phases are listed and explained below:
Analyze – The first phase of content development begins with Analysis. Analysis refers to the gathering of information about one’s audience, the tasks to be completed, and the project’s overall goals. The instructional designer then classifies the information to make the content more applicable and successful.
Design – The second phase is the Design phase. In this phase, instructional designers begin to create their project. Information gathered from the analysis phase, in conjunction with the theories and models of instructional design, is meant to explain how the learning will be acquired. For example, the design phase begins with writing a learning objective. Tasks are then identified and broken down to be more manageable for the designer. The final step determines the kind of activities required for the audience in order to meet the goals identified in the Analyze phase.
Develop – The third phase, Development, relates to the creation of the activities being implemented. This stage is where the blueprints in the design phase are assembled.
Implement – After the content is developed, it is then Implemented. This stage allows the instructional designer to test all materials to identify if they are functional and appropriate for the intended audience.
Evaluate – The final phase, Evaluate, ensures the materials achieved the desired goals. The evaluation phase consists of two parts: formative and summative assessment. The ADDIE model is an iterative process of instructional design, meaning at each stage, the designer can assess the project's elements and revise them if necessary. This process incorporates formative assessment, while the summative assessments contain tests or evaluations created for the content being implemented. This final phase is vital for the instructional design team because it provides data used to alter and enhance the design.
Connecting all phases of the model are external and reciprocal revision opportunities. Aside from the internal Evaluation phase, revisions should and can be made throughout the entire process.
Most of the current instructional design models are variations of the ADDIE process.
Rapid prototyping
A sometimes-utilized adaptation of the ADDIE model is the practice known as rapid prototyping.
Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains including software design, architecture, transportation planning, product development, message design, user experience design, etc. In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front. In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive.
However, some consider rapid prototyping to be a somewhat simplistic type of model. As this argument goes, at the heart of Instructional Design is the analysis phase. After you thoroughly conduct the analysis—you can then choose a model based on your findings. That is the area where most people get snagged—they simply do not do a thorough-enough analysis. (Part of Article By Chris Bressi on LinkedIn)
Dick and Carey
Another well-known instructional design model is The Dick and Carey Systems Approach Model. The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction.
Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction as opposed to viewing instruction as a sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction. According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes". The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:
- Identify Instructional Goal(s): goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire
- Conduct Instructional Analysis: Identify what a learner must recall and what a learner must be able to do to perform a particular task
- Analyze Learners and Contexts: Identify general characteristics of the target audience including prior skills, prior experience, and basic demographics; identify characteristics directly related to the skill to be taught; and perform analysis of the performance and learning settings.
- Write Performance Objectives: Objectives consist of a description of the behavior, the condition, and the criteria; the criteria component of an objective describes the standard that will be used to judge the learner's performance.
- Develop Assessment Instruments: Purpose of entry behavior testing, purpose of pretesting, purpose of post-testing, purpose of practice items/practice problems
- Develop Instructional Strategy: Pre-instructional activities, content presentation, Learner participation, assessment
- Develop and Select Instructional Materials
- Design and Conduct Formative Evaluation of Instruction: Designers try to identify areas of the instructional materials that are in need of improvement.
- Revise Instruction: To identify poor test items and to identify poor instruction
- Design and Conduct Summative Evaluation
With this model, components are executed iteratively and in parallel rather than linearly.
Instructional Development Learning System (IDLS)
Another instructional design model is the Instructional Development Learning System (IDLS). The model was originally published in 1970 by Peter J. Esseff, PhD and Mary Sullivan Esseff, PhD in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials.
Peter (1968) & Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Dr. Gabriel Ofiesh, a Founding Father of the Military Model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, "Instructional Development Learning System" (IDLS).
Also see: Managing Learning in High Performance Organizations, by Ruth Stiehl and Barbara Bessey, from The Learning Organization, Corvallis, Oregon. ISBN 0-9637457-0-0.
The components of the IDLS Model are:
- Design a Task Analysis
- Develop Criterion Tests and Performance Measures
- Develop Interactive Instructional Materials
- Validate the Interactive Instructional Materials
Other instructional design models
Other useful instructional design models include: the Smith/Ragan Model, the Morrison/Ross/Kemp Model and the OAR Model of instructional design in higher education, as well as, Wiggins' theory of backward design.
Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning and cognitivism help shape and define the outcome of instructional materials.
Motivational Design
Motivation is defined as an internal drive that activates behavior and gives it direction. The term motivation theory is concerned with the processes that describe why and how human behavior is activated and directed.
Motivation Concepts: Intrinsic and Extrinsic Motivation
- Intrinsic: defined as the doing of an activity for its inherent satisfactions rather than for some separable consequence. When intrinsically motivated a person is moved to act for the fun or challenge entailed rather than because of external rewards. Intrinsic motivation reflects the desire to do something because it is enjoyable. If we are intrinsically motivated, we would not be worried about external rewards such as praise.
- Examples: Writing short stories because you enjoy writing them, reading a book because you are curious about the topic, and playing chess because you enjoy effortful thinking
- Extrinsic: reflects the desire to do something because of external rewards such as awards, money and praise. People who are extrinsically motivated may not enjoy certain activities. They may only wish to engage in certain activities because they wish to receive some external reward.
- Examples: The writer who only writes poems to be submitted to poetry contests, a person who dislikes sales but accepts a sales position because he/she desires to earn an above average salary, and a person selecting a major in college based on salary and prestige, rather than personal interest.
John Keller has devoted his career to researching and understanding motivation in instructional systems. These decades of work constitute a major contribution to the instructional design field: first, by applying motivation theories systematically to design theory; and second, by developing a unique problem-solving process he calls ARCS motivational design.
The ARCS Model of Motivational Design
The ARCS Model of Motivational Design was created by John Keller while he was researching ways to supplement the learning process with motivation. The model is based on Tolman's and Lewin's expectancy-value theory, which presumes that people are motivated to learn if there is value in the knowledge presented (i.e. it fulfills personal needs) and if there is an optimistic expectation for success. The model consists of four main areas: Attention, Relevance, Confidence, and Satisfaction.
Attention and relevance according to John Keller's ARCS motivational theory are essential to learning. The first 2 of 4 key components for motivating learners, attention and relevance can be considered the backbone of the ARCS theory, the latter components relying upon the former.
Attention: The attention mentioned in this theory refers to the interest displayed by learners in taking in the concepts/ideas being taught. This component is split into three categories: perceptual arousal, using surprise or uncertain situations; inquiry arousal, offering challenging questions and/or problems to answer/solve; and variability, using a variety of resources and methods of teaching. Within each of these categories, John Keller has provided further sub-divisions of types of stimuli to grab attention. Grabbing attention is the most important part of the model because it initiates the motivation for the learners. Once learners are interested in a topic, they are willing to invest their time, pay attention, and find out more.
Relevance: Relevance, according to Keller, must be established by using language and examples that the learners are familiar with. The three major strategies John Keller presents are goal oriented, motive matching, and familiarity. Like the Attention category, John Keller divided the three major strategies into subcategories, which provide examples of how to make a lesson plan relevant to the learner. Learners will throw concepts to the wayside if their attention cannot be grabbed and sustained and if relevance is not conveyed.
Confidence: The confidence aspect of the ARCS model focuses on establishing positive expectations for achieving success among learners. The confidence level of learners is often correlated with motivation and the amount of effort put forth in reaching a performance objective. For this reason, it’s important that learning design provides students with a method for estimating their probability of success. This can be achieved in the form of a syllabus and grading policy, rubrics, or a time estimate to complete tasks. Additionally, confidence is built when positive reinforcement for personal achievements is given through timely, relevant feedback.
Satisfaction: Finally, learners must obtain some type of satisfaction or reward from a learning experience. This satisfaction can be from a sense of achievement, praise from a higher-up, or mere entertainment. Feedback and reinforcement are important elements and when learners appreciate the results, they will be motivated to learn. Satisfaction is based upon motivation, which can be intrinsic or extrinsic. To keep learners satisfied, instruction should be designed to allow them to use their newly learned skills as soon as possible in as authentic a setting as possible.
Motivating Opportunities Model
Although Keller’s ARCS model currently dominates instructional design with respect to learner motivation, in 2006 Hardré and Miller proposed a need for a new design model that includes current research in human motivation, a comprehensive treatment of motivation, integrates various fields of psychology and provides designers the flexibility to be applied to a myriad of situations.
Hardré proposes an alternate model for designers called the Motivating Opportunities Model or MOM. Hardré’s model incorporates cognitive, needs, and affective theories as well as social elements of learning to address learner motivation. MOM has seven key components spelling the acronym ‘SUCCESS’- Situational, Utilization, Competence, Content, Emotional, Social, and Systemic. These components are described below.
Influential researchers and theorists
Alphabetic by last name
- Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1955
- Bonk, Curtis – Blended learning – 2000s
- Bransford, John D. – How People Learn: Bridging Research and Practice – 1999
- Bruner, Jerome – Constructivism
- Carey, L. – "The Systematic Design of Instruction"
- Clark, Richard – Clark-Kosma "Media vs Methods debate", "Guidance" debate.
- Clark, Ruth – Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load / Guided Instruction / Cognitive Load Theory
- Dick, W. – "The Systematic Design of Instruction"
- Gagné, Robert M. – Nine Events of Instruction (Gagné and Merrill Video Seminar)
- Hannum, Wallace H., Professor, UNC-Chapel Hill – numerous articles and books; search via Google and Google Scholar
- Heinich, Robert – Instructional Media and the new technologies of instruction 3rd ed. – Educational Technology – 1989
- Jonassen, David – problem-solving strategies – 1990s
- Langdon, Danny G – The Instructional Designs Library: 40 Instructional Designs, Educational Tech. Publications
- Mager, Robert F. – ABCD model for instructional objectives – 1962
- Merrill, M. David – Component Display Theory / Knowledge Objects / First Principles of Instruction
- Papert, Seymour – Constructionism, LOGO – 1970s
- Piaget, Jean – Cognitive development – 1960s
- Piskurich, George – Rapid Instructional Design – 2006
- Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
- Schank, Roger – Constructivist simulations – 1990s
- Sweller, John – Cognitive load, Worked-example effect, Split-attention effect
- Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1999–2010
- Skinner, B.F. – Radical Behaviorism, Programed Instruction
- Vygotsky, Lev – Learning as a social activity – 1930s
See also
- ADDIE Model
- educational assessment
- confidence-based learning
- educational animation
- educational psychology
- educational technology
- e-learning framework
- electronic portfolio
- First Principles of Instruction
- human–computer interaction
- instructional technology
- instructional theory
- interaction design
- learning object
- learning science
- multimedia learning
- online education
- instructional design coordinator
- interdisciplinary teaching
- rapid prototyping
- lesson study
- Understanding by Design
- Merrill, M. D., Drake, L., Lacy, M. J., Pratt, J., & ID2_Research_Group. (1996). Reclaiming instructional design. Educational Technology, 36(5), 5-7. http://mdavidmerrill.com/Papers/Reclaiming.PDF
- Cognition and instruction: Their historic meeting within educational psychology. Mayer, Richard E. Journal of Educational Psychology, Vol 84(4), Dec 1992, 405-412. doi:10.1037/0022-0663.84.4.405 http://psycnet.apa.org/journals/edu/84/4/405/
- Duffy, T. M., & Cunningham, D. J. (1996). Constructivism: Implications for the design and delivery of instruction. In D. Jonassen (Ed.), Handbook of Research for Educational Communications and Technology (pp. 170-198). New York: Simon & Schuster Macmillan
- Duffy, T. M. , & Jonassen, D. H. (1992). Constructivism: New implications for instructional technology. In T. Duffy & D. Jonassen (Eds.), Constructivism and the technology of instruction (pp. 1-16). Hillsdale, NJ: Erlbaum.
- Reiser, R. A., & Dempsey, J. V. (2012). Trends and issues in instructional design and technology. Boston: Pearson.
- Clark, B. (2009). The history of instructional design and technology. Retrieved from http://www.slideshare.net/benton44/history-of-instructional-design-and-technology?from=embed
- Bloom's Taxonomy. Retrieved from Wikipedia on April 18, 2012 at Bloom's Taxonomy
- Instructional Design Theories. Instructionaldesign.org. Retrieved on 2011-10-07.
- Reiser, R. A. (2001). "A History of Instructional Design and Technology: Part II: A History of Instructional Design". ETR&D, Vol. 49, No. 2, 2001, pp. 57–67. Retrieved from https://files.nyu.edu/jpd247/public/2251/readings/Reiser_2001_History_of_ID.pdf
- History of instructional media. Uploaded to YouTube by crozitis on Jan 17, 2010. Retrieved from http://www.youtube.com/watch?v=y-fKcf4GuOU
- A hypertext history of instructional design. Retrieved April 11, 2012 from http://faculty.coe.uh.edu/smcneil/cuin6373/idhistory/index.html
- Markham, R. "History of instructional design". Retrieved on April 11, 2012 from http://home.utah.edu/~rgm15a60/Paper/html/index_files/Page1108.htm
- Lawrence Erlbaum Associates, Inc. – Educational Psychologist – 38(1):1 – Citation. Leaonline.com (2010-06-08). Retrieved on 2011-10-07.
- History and timeline of instructional design. Retrieved April 11, 2012 from http://www.instructionaldesigncentral.com/htm/IDC_instructionaltechnologytimeline.htm
- Braine, B., (2010). "Historical Evolution of Instructional Design & Technology". Retrieved on April 11, 2012 from http://timerime.com/en/timeline/415929/Historical+Evolution+of+Instructional+Design++Technology/
- Sweller, J. (1988). "Cognitive load during problem solving: Effects on learning". Cognitive Science 12 (1): 257–285. doi:10.1016/0364-0213(88)90023-7.
- Chandler, P. & Sweller, J. (1991). "Cognitive Load Theory and the Format of Instruction". Cognition and Instruction 8 (4): 293–332. doi:10.1207/s1532690xci0804_2.
- Sweller, J., & Cooper, G.A. (1985). "The use of worked examples as a substitute for problem solving in learning algebra". Cognition and Instruction 2 (1): 59–89. doi:10.1207/s1532690xci0201_3.
- Cooper, G., & Sweller, J. (1987). "Effects of schema acquisition and rule automation on mathematical problem-solving transfer". Journal of Educational Psychology 79 (4): 347–362. doi:10.1037/0022-06220.127.116.117.
- Mayer, R.E. (1997). "Multimedia Learning: Are We Asking the Right Questions?". Educational Psychologist 32 (1): 1–19. doi:10.1207/s15326985ep3201_1.
- Mayer, R.E. (2001). Multimedia Learning. Cambridge: Cambridge University Press. ISBN 0-521-78239-2.
- Mayer, R.E., Bove, W. Bryman, A. Mars, R. & Tapangco, L. (1996). "When Less Is More: Meaningful Learning From Visual and Verbal Summaries of Science Textbook Lessons". Journal of Educational Psychology 88 (1): 64–73. doi:10.1037/0022-0663.88.1.64.
- Mayer, R.E., Steinhoff, K., Bower, G. and Mars, R. (1995). "A generative theory of textbook design: Using annotated illustrations to foster meaningful learning of science text". Educational Technology Research and Development 43 (1): 31–41. doi:10.1007/BF02300480.
- Paas, F., Renkl, A. & Sweller, J. (2004). "Cognitive Load Theory: Instructional Implications of the Interaction between Information Structures and Cognitive Architecture". Instructional Science 32: 1–8. doi:10.1023/B:TRUC.0000021806.17516.d0.
- Clark, R.C., Mayer, R.E. (2002). e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. San Francisco: Pfeiffer. ISBN 0-7879-6051-9.
- Clark, R.C., Nguyen, F., and Sweller, J. (2006). Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load. San Francisco: Pfeiffer. ISBN 0-7879-7728-4.
- Anglin, G. J., & Towers, R. L. (1992). Reference citations in selected instructional design and technology journals, 1985-1990. Educational Technology Research and DEevelopment, 40, 40-46.
- Perry, J. D. (2001). Learning and cognition. [On-Line]. Available: http://education.indiana.edu/~p540/webcourse/gagne.html
- Driscoll, Marcy P. 2004. Psychology of Learning and Instruction, 3rd Edition. Allyn & Bacon.
- Gagné, R. M. (1985). The conditions of learning (4th ed.). New York: Holt, Rinehart & Winston.
- Gagné, R. M., & Driscoll, M. P. (1988). Essentials of learning for instruction. Englewood Cliffs, NJ: Prentice-Hall.
- Haines, D. (1996). Gagné. [On-Line]. Available: http://education.indiana.edu/~educp540/haines1.html
- Dowling, L. J. (2001). Robert Gagné and the Conditions of Learning. Walden University.
- Dick, W., & Carey, L. (1996). The systematic design of instruction. 4th ed. New York, NY: Harper Collin
- Instructional Design Models and Theories, Retrieved April 9, 2012 from http://www.instructionaldesigncentral.com/htm/IDC_instructionaldesignmodels.htm#kemp,
- Instructional Design Models and Theories, Retrieved April 9th 2012 from http://www.instructionaldesigncentral.com/htm/IDC_instructionaldesignmodels.htm#kemp
- Psychological Principles in System Development-1962. Retrieved on April 15, 2012 from http://www.nwlink.com/~donclark/history_isd/gagne.html
- Merrill, D.M., Jones, M.K., & Chongqing, L. (December 1990). Instructional Transaction Theory. Retrieved from http://www.speakeasydesigns.com/SDSU/student/SAGE/compsprep/ITT_Intro.pdf
- Elaboration Theory (Charles Reigeluth), Retrieved April 9, 2012 from http://www.instructionaldesign.org/theories/elaboration-theory.html
- Wiburg, K. M. (2003). [Web log message]. Retrieved from http://www.internettime.com/itimegroup/Is it Time to Exchange Skinner's Teaching Machine for Dewey's.htm
- Richey, R. C. (2000). The legacy of Robert M.Gagné . Syracuse, NY: ERIC Clearinghouse on Information & Technology.
- Gagné, R.M. (n.d.). Biographies. Retrieved April 18, 2012, from Answers.com Web site: http://www.answers.com/topic/robert-mills-gagn
- Conole G., and Fill K., "A learning design toolkit to create pedagogically effective learning activities". Journal of Interactive Media in Education, 2005 (08).
- Carr-Chellman A. and Duchastel P., "The ideal online course," British Journal of Educational Technology, 31(3), 229–241, July 2000.
- Koper R., "Current Research in Learning Design," Educational Technology & Society, 9 (1), 13–22, 2006.
- Britain S., "A Review of Learning Design: Concept, Specifications and Tools" A report for the JISC E-learning Pedagogy Programme, May 2004.
- IMS Learning Design webpage. Imsglobal.org. Retrieved on 2011-10-07.
- Branson, R. K., Rayner, G. T., Cox, J. L., Furman, J. P., King, F. J., Hannum, W. H. (1975). Interservice procedures for instructional systems development. (5 vols.) (TRADOC Pam 350-30 NAVEDTRA 106A). Ft. Monroe, VA: U.S. Army Training and Doctrine Command, August 1975. (NTIS No. ADA 019 486 through ADA 019 490).
- Piskurich, G.M. (2006). Rapid Instructional Design: Learning ID fast and right.
- Saettler, P. (1990). The evolution of American educational technology.
- Stolovitch, H.D., & Keeps, E. (1999). Handbook of human performance technology.
- Kelley, T., & Littman, J. (2005). The ten faces of innovation: IDEO's strategies for beating the devil's advocate & driving creativity throughout your organization. New York: Doubleday.
- Hokanson, B., & Miller, C. (2009). Role-based design: A contemporary framework for innovation and creativity in instructional design. Educational Technology, 49(2), 21–28.
- Dick, Walter, Lou Carey, and James O. Carey (2005) . The Systematic Design of Instruction (6th ed.). Allyn & Bacon. pp. 1–12. ISBN 0-205-41274-2.
- Esseff, Peter J. and Esseff, Mary Sullivan (1998) . Instructional Development Learning System (IDLS) (8th ed.). ESF Press. pp. 1–12. ISBN 1-58283-037-1.
- ESF, Inc. – Train-the-Trainer – ESF ProTrainer Materials – 813.814.1192. Esf-protrainer.com (2007-11-06). Retrieved on 2011-10-07.
- Smith, P. L. & Ragan, T. J. (2004). Instructional design (3rd Ed.). Danvers, MA: John Wiley & Sons.
- Morrison, G. R., Ross, S. M., & Kemp, J. E. (2001). Designing effective instruction, 3rd ed. New York: John Wiley.
- Joeckel, G., Jeon, T., Gardner, J. (2010). Instructional Challenges In Higher Education: Online Courses Delivered Through A Learning Management System By Subject Matter Experts. In Song, H. (Ed.) Distance Learning Technology, Current Instruction, and the Future of Education: Applications of Today, Practices of Tomorrow. (link to article)
- R. Ryan; E. Deci. "Intrinsic and Extrinsic Motivations". Contemporary Educational Psychology. Retrieved April 1, 2012.
- Brad Bell. "Intrinsic Motivation and Extrinsic Motivation with Examples of Each Types of Motivation". Blue Fox Communications. Retrieved April 1, 2012.
- Keller, John. "arcsmodel.com". John M. Keller. Retrieved April 1, 2012.
- Ely, Donald (1983). Development and Use of the ARCS Model of Motivational Design. Libraries Unlimited. pp. 225–245.
- Hardré, Patricia; Miller, Raymond B. (2006). "Toward a current, comprehensive, integrative, and flexible model of motivation for instructional design". Performance Improvement Quarterly 19 (3).
- Hardré, Patricia (2009). "The motivating opportunities model for Performance SUCCESS: Design, Development, and Instructional Implications". Performance Improvement Quarterly 22 (1). doi:10.1002/piq.20043.
- Instructional Design – An overview of Instructional Design
- ISD Handbook
- Edutech wiki: Instructional design model
- Debby Kalk, Real World Instructional Design Interview
There are many definitions of logic as a field of study. One handy definition for Day One of an introductory course like this is that logic is the study of argument. For the purposes of logic, an argument is not a quarrel or dispute, but an example of reasoning in which one or more statements are offered as support, justification, grounds, reasons, or evidence for another statement. The statement being supported is the conclusion of the argument, and the statements that support it are the premises of the argument.
Studying argument is important because argument is the way we support our claims to truth. It is tempting to say that arguments establish the truth of their conclusions. But the study of logic forces us to qualify this statement. Arguments establish the truth of conclusions relative to some premises and rules of inference. Logicians do not care whether arguments succeed psychologically in changing people's minds or convincing them. The kinks and twists of actual human reasoning are studied by psychology; the effectiveness of reasoning and its variations in persuading others are studied by rhetoric; but the correctness of reasoning (the validity of the inference) is studied by logic.
To assess the worth of an argument, only two aspects or properties of the argument need be considered: the truth of the premises and the validity of the reasoning from them to the conclusion. Of these, logicians study only the reasoning; they leave the question of the truth of the premises to empirical scientists and private detectives.
An argument is valid if the truth of its premises guarantees the truth of its conclusion; or if the conclusion would necessarily be true on the assumption that all the premises were true; or if it is impossible for the conclusion to be false and all the premises true at the same time; or if the conclusion can be deduced from the premises in accordance with certain valid rules. It turns out that all these formulations are equivalent. (The last formulation is that of syntactic validity, the rest are formulations of semantic validity.) If an argument is not valid, it is invalid.
Note that only arguments can be valid or invalid, not statements. Similarly, only statements can be true or false, not arguments. Validity pertains to reasoning, not propositions, while truth pertains to propositions, not reasoning. The first fundamental principle of logic is the independence of truth and validity.
When the reasoning in an argument is valid and all its premises are true, then it is called sound. Otherwise the argument is unsound. If an argument is sound, then its conclusion must be true and we would be illogical to disbelieve it.
An argument is deductive if the premises claim to give conclusive grounds for the truth of the conclusion, or if the premises claim to support the conclusion with necessity. An argument is inductive if it makes the milder claim that its premises support but do not guarantee its conclusion. The black and white categories of validity and invalidity apply only to deductive arguments; inductive arguments are strong or weak. In a valid deductive argument with all true premises, the truth of the conclusion is necessary and its falsehood is impossible. In a strong inductive argument with all true premises, the truth of the conclusion is merely probable and its falsehood merely improbable. The kind of support that valid deductions provide their conclusions is not a matter of degree; it is "all or nothing". But the kind of support that strong inductions provide their conclusions is a matter of degree; it is "more or less". The conclusion of a valid deduction never contains more information than was contained in the premises; the conclusion of an induction always does. That is why deductions possess certainty (they never tell us anything new) and why inductions are always uncertain in some degree.
Do not confuse inductions with bad deductions. The difference between deduction and induction is not the difference between good and bad reasoning, but between two ways to support the truth of conclusions. Deduction is the subject of a rigorous exact science; induction, unfortunately, is not.
A fallacy is a bad method of argument, whether deductive or inductive. Arguments can be "bad" (or unsound) for several reasons: one or more of their premises may be false, or irrelevant, or the reasoning from them may be invalid, or the language expressing them may be ambiguous or vague. There are certainly an infinity of bad arguments; there may even be an infinity of ways of arguing badly. The name fallacy is usually reserved for typical faults in arguments that we nevertheless find persuasive. Studying them is therefore a good defense against deception.
This file is an electronic hand-out for the course, Symbolic Logic.
Peter Suber, Department of Philosophy, Earlham College, Richmond, Indiana, 47374, U.S.A.
email@example.com. Copyright © 1997, Peter Suber.
An algorithm is a procedure for carrying out a task which, given an initial state, will terminate in a clearly defined end-state. It can be thought of as a recipe, or a step-by-step method, so that in following the steps of the algorithm one is guaranteed to find the solution or the answer. One commentator describes it as:
"a finite procedure, written in a fixed symbolic vocabulary, governed by precise instructions, moving in discrete Steps 1, 2, 3, ..., whose execution requires no insight, cleverness, intuition, intelligence, or perspicuity, and that sooner or later comes to an end."
The name is derived from a corruption of Al-Khwārizmī, the Persian astronomer and mathematician who wrote a treatise in Arabic in 825 AD entitled: On Calculation with Hindu Numerals. This was translated into Latin in the 12th century as Algoritmi de numero Indorum. The title translates as "Al-Khwārizmī on the numbers of the Indians", with Algoritmi being the translator's rendition of the original author's name in Arabic. Later writers misunderstood the title, and treated Algoritmi as a Latin plural of the neologism Algoritmus or Algorithmus, and took its meaning to be 'calculation method'. In the Middle Ages there were many variations of this, such as alkorisms, algorisms, algorhythms etc.
There are two main current usages:
- In elementary education, where an 'algorithm' is used for calculation, such as the decomposition algorithm, or the equal addition method of subtraction. Many of these algorithms are, in fact derivations from methods in Al-Khwārizmī's original treatise.
- In computing, where an algorithm is the methodology which underlies a computer program. This tells the computer what specific steps to perform (in what specific order) in order to carry out a specified task, such as calculating employees’ paychecks or printing students’ report cards.
All algorithms must adhere to two rules -
- They must work for any given input or network.
- They must have a single start point, and a single finish point.
There is a second type of algorithm: whereas most algorithms provably reach the most desirable end-state (for example, it is possible to prove mathematically that Dijkstra's algorithm gives the shortest route from one point to another), others, known as 'heuristic algorithms', cannot be proven to give the best solution (although they do give a fairly good result). While this may seem inferior, some problems are very difficult or even impossible to solve exactly with normal algorithms, so heuristic ones are superior in these cases.
Examples of Algorithms
Dijkstra's algorithm finds the shortest path through a network, from one point to another.
- Start by labelling the first node (1,0,0). The first number, the ordinal value, denotes the order in which the arcs were labelled, the second, the label value (the distance travelled thus far), while the third, the working value, denotes the possible distance to that point.
- Then, update the working values for nodes separated to the current node(s) by one arc, by adding the weight of the arc to that node to the label value of the node at the other end. This value may decrease during the progression of the algorithm, as new, shorter, routes are found.
- Choose the node with the lowest working value, and promote its working value to the label value, and write in its new ordinal value. For example, if this was the second node labelled, its complete annotation would now read (2, x, x), where 'x' would be the distance between it and the first node.
- Now return to the second step. If there are no more nodes to be added, the algorithm has finished.
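The labelling procedure above translates quite directly into code. Below is a minimal Python sketch that uses a priority queue in place of hand-labelling; the `network` dictionary and its weights are invented purely for illustration, and the sketch returns only the final label values (shortest distances), not the ordinal values or the routes themselves.

```python
import heapq

def dijkstra(graph, start):
    """Shortest distance from `start` to every reachable node.

    `graph` maps each node to a dict of {neighbour: arc weight}.
    The returned dict holds the final label values described above.
    """
    best = {start: 0}            # lowest working value found so far
    queue = [(0, start)]         # (working value, node), smallest first
    while queue:
        d, node = heapq.heappop(queue)       # promote the lowest working value
        if d > best.get(node, float("inf")):
            continue                         # stale entry; node already promoted
        for neighbour, weight in graph[node].items():
            candidate = d + weight           # label value plus arc weight
            if candidate < best.get(neighbour, float("inf")):
                best[neighbour] = candidate
                heapq.heappush(queue, (candidate, neighbour))
    return best

# A small made-up network: the weights are the arc lengths.
network = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 10},
    "C": {"E": 3},
    "D": {"F": 11},
    "E": {"D": 4},
    "F": {},
}
print(dijkstra(network, "A"))  # {'A': 0, 'B': 4, 'C': 2, 'E': 5, 'D': 9, 'F': 20}
```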
The planarity algorithm is used to determine whether a given shape is 2-dimensional or not. It has found particular use in road and circuit board design to avoid cross-overs.
- Identify a Hamiltonian Cycle in the network.
- Redraw the network with the Hamiltonian cycle on the outside.
- Identify any crossings between lines.
- Choose an arc with crossings to stay inside the cycle. Move any arcs with crossings to the outside.
- Repeat from step 3 until there are no more crossings. If this step is completed, it shows that the shape is planar (2-d). If it cannot be completed, the shape is 3-dimensional and not planar.
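In practice, planarity is usually checked with a library routine rather than by hand. The short Python example below assumes the NetworkX package is available; note that its `check_planarity` routine uses its own internal algorithm rather than the Hamiltonian-cycle method sketched in the steps above, but it answers the same question.

```python
import networkx as nx

# K4, the complete graph on four vertices, can be drawn with no crossings;
# K5 cannot, so it is not planar.
k4 = nx.complete_graph(4)
k5 = nx.complete_graph(5)

print(nx.check_planarity(k4)[0])  # True
print(nx.check_planarity(k5)[0])  # False
```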
Critical path analysis
Critical path analysis is used to determine the fastest/most efficient way of completing a series of tasks, which often depend upon each other.
Kruskal's algorithm is used to find the minimum spanning tree in a network that you can see all of.
Prim's algorithm is used to find the minimum spanning tree when only some of a network is visible.
The Bellman-Ford algorithm is used to find the shortest path in a graph with negative weighted edges.
The Krim-Jacob algorithm is used to determine if a problem can be solved with abstract time and memory.
The Bin filling algorithm is used to find the most efficient way of combining several differently sized objects into a space(s) with a certain size. An example would be the problem of packing objects into the boot of a car; the bin-filling algorithm is in fact a mathematically formulated version of the rule of thumb 'put the big things in first'; but it can be used for many other problems - loading cars onto ferries, sending messages via routers on the internet, etc. It is a heuristic algorithm, as it does not give a provably optimal solution (the only way to do this, until Vijay Vazirani invented a new form of approximation algorithm, was to arrange the objects in every conceivable order).
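The 'put the big things in first' rule of thumb is usually formalised as the first-fit decreasing heuristic. Here is a minimal Python sketch; the item sizes and bin capacity are invented for illustration, and, being a heuristic, the packing it produces is good but not guaranteed to be optimal.

```python
def first_fit_decreasing(sizes, capacity):
    """Pack items into bins of the given capacity, largest items first.

    Each item goes into the first existing bin with enough room;
    a new bin is opened only when nothing fits.
    """
    bins = []                       # each bin is a list of item sizes
    for item in sorted(sizes, reverse=True):
        for contents in bins:
            if sum(contents) + item <= capacity:
                contents.append(item)
                break
        else:                       # no existing bin had room
            bins.append([item])
    return bins

# Packing seven objects into car boots that each hold 10 units
print(first_fit_decreasing([7, 5, 4, 3, 3, 2, 2], 10))
# [[7, 3], [5, 4], [3, 2, 2]]
```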
- David Berlinski, The Advent of the Algorithm: The Idea that Rules the World (Harcourt Inc.: 2000).
1. Arguments & Truth
Building Arguments with Statements
The Logic & Language series introduced logic as a way of representing and analyzing sentences, but I skirted around questions of truth and establishing good arguments. I'd like to take a stab at those topics in this video.
We've paid attention to the structure of sentences and used symbolic logic to represent that structure. On that front, logic provides a handy way to sort out and clarify statements. Please make sure to go through those videos - if you understand that material, you can see that the logic of a statement like 'water is liquid' may be represented as (∀x)Wx⊃Lx (for all x, if x is water then x is a liquid). Now that you have that kind of grasp on logical statements, you can extend your logic to whole arguments with multiple statements.
When I say argument, I'm afraid you may be hearing 'heated exchange of words'. I mean instead a series of statements made in support of an assertion, together with the assertion drawn from those statements. The supporting statements are premises of the argument and the assertion that follows from them is the conclusion.
Right now I want to convince you that coffee is a food, not a beverage. Yeah, you heard that right. I'll start with some sentences we both accept. (1) Coffee is a bean (C⊃B). Remember your predicates here - I'm stating that (∀x)Cx⊃Bx. (2) A bean is a food, or (B⊃F). So it follows that (3) coffee is a food (C⊃F)!
I brought you to a conclusion using two supporting premises. You might think I'm pulling something over on you here - maybe you can't even put your finger on it. Hold your criticism for now and consider the structure of my argument first.
The Form of an Argument (formal matters)
Is the form of my "Coffee is food" argument any good? Let's abstract the argument to A is B, B is C, therefore A is C. You'll find that, however you fill in this structure, the argument seems to work. I bet you're even smart enough to see why, you really are, so let's move along.
If, on the other hand, I tried to fill out an argument with a structure like (premise 1) A is B, (premise 2) A is C, (conclusion) therefore B is C, it turns out that the argument doesn't work. Apply this form to a few cases, and you'll figure out pretty quickly that it fails.
Arguments with a working structure are valid. Arguments that are broken get the negative label invalid (please read that as in-vá-lid). It would be premature of us to stop there and declare that we have a watertight argument. We've checked our structure, but we haven't thought about whether we've filled our structure with correct statements.
The Content of an Argument (informal matters)
The premises of an argument can be true or false, and the conclusion true or false as a result. I could propose the argument that all soups are cold (A⊃B), anything that's cold can't burn your mouth (B⊃C), so soups can't burn your mouth (A⊃C). The "Soup can't burn" argument has a valid structure, but the first of its supporting premises is false, which impacts the conclusion I'm drawing.
If our argument is valid and has true premises, it's a sound argument. This may strike you as straightforward, but it's easy to get lazy (or manipulative) with your logic and end up producing or being convinced by unsound arguments. (Or, should I say, it's easy for the other guy). Notice that either truth or validity can impact the soundness of your argument - unsound arguments include an invalid argument with true premises, a valid argument with one or more untrue premises, and an invalid argument with untrue premises; only a valid argument with true premises is sound.
You may be aware of this, but our argument structure introduced above - A is B, B is C, so A is C (dogs have brains, brains have neurons, dogs have neurons) - is called a syllogism. So my "Coffee is food" is a valid syllogism with, I will assert, true premises. If that's all correct, the argument is sound!
Not all arguments are syllogisms. Notice that the syllogism has two conditional premises: if A then B, if B then C. The conclusion follows from these conditions - therefore if A then C. We could just use one conditional premise - if A then B. If we assert in a second premise that A, we conclude that B.
Deductive Reasoning & Inductive Reasoning
I haven't looked outside these arguments to support them. I just relied on the structure and persuasiveness of the premises to make my point. The meaning and words are all there for you to evaluate, and if these are good, the conclusion's good, too. That's deductive reasoning (deduction), and the resulting argument is called a deductive argument.
Once I start checking my facts outside my bare words, things get trickier. If I want to support my assertion that coffee is beans, I have to use inductive reasoning (induction) to look at specific information in the real world. I then make abstract generalizations about 'all coffee' even though I could never hope to check if what I'm saying is true even of a fraction of the world's coffee. Such claims are probabilistic and open (until I can demonstrate that they are false), and arguments that take this line of reasoning are inductive.
For instance, I can't demonstrate that my first premise is true of all coffee ever, but I can state things that probably hold, which requires inductive reasoning. You might instinctively perform inductive reasoning by checking against your own experience ("Have I seen coffee beans?"; "Have I ever seen coffee that's not beans?"), but there are even more methodical approaches, such as operationalizing (clearly defining for observation) "coffee", "beans" and "food", or statistical sampling.
We called valid and true deductive arguments sound. When we come across inductive arguments, it's preferable to talk about cogency. A cogent argument is "valid" (strong) and has probable premises.
I've thrown around the words true and truth. If you're like most of us, you intuitively feel you have some understanding of what those words mean. The difference between deductive and inductive thinking may have raised a few issues, but let's tackle the issue of truth straight on. Digest each one of the following questions, and use them to get a broad view on different theories of truth in logic & philosophy:
- Is a statement true when it aligns with, is consistent with and doesn't contradict other true statements? (Coherence)
- Is a statement true when it corresponds to something in the real world? (Correspondence)
- Can certain statements be asserted as true in and of themselves, self-evidently? Will truth then deductively follow from these assumptions? (Foundationalism)
- Is a statement true when it proves to be useful or practical? (Pragmatic accounts)
- Is a statement true when enough people argue or believe that it is true? (Consensus)
- Is truth not an actual property of a statement at all, but something else? Can 'true' or 'the truth' ever be predicated in a meaningful, non-redundant way? (Deflationism)
Whatever your intuitions, what do you make of my "coffee is a food" argument? Do you think it's sound? In the next video, I'll share a perspective on logical fallacies and take another look at that argument.
2. Logical Fallacies
When you're arguing a point using logic, you establish premises to support a conclusion. Many times, your conclusion doesn't actually follow from your line of reasoning, no matter how much you or your opponent are convinced by your argument. In that case, you may be relying on a fallacy to persuade yourself and your opponent. This word "fallacy" sounds charged and can certainly get tossed from side to side in heated debates, but in logic a fallacy is merely a descriptive term, and says nothing about how dumb you are nor about whether or not there exists some real way to support the same conclusion.
Let me show you how it works. I made an argument in the previous section demonstrating that coffee isn't a beverage, but a food. In this argument, I relied on two distinct meanings of the word 'coffee', and two different meanings of the word 'beans' in my premises. I was intentionally vague, and used compact language to hide my choices. This simple language makes the argument much more convincing than if I disclosed the intended meaning of these words. (I relied on equivocation).
The result: the conclusion looks like it follows from the premises, but it's really not supported by them. The conclusion may or may not be true, but it is uncalled for here. This reliance on the psychological persuasiveness of a statement is a hallmark feature of informal fallacies, which are informal because the truthfulness of the conclusion's content can't be derived from the truthfulness of the content of the premises. Still, the argument's form may be perfectly valid, like my 'coffee is food' syllogism.
Here are some examples of informal fallacies:
- argumentum ad baculum (appeal to force; threatening into submission): "I'll hit you if you say coffee is a drink. So coffee is a food!"
- argumentum ad hominem (focus on character flaws & bias rather than the argument at hand): "John's a loudmouth, and he says coffee is a drink. So coffee must be a food."
- argumentum ad verecundiam (focus on authority, expertise or credentials rather than the argument at hand): "Dr. John has two PhDs, and he says coffee is a food. So coffee is a food."
- argumentum ad ignorantiam (asserts a conclusion based on something unknown): "We don't know everything about food or coffee yet. For all we know, maybe coffee is a food. So let's accept that coffee is a food."
- petitio principii (begging the question; asserting the conclusion to demonstrate the conclusion): "Since coffee is a food retail product, coffee is a food."
And the list goes on and on. In all cases, informal fallacies rely on content that does something other than support the conclusion in order to get you to accept the conclusion. You can get a feel for how the logic of the supporting line of reasoning actually runs if you just remove the fallacious statement: "I'll hit you if you say coffee is a drink, so coffee is a drink." Not so reasonable now, is it?
Recall that arguments don't just have content, they also have form. We called an argument with well-structured premises and a conclusion valid. It may not surprise you to hear of formal fallacies, which arise when we shape the argument in a way that the conclusion doesn't follow from the premises, but people can still be persuaded by the conclusion because it seems to follow. Let's play around with a few arguments to see if you can spot weaknesses in their structure.
First, a refresher: a basic syllogism. Fill it in as you like; we've seen that this form works:
Premise 1: A ⊃ B
Premise 2: B ⊃ C
Conclusion: ∴ A ⊃ C
Next, a second point of reference, and an even simpler argument. If P then Q. P, so Q. This is a basic reasoning strategy, and it works!
P1: P ⊃ Q
P2: P
C: ∴ Q
What about this one? A is not B, B is not C, therefore A is C. Think about it for a moment, and check against other examples if you need to. The first premise only supports that A is NOT B, so anything we say about B in the second premise can't apply to A, positively or negatively. The conclusion doesn't follow.
P1: A ⊃ ~B
P2: B ⊃ ~C
C: ∴ A ⊃ C
This next argument runs aground on the same issue as the last. A is not B, so no matter what we say about B, it doesn't apply to A.
A ⊃ ~B
B ⊃ C
∴ A ⊃ ~C
What's wrong below? I've negated P in the conditional statement, then assumed that Q must also be negated. Perhaps Q is conditioned on something else. (If I flip the switch the light turns on; the light's on, but maybe someone else flipped some other switch!)
P ⊃ Q
~P
∴ ~Q
Spot the problem in the next argument? No? Look closer. Keep looking... because you won't find it. This one's good, and I leave it to you and your clever ways to figure out why.
P ⊃ Q
~Q
∴ ~P
One more for you. Can I argue that what follows from a conditional implies what precedes? Consider this form carefully. A implies B only states that if A holds, B also holds. It does NOT mean that B always & only follows from A. Come on, you remember your logical biconditionals, right?
A ⊃ B
B
∴ A
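If you'd rather check these forms mechanically than by squinting at them, a brute-force truth table does the trick: run through every combination of truth values and see whether any row makes all the premises true while the conclusion comes out false. Here's a minimal Python sketch of that idea; the helper names and the handful of forms tested are just choices for the demonstration.

```python
from itertools import product

def implies(p, q):
    # The material conditional p ⊃ q is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    """A form is valid if no assignment of truth values makes every
    premise true while the conclusion is false."""
    for p, q in product([True, False], repeat=2):
        if all(prem(p, q) for prem in premises) and not conclusion(p, q):
            return False            # counterexample row found
    return True

# P ⊃ Q, P, ∴ Q -- the good basic form shown earlier
print(valid([lambda p, q: implies(p, q), lambda p, q: p],
            lambda p, q: q))        # True

# P ⊃ Q, ~P, ∴ ~Q -- denying the antecedent (invalid)
print(valid([lambda p, q: implies(p, q), lambda p, q: not p],
            lambda p, q: not q))    # False

# A ⊃ B, B, ∴ A -- affirming the consequent (invalid)
print(valid([lambda a, b: implies(a, b), lambda a, b: b],
            lambda a, b: a))        # False
```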
That's all for fallacies (well, for this introduction to fallacies - I'm sure you'll encounter plenty more!) These last two topics introduced arguments and fallacies. I'll add them to my playlist for the intro to logic if you're visiting the Youtube channel and to the lessons on nativlang.com. I hope you've enjoyed a bit more logic, and thanks for learning with me.
A critical thinking rubric is a typical rubric that is commonly used by teachers to gauge a student’s critical thinking skills. There are many factors, and they differ from one critical thinking rubric to the next, but most of the factors are similar. Aside from assisting in scoring reports, these rubrics give teachers a standard by which to judge critical thinking and can help the teacher improve the entire class’s critical thinking abilities. The main problem of using this rubric is that it may be subjective according to the user and how he thinks the student has applied critical thinking.
Perhaps the most common people who use a critical thinking rubric are teachers. This primarily is used to judge how well a student has applied critical thinking to a report, and it can be used for scoring. Aside from reports, this also can be used for other projects or as a means to check how the student is doing outside of schoolwork. Other people may use this rubric to judge their own or other people’s critical thinking skills, but the rubric typically is made for teacher use.
When the critical thinking rubric is used, there are many factors that are used to create an overall score of a student’s critical thinking. These factors often concern how well the student used references in context, the student’s ability to explain situations or references, and the strength of the student’s thesis or theme. Depending on the rubric, each factor typically can be scored between 1 and 5, with 1 showing poor critical thinking.
Critical thinking often is considered a good quality that teachers try to foster in students, and a critical thinking rubric is able to assist with this. By observing students, or through judging class work, this rubric can be used to show the teacher the class’s average critical thinking power. From here, the teacher can attempt to improve critical thinking, if needed.
Just like most rubrics, there is one problem that affects the use of a critical thinking rubric: the teacher’s subjectivity. For example, one teacher may grade a student as a 3 for a certain factor, while another might grade the student at a 4. From how most rubrics are created, the problems should be minimal and teachers should give a similar average score, but the potential for this problem still is there. For this reason, teachers may have to attend seminars to understand what standards to apply when using a rubric.
This chapter reviews logical rules that produce valid arguments and common rule violations
that lead to fallacies.
Understanding fallacies helps us to avoid committing them and to recognize fallacious
arguments made by others.
Reasoning can be inductive or deductive.
Deductive reasoning is what we call "logic" informally.
It is a way of thinking mathematically about all kinds of things:
Given a set of assumptions (premises), what must then be true?
In contrast, inductive reasoning attempts to generalize from experience (data) to new situations:
How strong is the evidence that something is true or false about the world?
Inductive reasoning is inherently uncertain.
Deductive reasoning—if logical—is as certain as
mathematics can be.
Much of the meat of Statistics, covered in other chapters, concerns inductive reasoning.
Exceptional care is needed to draw reliable conclusions by inductive reasoning.
And good inductive reasoning requires correct deductive reasoning, the subject of this chapter.
Deductive reasoning that is mathematically correct (logical) is valid.
Deductive reasoning that is incorrect (logically faulty, illogical) is invalid.
Reasoning can be valid even if the assumptions on which it is based are false.
If reasoning is valid and based on true premises, it is sound.
Many deductive and inductive arguments rely on statistical evidence.
Even the best statistical evidence can lead to wrong conclusions if it is used in
a fallacious argument.
The difficulty many people have understanding statistics makes statistics
especially effective "red herrings" to distract the listener.
Moreover, statistics can give fallacious arguments an undeserved air of scientific precision.
Fallacies have been studied at least since classical times.
Validity, soundness, and formal fallacies (which result from not following valid rules of logic or misapplying valid rules) are treated in detail in this chapter; other chapters of the book take up categories of things, the post hoc ergo propter hoc fallacy, the principle of insufficient reason and the appeal to ignorance fallacy, the base rate fallacy and the Prosecutor's fallacy, sampling design and the hasty generalization fallacy, and Simpson's paradox, a version of the composition fallacy.
An argument is a sequence of statements, one of
which is called the conclusion.
The other statements are premises (assumptions).
The argument presents the premises—collectively— as evidence that the
conclusion is true.
For instance, the following is an argument:
If A is true then B is true.
A is true.
Therefore, B is true.
The conclusion is that B is true.
The premises are If A is true then B is true
and A is true.
The premises support the conclusion that B is true.
The word "therefore" is not part of the conclusion: It is a signal that the statement after it
is the conclusion.
The words thus, hence, so, and the phrases it follows that,
we see that, and so on, also flag conclusions.
The words suppose, let, given, assume, and so on, flag premises.
A concrete argument of the form just given might be:
If it is sunny, I will wear sandals. It is sunny. Therefore, I will wear sandals.
Here, A is "it is sunny" and
B is "I will wear sandals."
We usually omit the words "is true."
So, for example, the previous argument would be written
If A then B. A. Therefore, B.
The statement not A means that A is false.
An argument is valid
if the conclusion must be true whenever the premises are true.
In other words, an argument is valid if the truth of
its premises guarantees the truth of its conclusion.
Stating the conclusion explicitly is in some sense redundant, because the conclusion follows from
it serves to draw our attention to the fact that that particular statement
is one (of many) that must be true if the premises are true.
An argument that is not valid is invalid
If an argument is valid and its premises are true, the argument is sound.
If an argument is not sound it is unsound.
An argument can be valid even if its premises are false—but
such an argument is unsound.
For instance, the following argument is valid but unsound:
Cheese more than a billion years old is stale.
The Moon is made of cheese.
The Moon is more than a billion years old.
Therefore, the Moon is stale cheese.
If all three premises were true, the conclusion would have to be true.
The argument is valid despite the fact that the Moon is not made of cheese,
but the argument is unsound—because the Moon is not made of cheese.
The logical form of the argument just above is (roughly):
For any x, if x is A and x is B then x is C. y is A. y is B.
Therefore, y is C.
Here, A is "made of cheese," B is
"more than a billion years old." and C is
The symbol x is a free variable that can stand for anything;
the symbol y stands for the Moon.
Note that this example uses A, B, and C
to represent properties of objects (categories).
Some Valid Rules of Reasoning
Formal fallacies are errors that result from misapplying or not following these rules.
For instance, consider the argument:
If A then B. B. Therefore, A.
A concrete example of this might be:
If it is sunny, I will wear sandals. I will wear sandals. Therefore, it is sunny.
(I encourage you to "plug in" values in abstract expressions to get plain-language
examples—not only in this chapter, but anytime you encounter a mathematical expression.
That can really clarify the math.)
This is a fallacy known as affirming the consequent.
(In the conditional
If A then B, A is
called the antecedent and B is
called the consequent.
To affirm something is to assert that it is true; to deny something is to
assert that it is false.)
The premises say that if A is true, B
must also be true.
It does not follow that if B is true,
A must also be true.
To draw the conclusion that A is true,
we need an additional premise: If B then A.
That premise, together with the other two premises,
would allow us to conclude that A is true.
More generally, consider proving something from the premise
If A then B and an additional premise:
that A is true, that
A is false, that B is true,
or that B is false.
If the additional premise is that the antecedent A is true,
we are affirming the antecedent,
which allows us to reach the logically valid conclusion that B
is also true.
If the additional premise is that the antecedent A is false,
we are denying the antecedent,
which does not allow us to conclude anything about B.
If the additional premise is that the consequent B is true, we are
affirming the consequent,
which does not allow us to conclude anything about A.
If the additional premise is that the consequent B is false, we are
denying the consequent,
which allows us to reach the logically valid conclusion that A is also false.
There are countless fallacies; some, such as affirming the consequent and denying the antecedent, are so common they have names. Some do not.
Non sequitur is the name of another common type of formal fallacy.
For instance, consider the argument:
If A then B. A. Therefore, C.
This is a fallacy known as non sequitur, which is
Latin for "does not follow."
The conclusion does not follow from the two premises.
The premises guarantee that B is true.
They say nothing about C.
The conclusion B follows from the premises but the conclusion
C does not.
There is a missing premise: If B then C.
We will distinguish between two kinds of non sequitur;
see the box below.
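The difference between a valid form and a non sequitur can also be checked mechanically by enumerating every assignment of truth values to A, B, and C: the form is valid only if no assignment makes all the premises true and the conclusion false. The following minimal Python sketch illustrates the idea (the function names are arbitrary choices for the illustration): the non sequitur above has a counterexample, and supplying the missing premise If B then C removes it.

```python
from itertools import product

def implies(p, q):
    # "If p then q" is false only when p is true and q is false.
    return (not p) or q

def valid(premises, conclusion):
    """Valid means: no row of the truth table makes every premise true
    while the conclusion is false."""
    for a, b, c in product([True, False], repeat=3):
        if all(prem(a, b, c) for prem in premises) and not conclusion(a, b, c):
            return False
    return True

# Non sequitur: If A then B. A. Therefore, C.
print(valid([lambda a, b, c: implies(a, b), lambda a, b, c: a],
            lambda a, b, c: c))                     # False

# With the missing premise If B then C, the argument becomes valid.
print(valid([lambda a, b, c: implies(a, b), lambda a, b, c: a,
             lambda a, b, c: implies(b, c)],
            lambda a, b, c: c))                     # True
```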
Common Formal Fallacies
A fallacy is an argument in which the premises do not justify
the conclusion as a matter of logic.
An argument can be fallacious for many reasons.
The argument might mis-apply a legitimate rule of logic.
Or it might omit a crucial premise or misconstrue a premise.
Or it might misconstrue the conclusion.
For instance, consider the argument: Mary says X is true. Mary does Y.
Anybody who does Y is a bad person.
Therefore, X is false.
That argument is fallacious: It is a non sequitur of relevance
because the conclusion that X is false does not follow from the two premises
("Mary does Y" and "Anybody who does Y is a bad person").
The form of the argument is: If A then B. A. Therefore C.
There is a missing premise: if B then C.
If we added that premise, the argument would be valid, but it would not be sound, because it is not true
that everything bad people say is false.
Because this fallacy has, at its heart, a
non sequitur of relevance, we call it a
fallacy of relevance.
Instead of establishing the conclusion it claims (that X is false),
it establishes a different conclusion (that Mary is bad) and
ignores the difference.
This variant of a fallacy of relevance is very common.
It has a name, ad hominem, which means "at the person" in
Instead of addressing Mary's argument that X is true, it attacks Mary herself.
Ad hominem arguments are discussed below.
The tacit premise that everything that comes of something bad is bad—and its
opposite, that only good comes of good—are genetic fallacies.
Inappropriate appeal to authority, discussed below, is another
Consider the argument: All Ys are Zs. Mary says X is a Y. Therefore, X is a Z.
That argument is fallacious: It is a non sequitur of evidence
because the conclusion that X is a Z does not follow from the two premises
("All Ys are Zs" and "Mary says X is a Y").
The form of the argument is: If A then B. C. Therefore B.
There is a missing premise: if C then A.
In words, that extra premise is "if Mary says X is a Y, then X is a Y."
If we added that premise, the argument would be valid.
But it would not be sound
unless it is impossible for Mary to be mistaken that X is a Y.
Because this fallacy has, at its heart, a non sequitur of evidence,
we call it a fallacy of evidence.
The fallacy consists in treating one of the stated premises (Mary says X is a Y) as if it were a
different premise (X is a Y).
This particular kind of fallacy of evidence is common: It is an
(inappropriate) appeal to authority.
There are more examples and discussion of fallacies of evidence and fallacies of relevance below.
To sum up, many (if not all) informal fallacies are of this form: The argument is a
non sequitur of relevance or a
non sequitur of evidence.
Non sequiturs of relevance can
be made valid by adding a premise that says the real conclusion implies the desired conclusion.
Non sequiturs of evidence can be made valid by adding a premise
that says one of the given premises implies a necessary but missing premise.
In both cases, the missing premise is false, so the patched argument would be
valid but not sound.
Fallacies of Relevance and Fallacies of Evidence
A fallacy of relevance commits a
non sequitur of relevance: It establishes a conclusion, but not
the desired conclusion.
An extra (and false) premise is needed for the actual conclusion to
imply the desired conclusion.
A canonical form of a fallacy of relevance is:
If A then B. A. Therefore, C.—together with the real-world fact
that B does not imply C.
Alternatively, a fallacy of relevance is:
If A then B. Not B. Therefore, not C.—together with the real-world fact
that not A does not imply not C.
Conversely, a fallacy of evidence commits a
non sequitur of evidence: It does not establish any
An extra (and false) premise is needed for one of the stated premises to imply a
premise that can be used to reach the desired conclusion.
A canonical form for a fallacy of evidence is:
If A then B. C.
Therefore B.—together with the real-world fact that C
does not imply A.
Alternatively, a fallacy of evidence is:
If A then B. Not C. Therefore, not A.—together with the real-world fact
that not C does not imply not B.
Logicians often distinguish among kinds of relevance.
A piece of evidence is positively relevant to
some assertion if it adds weight to the assertion.
It is negatively relevant if it takes weight away from the assertion.
Some evidence is irrelevant to a given assertion.
For instance, consider the assertion "it's hot outside."
The observation that passers-by are sweating would be positively relevant to the assertion:
it supports the assertion that the weather is hot.
The observation that passers-by are wearing parkas would be negatively relevant:
it is evidence that the weather is not hot.
The observation that passers-by are listening to mp3 players would be irrelevant.
Many superficially persuasive arguments in fact ride on irrelevant observations.
Here are some examples.
Nancy claims the death penalty is a good thing.
But Nancy once set fire to a vacant warehouse.
Nancy is evil.
Therefore, the death penalty is a bad thing.
This argument does not address Nancy's argument, it just says she must be wrong (about everything)
because she is evil.
Whether Nancy is good or evil is irrelevant: It has no bearing on whether her argument is sound.
This is a fallacy of relevance: It establishes that Nancy is bad, then equates being bad and
never being right.
In symbols, the argument is If A then B. A. Therefore C.
(If somebody sets fire to a vacant warehouse, that person is evil. Nancy set fire to a vacant
warehouse. Therefore, Nancy's opinion about the death penalty is wrong.)
Ad hominem is Latin for "towards the person." An ad hominem argument
attacks the person making the claim, rather than the person's reasoning.
A variant of the ad hominem argument is "guilt by association."
Bob claims the death penalty is a good thing.
But Bob's family business manufactures caskets.
Bob benefits when people die, so his motives are suspect.
Therefore, the death penalty is a bad thing.
This argument does not address Bob's argument, it addresses Bob's motives.
His motives are irrelevant: They have nothing to do with whether his argument for the death penalty
This is related to an ad hominem argument.
It, too, addresses the person, not the person's argument.
However, rather than condemning Bob as evil,
it impugns his motives in arguing for this particular conclusion.
Amy says people shouldn't smoke cigarettes in public because cigarette smoke has a strong odor.
But Amy wears strong perfume all the time.
Amy is clearly a hypocrite.
Therefore, smoking in public is fine.
This argument does not engage Amy's argument: It attacks her for the (in)consistency
of her opinions in this matter and in some other matter.
Whether Amy wears strong fragrances has nothing to do with
whether her argument against smoking is sound.
The abstract form of this argument is also a non sequitur: If A then B. A. Therefore, C.
(In words: If you complain about strong smells and wear strong fragrances, you are a hypocrite.
Amy complains about strong smells and wears strong perfume; therefore, her opinion about smoking is wrong.)
Tu quoque is Latin for "you also."
It related to ad hominem arguments:
it addresses the person rather than the person's argument.
But instead of generally condemning the other party, it says that his or her claim in the
matter at issue is hypocritical because
it is inconsistent with something else the person has done or said.
We are supposed to conclude that he or she must therefore be wrong on this particular point.
Yes, I hit Billy.
But Sally hit him first.
This argument claims it is fine to do something wrong because somebody else did something wrong.
The argument is of the form: If A then B. A. Therefore, C.
(In words: If Sally hit Billy, it's OK for Billy to hit Sally. Sally hit Billy. Therefore, it's OK
for me to hit Billy.)
Generally, the two-wrongs-make-a-right argument says that the justified wrong happened after the
exculpatory wrong, or was less severe.
For instance, Sally hit Billy first, or Sally hit Billy harder than I did, or
Sally pulled a knife on Billy.
On the other hand, it might be quite reasonable to argue, "yes, I hit Billy.
But he was beating me with a baseball bat—I acted in self defense."
In that case, the first "wrong" might justify hitting Billy, which otherwise would be wrong.
If you don't give me your lunch money, my big brother will beat you up.
You don't want to be beaten up, do you?
Therefore, you should give me your lunch money.
This argument appeals to force: Accept my conclusion—or else.
It is not a logical argument.
If A then B. B is bad. Therefore, not A.
Here, A is "you don't give me your lunch money,"
B is "you will be beaten up."
The argument conflates "it is bad to be beaten up" with "it is false that you
will be beaten up."
The argument establishes the conclusion that if you don't give me your lunch money, something bad will happen.
It does not establish the conclusion that you should give me your lunch money.
There is a missing premise that relates the implicit conclusion that could be
justified on the evidence (that if you don't give me your lunch money, something bad will happen)
to the stated conclusion (that you should give me your lunch money).
Ad baculum is a fallacy of relevance, because it relies on a
non sequitur of relevance.
Ad baculum is Latin for "to the stick."
It is essentially the argument "might makes right."
Not all arguments of the form
If you do A then B will happen. B is bad. Therefore, don't do A
are ad baculum arguments.
It depends in part on whether B is a real or imposed consequence of A.
For instance, If you cheat on your exam, you will feel guilty
about it for the rest of your life; therefore, you should not cheat
is not an ad baculum argument.
But If you cheat on your exam, I will turn you in to the Student
Conduct Office and have you expelled;
therefore, you should not cheat is an ad baculum argument.
(Either way, don't cheat on your exam!)
Yes, I downloaded music illegally—but my girlfriend left me and I lost my job
so I was broke and I couldn't afford to
buy music and I was so sad that I was broke and that my girlfriend was gone that I really had
to listen to 100 variations of She caught the Katy.
This argument justifies an action not by claiming that it is correct, but
by an appeal to pity: extenuating circumstances of a sort.
Ad misericordiam is Latin for "to pity."
It is an appeal to compassion rather than to reason.
Yes, I failed the final. But I need to get an A in the class or I
[won't get into Business school] /
[will lose my scholarship] /
[will violate my academic probation] /
[will lose my 4.0 GPA].
You have to give me an A!
Millions of people share copyrighted mp3 files and videos online.
Therefore, sharing copyrighted music and videos is fine.
This "bandwagon" argument claims that something is moral because it is common.
Common and correct are not the same.
Whether a practice is widespread has little bearing on whether it is legal or moral.
That many people believe something is true does not make it true.
Ad populum is Latin for "to the people."
It equates the popularity of an idea with the truth of the idea: Everybody can't be wrong.
Few teenagers have not made ad populum arguments:
"But Mom, everybody is doing it!"
Bob: Sleeping a full 12 hours once in a while is a healthy pleasure.
Samantha: If everybody slept 12 hours all the time, nothing would ever get done; the reduction in
productivity would drive the country into bankruptcy.
Therefore, nobody should sleep for 12 hours.
Samantha attacked a different claim from the one Bob made:
She attacked the assertion that it is good for everybody to sleep 12 hours every day.
Bob only claimed that it was good once in a while.
This argument is also a non sequitur of relevance:
If A then B. A. Therefore, C.
(In words: If an action would have bad consequences if everyone did it all the time, then
that action should not be performed by everyone all the time.
Sleeping 12 hours would have bad consequences if everyone did it all the time.
Therefore, nobody should ever do it.)
A straw man argument replaces the original claim with one that is more vulnerable,
attacks that more vulnerable claim, then pretends to have refuted the original.
Art: Teacher salaries should be increased to attract better teachers.
Bette: Lengthening the school day would also improve student learning outcomes.
Therefore, teacher salaries should remain the same.
Art argues that increasing teacher salaries would attract better teachers.
Bette does not address his argument: She simply argues that there are other ways of
improving student learning outcomes.
Art did not even use student learning outcomes as a reason for increasing teacher salaries.
Even if Bette is correct that lengthening the school day would improve learning outcomes, her
argument is sideways to Art's: It is a distraction, not a refutation.
A red herring argument distracts the listener from
the real topic.
All men should have the right to vote.
Sally is not a man.
Therefore, Sally should not necessarily have the right to vote.
This is an example of equivocation, a fallacy facilitated by
the fact that a word can have more than one meaning.
The argument uses the word man in two different ways.
In the first premise, the word means human, while in the second, it means male.
Generally, equivocation is considered a fallacy of relevance,
but this example fits our definition of a fallacy of evidence.
The logical form of this argument is If A then B. Not C. Therefore,
B is not necessarily true.
That argument is a formal fallacy.
There is a missing premise that equates one of the premises given
(Sally is not male) with a different premise not given (Sally is not human).
That is, if not C then not A.
That (false) premise relates evidence given to evidence not given, so this is a
fallacy of evidence according to our definition.
The fact that the same word can mean "human" and "male" hides the fact that the two premises involve different properties.
Another common structure for fallacies that involve equivocation is:
All P1s are Qs.
X is a P2. Therefore, X is a Q.
The equivocation is that the same word is used to refer to P1
and P2, which hides the fact that P1 and P2
are not the same.
Here is an example of equivocation hiding a fallacy of relevance:
If you are a Swiss citizen living in the U.S., you are an alien (foreigner).
Birgitte is a Swiss citizen living in California.
Therefore, Birgitte is an Alien (from another planet).
The structure of this example is For any x, if x is A then x is B. y is A.
Therefore, y is C.
The missing (and false) premise is that all aliens are Aliens
(For any x, if x is B then x is C),
which would relate the valid conclusion (Birgitte is an alien) to the desired conclusion (Birgitte is an Alien).
Thus this equivocation fallacy is a fallacy of relevance.
The straw man, red herring and equivocation fallacies all change the subject:
they argue for (or against) something that is sideways to the original claim but easy to confuse
with the original claim.
Ad hominem arguments also change the subject—from whether the speaker is right
to whether the speaker is "good."
There is no circumstance that justifies killing another person.
The death penalty involves killing another person.
Therefore, even if someone commits a brutal murder, he should not be put to death.
This argument begs the question.
It assumes what it purports to prove, namely, that there is no circumstance
that justifies killing.
"No circumstance" already precludes "commits brutal murder."
The form of the argument is A. Therefore, A.
That is indeed logically valid—it just isn't much of an argument.
Where this "fallacy" gets legs is when the premise and the conclusion use
different words to say the same thing, creating the illusion that the conclusion is
different from the assumption.
Here is another example:
Jack is overweight. Therefore, Jack is fat.
Petitio principii is Latin for "attack the beginning."
The premise assumes the truth of the conclusion.
A circular argument is a variant of begging the question.
Common Fallacies of Evidence
This section gives examples of fallacies of evidence.
Recall that such fallacies are of the form:
If A then B. C. Therefore B.
There is a missing premise, namely, If C then A.
Without that premise, the argument is not much of an argument at all: The premises cannot be combined to
support a conclusion other than the premises themselves (which would be a circular argument:
C. Therefore, C.).
Another form for a fallacy of evidence is: If A then B. Not C. Therefore Not A.
Again, there is a missing premise, namely, If not C then not B.
And without that premise, the argument is no argument at all.
Yet another fallacy is of the form: A. Therefore, B.
Eat your brussels sprouts. There are children starving in Africa.
(In this argument, the premise A is "there are children starving in Africa,"
and the conclusion B is "you should eat your brussels sprouts.")
This could be classified as a fallacy of evidence: There's a missing premise,
If A then B: "If children are starving in
Africa, then I should eat my brussels sprouts."
Without that premise, the statements are not much of an argument.
The examples of fallacies of evidence given below differ in how C
is related to A and B.
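These schematic forms can be checked mechanically: an argument form is valid exactly when no assignment of truth values makes all the premises true and the conclusion false. The sketch below, in Python, is only an illustration of that idea; the helper function and the three-variable encoding are not from the text.

```python
from itertools import product

def valid(premises, conclusion, n_vars=3):
    """True if the conclusion holds in every case where all premises hold."""
    for values in product([False, True], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # found a counterexample
    return True

# If A then B. A. Therefore, B.  (modus ponens -- valid)
print(valid([lambda a, b, c: (not a) or b, lambda a, b, c: a],
            lambda a, b, c: b))          # True

# If A then B. C. Therefore, B.  (non sequitur of evidence -- invalid)
print(valid([lambda a, b, c: (not a) or b, lambda a, b, c: c],
            lambda a, b, c: b))          # False: e.g., A and B false, C true
```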
All animals with rabies go crazy.
Jessie says my cat has rabies.
Therefore, my cat will go crazy.
This argument is fallacious.
The form of the argument is If A then B. C. Therefore, B.
There is a missing premise, namely, that if Jessie says my cat has
rabies, then my cat has rabies (If C then A.).
That premise relates a stated premise (Jessie says my cat has rabies—the evidence given)
to an unstated premise (my cat has rabies—the evidence required to be able to use the premise
that animals with rabies go crazy).
Thus, this is a fallacy of evidence.
If we add that missing premise, the argument might or might not be sound, depending on
whether Jessie could be mistaken.
The argument is clearly stronger if Jessie is a veterinarian who tested my cat for rabies
than if Jessie is a 5-year-old child who lives next door.
Any time we take somebody's say-so as evidence, we are making an appeal to authority.
That person might or might not be "an authority."
Our legal system has elaborate rules governing evidence.
A witness can testify about what he or she saw or heard or has personal knowledge of.
If I say "Jane told me she saw Frank hotwire the car,"
all other things being equal, that could be used in court as evidence that Jane told me
something, but not as evidence that Frank hotwired the car—it is hearsay because
I am reporting something I heard about, not something I witnessed directly.
I could not appeal to Jane's authority for evidence about Frank's actions.
There is a big difference between "I heard it" and "I heard of it."
The fallacy of appealing to authority is called
argumentum ad verecundiam,
argument to veneration (respect).
A more blatant example is:
Professor Stark says 1+1=3.
Professor Stark has a Ph.D.
He is a learned professor of statistics at one of the world's best universities.
He has published many scholarly articles in refereed journals, lectured in many countries,
and written a textbook about Statistics.
His former students hold positions at top universities, in government research agencies, and
in the private sector.
He has consulted for many top law firms and Fortune 100 companies.
He has been qualified as an expert in Federal court and has testified to Congress.
Therefore, 1+1=3.
To study user satisfaction with our software product, we sent out 5,000 questionnaires to users.
Only 100 users filled out and mailed back their questionnaires.
Since more than 4,900 of the 5,000 users surveyed did not complain, the vast majority of
users are satisfied with the software.
This argument is fallacious: It treats "no evidence of dissatisfaction" as if it were
"evidence of satisfaction."
Lack of evidence that a statement is false is not evidence that the statement is true.
Nor is lack of evidence that a statement is true evidence that the statement is false.
This is an example of nonresponse in a survey.
Nonresponse generally leads to nonresponse bias:
people who return the survey are generally different from those who do not.
Moreover, even if everyone surveyed responds, the opinions expressed in a survey do not tend
to be representative of the opinions of the population surveyed (all users, in this case)
unless the group that is administered the survey (the 5,000 users who were sent questionnaires)
is selected at random.
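A small simulation can make the nonresponse problem concrete. All the numbers below (the true satisfaction rate and the response rates) are invented for illustration; the only point is that when unhappy users respond at a different rate than happy ones, the respondents misrepresent the population.

```python
import random

random.seed(0)

# Hypothetical population of 5,000 users; 60% are truly satisfied.
users = [random.random() < 0.60 for _ in range(5000)]   # True = satisfied

def responds(satisfied):
    # Assumed response rates: satisfied users mail the form back more often.
    return random.random() < (0.03 if satisfied else 0.01)

respondents = [u for u in users if responds(u)]
print(f"true satisfaction rate: {sum(users) / len(users):.2f}")
print(f"rate among the {len(respondents)} respondents: "
      f"{sum(respondents) / len(respondents):.2f}")   # biased upward
```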
Either you support the war in Iraq, or you don't support our soldiers who risk their
lives for our country.
You do support our soldiers.
Therefore, you support the war.
This argument is valid but not sound: It starts with a premise that is an artificial
"either-or,"—a false dichotomy.
It is possible to support our soldiers and still to oppose the war in Iraq,
so the first premise is false.
The same false dichotomy could have been disguised in slightly different language: The first
premise could have been written "if you don't support the war in Iraq, you don't
support our soldiers who risk their lives for our country."
That is because the premise if A then B is
logically equivalent to the premise
not A or B.
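The claimed equivalence is easy to verify by brute force over the four possible truth assignments. A minimal Python check follows; encoding "if A then B" as a conditional expression is the only assumption.

```python
from itertools import product

# "If A then B" is false only when A is true and B is false.
implies = lambda a, b: b if a else True

print(all(implies(a, b) == ((not a) or b)
          for a, b in product([False, True], repeat=2)))   # True
```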
False dichotomies show up in question form as well:
"So, if you didn't get that money by embezzling it, did you rob someone at gunpoint?"
If a lawyer asked a witness that question in court, you would expect the opposing attorney to object.
Did you know that the Sun goes around the Earth?
This statement presupposes that the Sun goes around the Earth—it is a loaded question.
Classical examples of loaded questions include,
"Have you stopped beating your wife?"
and "Does your mother know you are an alcoholic?"
The word "stop" presupposes that something has already started; the word "know"
presupposes that something is true.
After nearly eight years of the Bush administration, the stock market had the largest drop
since the Great Depression.
The Republican government ruined the economy.
It might be true that Bush administration policies led to the stock market crash.
And it might be true that there would have been a comparable "market correction"
under a Democratic administration.
The fact that the crash occurred late in the Bush administration is not in itself proof that it
was caused by Republican government.
The crash also occurred shortly after the introduction of the iPhone.
Does it follow that the iPhone caused the stock market crash?
This argument is an example of questionable cause.
In particular, it is an example of the
post hoc ergo propter hoc
(after this, therefore because of this) fallacy.
Moreover, even if actions of the Bush government contributed to the stock market crash,
it would be an oversimplification to pin everything on the government:
an example of the oversimplified cause fallacy.
For instance, mortgage banks that gave "sub-prime" loans to
borrowers who were not creditworthy might also have played a role.
In addition to post hoc ergo propter hoc,
the cum hoc ergo propter hoc
(with this, therefore because of this) fallacy is common in misapplications of statistics:
data show that two phenomena are associated—tend to occur together
or to rise and fall together—and the arguer mistakenly concludes that
one of them must therefore cause the other.
These fallacies are discussed in more depth in another chapter.
Giving coincidences special causal import is another example of the
questionable cause fallacy.
Dogs bark before earthquakes.
Therefore, dogs can sense that an earthquake is coming.
Dogs do bark before earthquakes. And during earthquakes. And after earthquakes.
And between earthquakes.
Marijuana must remain illegal.
If it were legalized, cocaine would soon follow.
And if cocaine were legalized, then opium would be, and eventually heroin, too.
Before you know it, everybody would be on drugs, from nursing infants to the elderly.
The only babies born would be crack babies.
People would be dying by the tens of millions from AIDS transmitted by sharing hypodermic needles.
There would be rampant prostitution so drug-crazed women and men could pay for their habits.
Construction workers would fall off buildings, right and left.
Nobody would be able to drive safely—highway fatalities would claim millions of
lives a year.
The police and military would all become addicted, so there would be no law enforcement or national defense.
Doctors would be so whacked-out that they couldn't treat patients.
The economy would collapse.
Within five years, the U.S. would be a third-world country.
Eventually, nobody would live past age 20—if they managed to survive infancy.
This argument is of the form:
If A then B. If B then C. If C then D, etc.
You don't want Z, do you? So, you must prevent A.
While each step in this progression is possible, it is by no means inevitable: The
statements are really "if A then possibly B", etc., but they are asserted
as "if A then necessarily B," and so on.
The implicit argument is that since there is no "bright line"
demarcating where to stop the progression, the progression will not stop.
That is fallacious.
It is also a resort to scare tactics—an appeal to emotion rather than to reason.
There is no law of Nature that says government could not draw the line at any point in the progression.
Currently, the line is drawn at alcohol, although during Prohibition, the line was drawn
on the other side of alcohol.
The line on tobacco is being re-drawn to impose more restrictions: no smoking on airplanes,
in restaurants, etc.
Lines can be drawn anywhere.
The adage "give them an inch and they'll take a mile" is in this family.
Not every argument of the form "If A then B. If B then C. If C then D, etc.
Eventually, Z" is a slippery-slope fallacy.
If each of the
conditionals (the if-then statements)
is valid, this is a perfectly valid argument.
For instance, this argument has no fallacy (assuming the bomb is big):
If I push the detonator, the bomb will go off.
If the bomb goes off, the building will collapse.
If the building collapses, people will be injured and killed.
I don't want that to happen, so I shouldn't push the detonator.
I interviewed 20 students in the lunch line at noon today.
Nineteen were hungry.
Therefore, most students are hungry.
This argument is of the form: Some x are (sometimes) A.
Therefore, most x are (always) A.
The data do not support such a generalization: They were taken at a particular
time and place of my choosing.
To draw reliable generalizations from the sample to other times, places, and students
requires a sample that is drawn using proper sampling techniques, discussed in another chapter.
The sample has a built-in bias by virtue of where and when it was taken.
It is called a convenience sample because
it simply selected students who were readily available.
At another time, the same students might not be hungry.
At noon, students not in the lunch line might not be hungry.
Hasty generalizations occur in Statistics when the sample from which we generalize is not
representative of the larger group ("population")
to which we would like to generalize.
This tends to happen when the method used to draw the sample is not a good one—leading to large
bias—or because the
sample is small, so that the luck of the draw has a big effect on the accuracy.
To ensure that a sample is representative generally requires using some method of random selection.
These issues are discussed in another chapter.
I want to estimate the fraction of free "adult" (commercial pornography) websites
that are hosted in the United States.
I run a search on a popular search engine for "free porn" and find a link to a
list of 25,000 free porn websites, with user ratings.
I look up whether each of those websites is hosted in the U.S. or elsewhere.
Seventy percent are hosted in the U.S.
Therefore, the majority of free porn websites are hosted in the U.S.
This is a hasty generalization.
Even though the list is large, there is no reason it should be representative of the
population of all free porn websites,
with respect to the country they are hosted in or any other variable.
The list is a sample of convenience.
It was found by running a query in English on a U.S. search engine.
It is just a list somebody made and that I happened to find.
To try to get a better sample, I think of more search terms.
I run searches on a popular search engine for "free porn," "free sex videos,"
"naked girls," "hot sex," "free intercourse videos," and
15 other terms.
For each search, I record the first 100 links to free porn websites that the search engine returns,
producing a list of 2,000 websites.
I look up whether each of those websites is hosted in the U.S.
Seventy percent are; therefore, the majority of free porn websites are hosted in the U.S.
This is still a hasty generalization from a sample of convenience.
There is no reason the first 100 websites returned by a search engine in response to each of 20 queries I
make up should be representative of free porn websites, with respect to host country or any other
characteristic—and there are many reasons for it to be unrepresentative:
More popular websites tend to be returned closer to the top of the list of results for each search.
Queries in English tend to return links in the U.S.
U.S. search engines tend to return links in the U.S.
The search engine databases are not an exhaustive list of all websites.
Other queries intended to retrieve porn websites would get different results.
There are countless more reasons.
Countries are like families.
Government is like the parents, and citizens are like the children.
"Spare the rod, spoil the child."
Therefore, our legal system must impose harsh penalties for legal infractions: "tough love."
This argument is of the form: x is similar to y
in some regards.
Therefore, everything that is true for x is true for y.
Physicists are like physicians.
Both study calculus as undergraduates;
both have many years of education,
culminating in advanced degrees;
both know lots of science;
both tend to have above average intelligence;
both tend to be arrogant;
both even start with the same seven letters, "physici."
Physicians treat illness.
Therefore, physicists treat illness.
Weak analogies often arise in some forms of sampling, such as
quota sampling and samples of convenience.
A sample can resemble the population from which it is drawn in many ways, and yet be
unrepresentative of the population with respect to the property we care about.
The Hite report, discussed in another chapter, is an example.
The advantage of drawing a sample at random
is that the resulting samples tend to be representative of the population with respect
to all properties; moreover, we can quantify the extent to which a random sample is likely to be
unrepresentative, and those differences tend to be smaller the larger the sample.
For other ways of drawing samples, the samples are generally unrepresentative even
when they are large.
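The claim that random samples tend to be representative, with errors that shrink as the sample grows, can be illustrated with a short simulation. The population size, the 30% figure, and the sample sizes below are arbitrary choices made purely for illustration.

```python
import random

random.seed(1)

# Hypothetical population: 30% of 100,000 items have the property of interest.
population = [1] * 30_000 + [0] * 70_000

def typical_error(n, reps=1_000):
    """Average absolute error of a simple random sample of size n."""
    errors = [abs(sum(random.sample(population, n)) / n - 0.30)
              for _ in range(reps)]
    return sum(errors) / reps

for n in (25, 100, 400, 1600):
    print(n, round(typical_error(n), 3))   # error shrinks roughly like 1/sqrt(n)
```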
"Nobody goes there anymore. That place is too crowded."—Yogi Berra.
This isn't an argument because there is no conclusion, only premises, but that's not the point.
These two premises amount to: A. Not A.
They contradict each other.
If one of the premises is true, the other must be false:
It is logically impossible for both to be true.
Hence, any argument that stems from them cannot be sound—since, even if it is valid,
it will have a false premise.
Generally, the fallacy of inconsistency occurs whenever the premises cannot all be true—as a matter of logic.
Here are some more statements in the same self-contradictory spirit:
"Baseball is 90% physical. The other half is mental."
"Half the lies they tell about me aren't true."
"I never said most of the things I said."
"If the world was perfect, it wouldn't be."
"It was impossible to get a conversation going, everybody was talking too much."
"It gets late early out there."
"I wish I had an answer to that because I'm tired of answering that question."
Summary
An argument consists of a sequence of statements.
One is the conclusion; the rest are premises.
The premises are given as evidence that the conclusion is true.
If the conclusion must be true whenever the premises are true, the argument is valid.
A valid argument is sound if its premises are true.
Valid arguments result from applying correct rules of reasoning.
Examples of correct rules of reasoning include affirming the antecedent (modus ponens) and denying the consequent (modus tollens).
Using incorrect rules of reasoning or misapplying correct rules results in a formal fallacy.
There are many common formal fallacies.
One is the non sequitur.
In a non sequitur, a necessary premise is missing.
If the missing premise relates one of the stated premises to a different premise (i.e., ties evidence given to
evidence not given), the fallacy is a non sequitur of evidence.
If the missing premise relates a valid but unstated conclusion to the stated
conclusion (i.e., ties the conclusion given to
a conclusion not given), the fallacy is a non sequitur of relevance.
There are other formal fallacies as well.
Examples of formal fallacies include affirming the consequent and denying the antecedent.
In addition to formal fallacies, there are informal fallacies.
Although the categorization is not strict, informal fallacies generally fall into two groups:
fallacies of relevance and fallacies of evidence.
Fallacies of relevance have non sequiturs of relevance at their core;
fallacies of evidence have non sequiturs of evidence.
Examples of the former include ad hominem and other genetic fallacies,
appeals to emotion (fear, pity), the straw man, the red herring, and arguments that beg the question.
Examples of the latter include appeals to authority, slippery slope, hasty generalizations,
weak analogies, post hoc ergo propter hoc, and
cum hoc ergo propter hoc.
Equivocation can hide a fallacy of relevance or a fallacy of evidence.
Inconsistency can produce a valid argument, but never a sound argument.
Statistics can be (and often is) misused to produce both kinds of informal fallacy.
Be alert to the structure of arguments to avoid being deceived and to avoid deception.
Source: http://www.stat.berkeley.edu/~stark/SticiGui/Text/reasoning.htm
Journey to the center of the earth: Discovery sheds light on mantle formation
Uncovering a rare, two-billion-year-old window into the Earth’s mantle, a University of Houston professor and his team have found our planet’s geological history is more complex than previously thought.
Jonathan Snow, assistant professor of geosciences at UH, led a team of researchers in a North Pole expedition, resulting in a discovery that could shed new light on the mantle, the vast layer that lies beneath the planet’s outer crust. These findings are described in a paper titled “Ancient, highly heterogeneous mantle beneath Gakkel Ridge, Arctic Ocean,” appearing recently in Nature.
These two-billion-year-old rocks that time forgot were found along the bottom of the Arctic Ocean floor, unearthed during research voyages in 2001 and 2004 to the Gakkel Ridge, an approximately 1,000-mile-long underwater mountain range between Greenland and Siberia. This massive underwater mountain range forms the border between the North American and Eurasian plates beneath the Arctic Ocean, where the two plates diverge.
These were the first major expeditions ever undertaken to the Gakkel Ridge, and these latest published findings are the fruit of several years of research and millions of dollars spent to retrieve and analyze these rocks.
The mantle, the rock layer that comprises about 70 percent of the Earth's mass, sits several miles below the planet's surface. Mid-ocean ridges like Gakkel, where mantle rock is slowly pushing upward to form new volcanic crust as the tectonic plates slowly move apart, are one place geologists look for clues about the mantle. Gakkel Ridge is unique because it features – at some locations – the least volcanic activity and most mantle exposure ever discovered on a mid-ocean ridge, allowing Snow and his colleagues to recover many mantle samples.
“I just about fell off my chair,” Snow said. “We can’t exaggerate how important these rocks are – they’re a window into that deep part of the Earth.”
Venturing out aboard a 400-foot-long research icebreaker, Snow and his team sifted through thousands of pounds of rocks scooped up from the ocean floor by the ship’s dredging device. The samples were labeled and cataloged and then cut into slices thinner than a human hair to be examined under a microscope. That is when Snow realized he found something that, for many geologists, is as rare and fascinating as moon rocks – mantle rocks devoid of sea floor alteration. Analysis of the isotopes of osmium, a noble metal rarer than platinum within the mantle rocks, indicated they were two billion years old. The use of osmium isotopes underscores the significance of the results, because using them for this type of analysis is still a new, innovative and difficult technique.
Since the mantle is slowly moving and churning within the Earth, geologists believe the mantle is a layer of well-mixed rock. Fresh mantle rock wells up at mid-ocean ridges to create new crust. As the tectonic plates move, this crust slowly makes its way to a subduction zone, a plate boundary where one plate slides underneath another and the crust is pushed back into the mantle from which it came.
Because this process takes about 200 million years, it was surprising to find rocks that had not been remixed inside the mantle for two billion years. The discovery of the rocks suggests the mantle is not as well-mixed or homogenous as geologists previously believed, revealing that the Earth’s mantle preserves an older and more complex geologic history than previously thought. This opens the possibility of exploring early events on Earth through the study of ancient rocks preserved within the Earth’s mantle.
The rocks were found during two expeditions Snow and his team made to the Arctic, each lasting about two months. The voyages were undertaken while Snow was a research scientist at the Max Planck Institute in Germany, and the laboratory study was done by his research team that now stretches from Hawaii to Houston to Beijing.
Since coming to UH in 2005, Snow’s work stemming from the Gakkel Ridge samples has continued, with more research needed to determine exactly why these rocks remained unmixed for so long. Further study using a laser microprobe technique for osmium analysis available only in Australia is planned for next year.
Source: University of Houston
Geologists Discover New Way of Estimating Size and Frequency of Meteorite Impacts
Scientists have developed a new way of determining the size and frequency of meteorites that have collided with Earth.
Their work shows that the size of the meteorite that likely plummeted to Earth at the time of the Cretaceous-Tertiary (K-T) boundary 65 million years ago was four to six kilometers in diameter. The meteorite was the trigger, scientists believe, for the mass extinction of dinosaurs and other life forms.
François Paquay, a geologist at the University of Hawaii at Manoa (UHM), used variations (isotopes) of the rare element osmium in sediments at the ocean bottom to estimate the size of these meteorites. The results are published in this week's issue of the journal Science.
When meteorites collide with Earth, they carry a different osmium isotope ratio than the levels normally seen throughout the oceans.
"The vaporization of meteorites carries a pulse of this rare element into the area where they landed," says Rodey Batiza of the National Science Foundation (NSF)'s Division of Ocean Sciences, which funded the research along with NSF's Division of Earth Sciences. "The osmium mixes throughout the ocean quickly. Records of these impact-induced changes in ocean chemistry are then preserved in deep-sea sediments."
Paquay analyzed samples from two sites, Ocean Drilling Program (ODP) site 1219 (located in the Equatorial Pacific), and ODP site 1090 (located off of the tip of South Africa) and measured osmium isotope levels during the late Eocene period, a time during which large meteorite impacts are known to have occurred.
"The record in marine sediments allowed us to discover how osmium changes in the ocean during and after an impact," says Paquay.
The scientists expect that this new approach to estimating impact size will become an important complement to a more well-known method based on iridium.
Paquay, along with co-author Gregory Ravizza of UHM and collaborators Tarun Dalai from the Indian Institute of Technology and Bernhard Peucker-Ehrenbrink from the Woods Hole Oceanographic Institution, also used this method to make estimates of impact size at the K-T boundary.
Even though this method works well for the K-T impact, it would break down for an event larger than that: the meteorite contribution of osmium to the oceans would overwhelm existing levels of the element, researchers believe, making it impossible to sort out the osmium's origin.
Under the assumption that all the osmium carried by meteorites is dissolved in seawater, the geologists were able to use their method to estimate the size of the K-T meteorite as four to six kilometers in diameter.
The potential for recognizing previously unknown impacts is an important outcome of this research, the scientists say.
"We know there were two big impacts, and can now give an interpretation of how the oceans behaved during these impacts," says Paquay. "Now we can look at other impact events, both large and small."
Source: National Science Foundation
ScienceDaily (Apr. 26, 2008) — Geologists studying deposits of volcanic glass in the western United States have found that the central Sierra Nevada largely attained its present elevation 12 million years ago, roughly 8 or 9 million years earlier than commonly thought.
The finding has implications not only for understanding the geologic history of the mountain range but for modeling ancient global climates. "All the global climate models that are currently being used strongly rely on knowing the topography of the Earth," said Andreas Mulch, who was a postdoctoral scholar at Stanford when he conducted the research. He is the lead author of a paper published recently in the online Early Edition of the Proceedings of the National Academy of Sciences.
A variety of studies over the last five years have shown that the presence of the Sierra Nevada and Rocky Mountains in the western United States has direct implications for climate patterns extending into Europe, Mulch said. "If we did not have these mountains, we would completely change the climate on the North American continent, and even change mean annual temperatures in central Europe," he said. "That's why we need to have some idea of how mountains were distributed over planet Earth in order to run past climate models reliably." Mulch is now a professor of tectonics and climate at the University of Hannover in Germany.
Mulch and his colleagues, including Page Chamberlain, a Stanford professor of environmental earth system science, reached their conclusion about the timing of the uplift of the Sierra Nevada by analyzing hydrogen isotopes in water incorporated into volcanic glass.
Because so much of the airborne moisture falls as rain on the windward side of the mountains, land on the leeward side gets far less rain—an effect called a "rain shadow"—which often produces a desert.
The higher the mountain, the more pronounced the rain shadow effect is and the greater the decrease in the number of heavy hydrogen isotopes in the water that makes it across the mountains and falls on the leeward side of the range. By determining the ratio of heavier to lighter hydrogen isotopes preserved in volcanic glass and comparing it with today's topography and rainwater, researchers can estimate the elevation of the mountains at the time the ancient water crossed them.
Volcanic glass is an excellent material for preserving ancient rainfall. The glass forms during explosive eruptions, when tiny particles of molten rock are ejected into the air. "These glasses were little melt particles, and they cooled so rapidly when they were blown into the atmosphere that they just froze, basically," Mulch said. "They couldn't crystallize and form minerals."
Because glass has an amorphous structure, as opposed to the ordered crystalline structure of minerals, there are structural vacancies in the glass into which water can diffuse. Once the glass has been deposited on the surface of the Earth, rainwater, runoff and near-surface groundwater are all available to interact with it. Mulch said the diffusion process continues until the glass is effectively saturated with water.
The samples they studied ranged from slightly more than 12 million years old to as young as 600,000 years old, a time span when volcanism was rampant in the western United States owing to the ongoing subduction of the Pacific plate under the continental crust of the North American plate.
Until now, researchers have been guided largely by "very good geophysical evidence" indicating that the range reached its present elevation approximately 3 or 4 million years ago, owing to major changes in the subsurface structure of the mountains, Mulch said.
"There was a very dense root of the Sierra Nevada, rock material that became so dense that it actually detached and sank down into the Earth's mantle, just because of density differences," Mulch said. "If you remove a very heavy weight at the base of something, the surface will rebound."
The rebound of the range after losing such a massive amount of material should have been substantial. But, Mulch said, "We do not observe any change in the surface elevation of the Sierra Nevada at that time, and that's what we were trying to test in this model."
However, Mulch said he does not think his results refute the geophysical evidence. It could be that the Sierra Nevada did not evolve uniformly along its 400-mile length, he said. The geophysical data indicating the loss of the crustal root is from the southern Sierra Nevada; Mulch's study focused more on the northern and central part of the range. In the southern Sierra Nevada, the weather patterns are different, and the rain shadow effect that Mulch's approach hinges on is less pronounced.
"That's why it's important to have information that's coming from deeper parts of the Earth's crust and from the surface and try to correlate these two," Mulch said. To really understand periods in the Earth's past where climate conditions were markedly different from today, he said, "you need to have integrated studies."
The research was funded by the National Science Foundation.
Adapted from materials provided by Stanford University
Addendum: This article was reproduced in part; the full text can be accessed at www.sciencedaily.com.
Rocks under the northern ocean are found to resemble ones far south
Scientists probing volcanic rocks from deep under the frozen surface of the Arctic Ocean have discovered a special geochemical signature until now found only in the southern hemisphere. The rocks were dredged from the remote Gakkel Ridge, which lies under 3,000 to 5,000 meters of water; it is Earth’s most northerly undersea spreading ridge. The study appears in the May 1 issue of the leading science journal Nature.
The Gakkel extends some 1,800 kilometers beneath the Arctic ice between Greenland and Siberia. Heavy ice cover prevented scientists from getting at it until the 2001 Arctic Mid-Ocean Ridge Expedition, in which U.S and German ice breakers cooperated. This produced data showing that the ridge is divided into robust eastern and western volcanic zones, separated by an anomalously deep segment. That abrupt boundary contains exposed unmelted rock from earth’s mantle, the layer that underlies the planet’s hardened outer shell, or lithosphere.
By studying chemical trace elements and isotope ratios of the elements lead, neodymium, and strontium, the paper’s authors showed that the eastern lavas, closer to Siberia, display a typical northern hemisphere makeup. However, the western lavas, closer to Greenland, show an isotopic signature called the Dupal anomaly. The Dupal anomaly, whose origin is intensely debated, is found in the southern Indian and Atlantic oceans, but until now was not known from spreading ridges of the northern hemisphere. Lead author Steven Goldstein, a geochemist at Columbia University’s Lamont-Doherty Earth Observatory (LDEO), said that this did not suggest the rocks came from the south. Rather, he said, they might have formed in similar ways. “It implies that the processes at work in the Indian Ocean might have an analog here,” said Goldstein. Possible origins debated in the south include upwelling of material from the deep earth near the core, or shallow contamination of southern hemispheric mantle with certain elements during subduction along the edges of the ancient supercontinent of Pangea.
At least in the Arctic, the scientists say they know what happened. Some 53 million years ago, what are now Eurasia and Greenland began separating, with the Gakkel as the spreading axis. Part of Eurasia’s “keel”—a relatively stable layer of mantle pasted under the rigid continent and enriched in certain elements that are also enriched in the continental crust—got peeled away. As the spreading continued, the keel material got mixed with “normal” mantle that was depleted in these same elements. This formed a mixture resembling the Dupal anomaly. The proof, said Goldstein, is that the chemistry of the western Gakkel lavas appear to be mixtures of “normal” mantle and lavas coming from volcanoes on the Norwegian/Russian island of Spitsbergen. Although Spitsbergen is an island, it is attached to the Eurasian continent, and its volcanoes are fueled by melted keel material.
“This is unlikely to put an end to the debate about the origin of the southern hemisphere Dupal signature, as there may be other viable explanations for it,” said Goldstein. “On the other hand, this study nails it in the Arctic. Moreover, it delineates an important process within Earth’s system, where material associated with the continental lithospheric keel is transported to the deeper convectiing mantle.”
Source: The Earth Institute at Columbia University
How deep is Europe?
The Earth's crust is, on global average, around 40 kilometres deep. In relation to the Earth's total diameter of approximately 12,800 kilometres this appears rather shallow, but it is precisely these upper kilometres of the crust, the human habitat, that are of special interest to us.
Europe's crust shows an astonishing diversity: for example, the crust under Finland is as deep as one would expect only under a mountain range such as the Alps. It is also surprising that the crust under Iceland and the Faroe Islands is considerably deeper than typical oceanic crust. This is explained by M. Tesauro and M. Kaban from GeoForschungsZentrum Potsdam (GFZ) and S. Cloetingh from the Vrije Universiteit in Amsterdam in a recent publication in the renowned scientific journal "Geophysical Research Letters". GFZ is the German Research Centre for Geosciences and a member of the Helmholtz Association.
For many years, intensive investigation of the Earth's crust has been underway. However, different research groups in Europe have mostly concentrated on individual regions, so a high-resolution, consistent overall picture has not been available to date. The present study fills this gap. By incorporating the latest seismological results, a digital model of the European crust has been created. This new, detailed picture also makes it possible to minimize the interfering effects of the crust when looking at the deeper interior of the Earth.
A detailed model of the Earth's crust, i.e., from the upper layers down to a depth of approximately 60 km, is essential for understanding the many millions of years of development of the European continent. This knowledge supports the discovery of commercially important ore and crude-oil deposits on the continental shelf and, more generally, the use of the subsurface, e.g., for the sequestration of CO2. It also contributes to the identification of geological hazards such as earthquakes.
Citation: Tesauro, M., M. K. Kaban, and S. A. P. L. Cloetingh (2008), EuCRUST-07: A new reference model for the European crust, Geophys. Res. Lett., 35, L05313, doi:10.1029/2007GL032244.
Source: Helmholtz Association of German Research Centres
by Barry Ray
Tallahassee FL (SPX) May 02, 2008
Working with colleagues from NASA, a Florida State University researcher has published a paper that calls into question three decades of conventional wisdom regarding some of the physical processes that helped shape the Earth as we know it today.
Munir Humayun, an associate professor in FSU's Department of Geological Sciences and a researcher at the National High Magnetic Field Laboratory, co-authored a paper, "Partitioning of Palladium at High Pressures and Temperatures During Core Formation," that was recently published in the peer-reviewed science journal Nature Geoscience.
The paper provides a direct challenge to the popular "late veneer hypothesis," a theory which suggests that all of our water, as well as several so-called "iron-loving" elements, were added to the Earth late in its formation by impacts with icy comets, meteorites and other passing objects.
"For 30 years, the late-veneer hypothesis has been the dominant paradigm for understanding Earth's early history, and our ultimate origins," Humayun said. "Now, with our latest research, we're suggesting that the late-veneer hypothesis may not be the only way of explaining the presence of certain elements in the Earth's crust and mantle."
To illustrate his point, Humayun points to what is known about the Earth's composition.
"We know that the Earth has an iron-rich core that accounts for about one-third of its total mass," he said. "Surrounding this core is a rocky mantle that accounts for most of the remaining two-thirds," with the thin crust of the Earth's surface making up the rest.
"According to the late-veneer hypothesis, most of the original iron-loving, or siderophile, elements" -- those elements such as gold, platinum, palladium and iridium that bond most readily with iron -- "would have been drawn down to the core over tens of millions of years and thereby removed from the Earth's crust and mantle. The amounts of siderophile elements that we see today, then, would have been supplied after the core was formed by later meteorite bombardment. This bombardment also would have brought in water, carbon and other materials essential for life, the oceans and the atmosphere."
To test the hypothesis, Humayun and his NASA colleagues -- Kevin Righter and Lisa Danielson -- conducted experiments at Johnson Space Center in Houston and the National High Magnetic Field Laboratory in Tallahassee. At the Johnson Space Center, Righter and Danielson used a massive 880-ton press to expose samples of rock containing palladium -- a metal commonly used in catalytic converters -- to extremes of pressure and temperature equal to those found more than 300 miles inside the Earth.
The samples were then brought to the magnet lab, where Humayun used a highly sensitive analytical tool known as an inductively coupled plasma mass spectrometer, or ICP-MS, to measure the distribution of palladium within the sample.
"At the highest pressures and temperatures, our experiments found palladium in the same relative proportions between rock and metal as is observed in the natural world," Humayun said. "Put another way, the distribution of palladium and other siderophile elements in the Earth's mantle can be explained by means other than millions of years of meteorite bombardment."
The potential ramifications of his team's research are significant, Humayun said.
"This work will have important consequences for geologists' thinking about core formation, the core's present relation to the mantle, and the bombardment history of the early Earth," he said. "It also could lead us to rethink the origins of life on our planet."
Ancient mineral shows early Earth climate tough on continents
A new analysis of ancient minerals called zircons suggests that a harsh climate may have scoured and possibly even destroyed the surface of the Earth's earliest continents.
Zircons, the oldest known materials on Earth, offer a window in time back as far as 4.4 billion years ago, when the planet was a mere 150 million years old. Because these crystals are exceptionally resistant to chemical changes, they have become the gold standard for determining the age of ancient rocks, says UW-Madison geologist John Valley.
Valley previously used these tiny mineral grains — smaller than a speck of sand — to show that rocky continents and liquid water formed on the Earth much earlier than previously thought, about 4.2 billion years ago.
In a new paper published online this week in the journal Earth and Planetary Science Letters, a team of scientists led by UW-Madison geologists Takayuki Ushikubo, Valley and Noriko Kita show that rocky continents and liquid water existed at least 4.3 billion years ago and were subjected to heavy weathering by an acrid climate.
Ushikubo, the first author on the new study, says that atmospheric weathering could provide an answer to a long-standing question in geology: why no rock samples have ever been found dating back to the first 500 million years after the Earth formed.
"Currently, no rocks remain from before about 4 billion years ago," he says. "Some people consider this as evidence for very high temperature conditions on the ancient Earth."
Previous explanations for the missing rocks have included destruction by barrages of meteorites and the possibility that the early Earth was a red-hot sea of magma in which rocks could not form.
The current analysis suggests a different scenario. Ushikubo and colleagues used a sophisticated new instrument called an ion microprobe to analyze isotope ratios of the element lithium in zircons from the Jack Hills in western Australia. By comparing these chemical fingerprints to lithium compositions in zircons from continental crust and primitive rocks similar to the Earth's mantle, they found evidence that the young planet already had the beginnings of continents, relatively cool temperatures and liquid water by the time the Australian zircons formed.
"At 4.3 billion years ago, the Earth already had habitable conditions," Ushikubo says.
The zircons' lithium signatures also hold signs of rock exposure on the Earth's surface and breakdown by weather and water, identified by low levels of a heavy lithium isotope. "Weathering can occur at the surface on continental crust or at the bottom of the ocean, but the [observed] lithium compositions can only be formed from continental crust," says Ushikubo.
The findings suggest that extensive weathering may have destroyed the Earth's earliest rocks, he says.
"Extensive weathering earlier than 4 billion years ago actually makes a lot of sense," says Valley. "People have suspected this, but there's never been any direct evidence."
Carbon dioxide in the atmosphere can combine with water to form carbonic acid, which falls as acid rain. The early Earth's atmosphere is believed to have contained extremely high levels of carbon dioxide — maybe 10,000 times as much as today.
"At [those levels], you would have had vicious acid rain and intense greenhouse [effects]. That is a condition that will dissolve rocks," Valley says. "If granites were on the surface of the Earth, they would have been destroyed almost immediately — geologically speaking — and the only remnants that we could recognize as ancient would be these zircons."
by David Tenenbaum
for Astrobiology Magazine
Moffett Field (SPX) Jul 15, 2008
The oldest rocks so far identified on Earth are one-half billion years younger than the planet itself, so geologists have relied on certain crystals as micro-messengers from ancient times. Called zircons (for their major constituent, zirconium) these crystals "are the kind of mineral that a geologist loves," says Stephen Mojzsis, an associate professor of geological sciences at the University of Colorado at Boulder.
"They capture chemical information about the melt from which they crystallize, and they preserve that information very, very well," even under extreme heat and pressure.
The most ancient zircons yet recovered date back 4.38 billion years. They provide the first direct data on the young Earth soon after the solar system coalesced from a disk of gas and dust 4.57 billion years ago. These zircons tend to refute the conventional picture of a hot, volcanic planet under constant assault by asteroids and comets.
One modern use for the ancient zircons, Mojzsis says, is to explore the late heavy bombardment, a cataclysmic, 30- to 100-million-year period of impacts that many scientists think could have extinguished any life that may have been around 4 billion years ago.
With support from a NASA Exobiology grant, Mojzsis has begun examining the effect of impacts on a new batch of zircons found in areas that have been hit by more recent impacts. Some will come from the Sudbury, Ontario impact zone, which was formed 1.8 billion years ago.
"We know the size, velocity and temperature distribution, so we will be looking at the outer shell of the zircons," which can form during the intense heat and pressure of an impact, he says. A second set of zircons was chosen to span the Cretaceous-Tertiary (KT) impact of 65 million years ago, which exterminated the dinosaurs.
"The point is to demonstrate that the Hadean zircons show the same type of impact features as these younger ones," Mojzsis says. The Hadean Era, named for the hellish conditions that supposedly prevailed on Earth, ended about 3.8 billion years ago.
The oldest zircons indicate that Earth already had oceans and arcs of islands 4.45 to 4.5 billion years ago, just 50 million years after the gigantic collision that formed the moon. At that time, Mojzsis says, "Earth had more similarities than differences with today. It was completely contrary to the old assumption, based on no data, that Earth's surface was a blasted, lunar-like landscape."
Zircons are natural timekeepers because, during crystallization, they incorporate radioactive uranium and thorium, but exclude lead. As the uranium and thorium decay, they produce lead isotopes that get trapped within the zircons.
By knowing the half-lives of the decay of uranium and thorium to lead, and the amount of these elements and their isotopes in the mineral, it's possible to calculate how much time has elapsed since the zircon crystallized.
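The arithmetic behind that statement is standard radioactive-decay bookkeeping. The sketch below uses only the uranium-238 to lead-206 system and assumes the crystal started with no lead; the measured ratio is a made-up number chosen to give an age near that of the oldest zircons.

```python
import math

HALF_LIFE_U238 = 4.468e9                      # years
LAMBDA_U238 = math.log(2) / HALF_LIFE_U238    # decay constant per year

def u_pb_age(pb206_per_u238):
    """Age implied by a measured 206Pb/238U ratio: D/P = exp(lambda * t) - 1."""
    return math.log(1.0 + pb206_per_u238) / LAMBDA_U238

print(f"{u_pb_age(0.97) / 1e9:.2f} billion years")   # roughly 4.4 billion years
```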
Zircons carry other information as well. Those that contain a high concentration of the heavier oxygen isotope O-18, compared to the more common O-16, crystallized in magma containing material that had interacted with liquid water.
A new "titanium thermometer," developed by Bruce Watson of Rensselaer Polytechnic Institute and Mark Harrison of the University of California at Los Angeles, can determine the temperature of crystallization based on the titanium concentration.
Both these analyses showed that zircons from as far back as 4.38 billion years ago crystallized in relatively cool conditions, such as at subduction zones where water and magma interact at the intersection of tectonic plates.
To Mojzsis, the message from the most ancient zircons is this: just 50 million years after a mammoth impact formed the moon, Earth had conditions we might recognize today, not the hellish conditions long favored by the conventional viewpoint.
For reasons related to the orbital dynamics of the solar system, that bucolic era was brutally interrupted about 3.96 billion years ago by the "late heavy bombardment," a period of intense asteroid impacts that churned the planet's surface.
The zircons record this period in the form of a narrow, 2-micron-thick zone that most likely formed during a brief exposure to very high temperature. Careful radioactive dating shows that these zones formed essentially simultaneously, even in Hadean zircons of different ages, Mojzsis says. "We found the most amazing thing. These zircons, even if the core ages are different, all share a common 3.96 billion year age for this overgrowth."
The zones also record "massive loss of lead, which happens when the system is heated quite catastrophically and then quenched," Mojzsis adds.
"So it looks like these zircons were sort of cauterized by some process" that both built up the zone and allowed the lead to escape. The cause, he says, was likely "some extremely energetic event" at 3.96 billion years ago, a date that "correlates very nicely to other estimates of the beginning of the late heavy bombardment."
The intense impacts of this period would seem to have exterminated any life that had formed previously. And yet Mojzsis says this conclusion may be overturned by the zircon data.
"From the Hadean zircons we can understand further what the thermal consequences for the crust were, and test our models for habitability during the late heavy bombardment. Most people think it sterilized Earth's surface, but our analysis says that is not the case at all. For a microbial biosphere at some depth in crustal rocks and sediments, impact at the surface zone did not matter," he says.
Indeed, University of Colorado post-doctoral student Oleg Abramov has calculated that the habitable volume of Earth's crust actually increased by a factor of 10 for heat-loving thermophiles and hyperthermophiles during the impacts, Mojzsis says.
This raises the possibility that life survived the period of heavy impacts. "The bombing, however locally devastating, creates quite an ample supply of hydrothermal altered rock and hydrothermal systems, worldwide," says Mojzsis.
Although that's bad for organisms that require cool conditions, "thermophiles do not even notice," he says.
"This goes back to an old idea, maybe the late heavy bombardment pruned the tree of life, and selected for thermophiles. Whatever the diversity of life was like before the late heavy bombardment, afterwards it was diminished, and all life henceforth is derived from these survivors."
Columbus OH (SPX) Jul 29, 2008
A single typhoon in Taiwan buries as much carbon in the ocean -- in the form of sediment -- as all the other rains in that country combined over the rest of the year.
That's the finding of an Ohio State University study published in a recent issue of the journal Geology.
The study -- the first ever to examine the chemistry of stream water and sediments that were being washed out to sea while a typhoon was happening at full force -- will help scientists develop better models of global climate change.
Anne Carey, associate professor of earth sciences at Ohio State, said that she and her colleagues have braved two typhoons since starting the project in 2004. The Geology paper details their findings from a study of Taiwan's Choshui River during Typhoon Mindulle in July of that year.
Carey's team analyzes water and river sediments from around the world in order to measure how much carbon is pulled from the atmosphere as mountains weather away.
They study two types of weathering: physical and chemical. Physical weathering happens when organic matter containing carbon adheres to soil that is washed into the ocean and buried.
Chemical weathering happens when silicate rock on the mountainside is exposed to carbon dioxide and water, and the rock disintegrates. The carbon washes out to sea, where it eventually forms calcium carbonate and gets deposited on the ocean floor.
If the carbon gets buried in the ocean, Carey explained, it eventually becomes part of sedimentary rock, and doesn't return to the atmosphere for hundreds of millions of years.
Though the carbon buried in the ocean by storms won't solve global warming, knowing how much carbon is buried offshore of mountainous islands such as Taiwan could help scientists make better estimates of how much carbon is in the atmosphere -- and help them decipher its effect on global climate change.
Scientists have long suspected that extreme storms such as hurricanes and typhoons bury a lot of carbon, because they wash away so much sediment. But since the sediment washes out to sea quickly, samples had to be captured during a storm to answer the question definitively.
"We discovered that if you miss sampling these storms, then you miss truly understanding the sediment and chemical delivery of these rivers," said study coauthor and Ohio State doctoral student Steve Goldsmith.
The researchers found that, of the 61 million tons of sediment carried out to sea by the Choshui River during Typhoon Mindulle, some 500,000 tons consisted of particles of carbon created during chemical weathering. That's about 95 percent as much carbon as the river transports during normal rains over an entire year, and it equates to more than 400 tons of carbon being washed away for each square mile of the watershed during the storm.
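A quick consistency check of those figures is straightforward. The watershed area used below is an assumed round number for illustration only, since the article does not state it.

```python
# Back-of-the-envelope check of the figures quoted above.
sediment_tons = 61_000_000       # total sediment carried by the Choshui River
carbon_tons = 500_000            # carbon attributed to chemical weathering
watershed_sq_miles = 1_200       # assumed watershed area (not given in the article)

carbon_fraction = carbon_tons / sediment_tons
carbon_per_sq_mile = carbon_tons / watershed_sq_miles

print(f"carbon share of sediment: {carbon_fraction:.1%}")          # ~0.8%
print(f"carbon per square mile: {carbon_per_sq_mile:.0f} tons")    # ~400+ tons
```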
Carey's collaborators from Academia Sinica -- a major research institute in Taiwan -- happened to be out collecting sediments for a long-term study of the region when Mindulle formed in the Pacific.
"I don't want to say that a typhoon is serendipity, but you take what the weather provides," Carey said. "Since Taiwan has an average of four typhoons a year, in summer you pretty much can't avoid them. It's not unusual for some of us to be out in the field when one hits."
As the storm neared the coast, the geologists drove to the Choshui River watershed near the central western portion of the country.
Normally, the river is very shallow. But during a typhoon, it swells with water from the mountains. It's not unusual to see boulders the size of cars -- or actual cars -- floating downstream.
Mindulle gave the geologists their first chance to test some new equipment they designed for capturing water samples from storm runoff.
The equipment consisted of one-liter plastic bottles wedged inside a weighted Teflon case that would sink beneath the waves during a storm. They suspended the contraption from bridges above the river as the waters raged below. At the height of the storm, they tied themselves to the bridges for safety.
They did this once every three hours, taking refuge in a nearby storm shelter in between.
Four days later, after the storm had passed, they filtered the water from the bottles and analyzed the sediments for particulate organic carbon. Then they measured the amount of silica in the remaining water sample in order to calculate the amount of weathering occurring with the storm.
Because they know that two molecules of carbon dioxide are consumed in weathering each molecule of silica, they could then calculate how much carbon washed out to sea. Carey and Goldsmith did those calculations with study coauthor Berry Lyons, professor of earth sciences at Ohio State.
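A minimal sketch of that conversion, using only the 2:1 ratio of carbon dioxide to silica quoted above (the published calculation involves corrections not shown here):

```python
# Convert a measured mass of dissolved silica into the mass of carbon
# drawn down as CO2, assuming 2 mol CO2 consumed per mol SiO2 released.
MOLAR_MASS_SIO2 = 60.08  # g/mol
MOLAR_MASS_C = 12.01     # g/mol

def carbon_from_silica(silica_grams):
    """Grams of carbon sequestered, per the 2:1 CO2-to-silica ratio."""
    moles_silica = silica_grams / MOLAR_MASS_SIO2
    moles_co2 = 2.0 * moles_silica
    return moles_co2 * MOLAR_MASS_C

# One tonne of dissolved silica implies roughly 0.4 tonnes of carbon.
print(f"{carbon_from_silica(1.0e6) / 1e6:.2f} tonnes of carbon per tonne of silica")
```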
Carey cautioned that this is the first study of its kind, and more data are needed to put the Mindulle numbers into a long-term perspective. She and Goldsmith are still analyzing the data from Typhoon Haitang, which struck when the two of them happened to be in Taiwan in 2005, so it's too early to say how much carbon runoff occurred during that storm.
"But with two to four typhoons happening in Taiwan per year, it's not unreasonable to think that the amount of carbon sequestered during these storms could be comparable to the long-term annual carbon flux for the country," she said.
The findings could be useful to scientists who model global climate change, Goldsmith said. He pointed to other studies that suggest that mountainous islands such as Taiwan, New Zealand, and Papua New Guinea produce one third of all the sediments that enter the world oceans annually.
As scientists calculate Earth's carbon "budget" -- how much carbon is being added to the atmosphere and how much is being taken away -- they need to know how much is being buried in the oceans.
"What is the true budget of carbon being sequestered in the ocean per year? If the majority of sediment and dissolved constituents are being delivered during these storms, and the storms aren't taken into account, those numbers are going to be off," Goldsmith said.
As weathering pulls carbon from the atmosphere, the planet cools. For instance, other Ohio State geologists recently determined that the rise and weathering of the Appalachians preceded an ice age 450 million years ago.
If more carbon is being buried in the ocean than scientists once thought, does that mean we can worry less about global warming?
"I wouldn't go that far," Goldsmith said. "But if you want to build an accurate climate model, you need to understand how much CO2 is taken out naturally every year. And this paper shows that those numbers could be off substantially."
Carey agreed, and added that weathering rocks is not a practical strategy for reversing global warming, either.
"You'd have to weather all the volcanic rocks in the world to reduce the CO2 level back to pre-industrial times," she said. "You'd have to grind the rock into really fine particles, and you'd consume a lot of energy -- fossil fuels -- to do that, so there probably wouldn't be any long-term gain."
X-rays use diamonds as a window to the center of the Earth
Diamonds from Brazil have provided the answers to a question that Earth scientists have been trying to understand for many years: how is oceanic crust that has been subducted deep into the Earth recycled back into volcanic rocks?
A team of researchers, led by the University of Bristol and working alongside colleagues at the STFC Daresbury Laboratory, has gained a deeper insight into how the Earth recycles itself in the deep-Earth tectonic cycle, far beyond the depths that can be accessed by drilling. The full paper on this research was published (31 July) in the scientific journal Nature.
The Earth's oceanic crust is constantly renewed in a cycle which has been occurring for billions of years. New crust forms from below as magma from the Earth's mantle is forced up at mid-ocean ridges, and it is eventually returned to the mantle, sinking down at subduction zones that extend deep beneath the continents. Seismic imaging suggests that the oceanic crust can be subducted to depths of almost 3,000 km below the Earth's surface, where it can remain for billions of years, during which time the crust material develops its own unique 'flavour' in comparison with the surrounding magmas. Exactly how this happens is a question that has baffled Earth scientists for years.
The Earth's oceanic crust lies under seawater for millions of years, and over time reacts with the seawater to form carbonate minerals, such as limestone. When subducted, these carbonate minerals have the effect of lowering the melting point of the crust material compared to that of the surrounding magma. It is thought that this melt is loaded with elements that carry the crustal 'flavour'.
The team of researchers has now confirmed this theory by looking at diamonds from the Juina area of Brazil. As the carbonate-rich magma rises through the mantle, diamonds crystallise, trapping minute quantities of minerals in the process. They form at great depths and pressures and can therefore provide clues as to what is happening in the Earth's deep interior, down to several hundred kilometres - far beyond the depths that can be physically accessed by drilling. Diamonds from the Juina area are particularly renowned for these mineral inclusions.
At the Synchrotron Radiation Source (SRS) at the STFC Daresbury Laboratory, the team used an intense beam of x-rays to look at the conditions of formation for the mineral perovskite which occurs in these diamonds but does not occur naturally near the Earth's surface. With a focused synchrotron X-ray beam less than half the width of a human hair, they used X-ray diffraction techniques to establish the conditions at which perovskite is stable, concluding that these mineral inclusions were formed up to 700km into the Earth in the mantle transition zone.
These results, backed up by further experiments carried out at the University of Edinburgh, the University of Bayreuth in Germany, and the Advanced Light Source in the USA, enabled the research team to show that the diamonds and their perovskite inclusions had indeed crystallised from very small-degree melts in the Earth's mantle. Upon heating, oceanic crust forms carbonatite melts, super-concentrated in trace elements with the 'flavour' of the Earth's oceanic crust. Furthermore, such melts may be widespread throughout the mantle and may have been 'flavouring' the mantle rocks for a very long time.
Dr Alistair Lennie, a research scientist at STFC Daresbury Laboratory, said: "Using X-rays to find solutions to Earth science questions is an area that has been highly active on the SRS at Daresbury Laboratory for some time. We are very excited that the SRS has contributed to answering such long standing questions about the Earth in this way."
Dr. Michael Walter, Department of Earth Sciences, University of Bristol, said: "The resources available at Daresbury's SRS for high-pressure research have been crucial in helping us determine the origin of these diamonds and their inclusions."
Source: Science and Technology Facilities Council
Moffett Field CA (SPX) Aug 25, 2008
For the last few years, astronomers have faced a puzzle: The vast majority of asteroids that come near the Earth are of a type that matches only a tiny fraction of the meteorites that most frequently hit our planet. Since meteorites are mostly pieces of asteroids, this discrepancy was hard to explain, but a team from MIT and other institutions has now found what it believes is the answer to the puzzle.
The smaller rocks that most often fall to Earth, it seems, come straight in from the main asteroid belt out between Mars and Jupiter, rather than from the near-Earth asteroid (NEA) population.
The puzzle gradually emerged from a long-term study of the properties of asteroids carried out by MIT professor of planetary science Richard Binzel and his students, along with postdoctoral researcher P. Vernazza, who is now with the European Space Agency, and A.T. Tokunaga, director of the University of Hawaii's Institute of Astronomy.
By studying the spectral signatures of near-Earth asteroids, they were able to compare them with spectra obtained on Earth from the thousands of meteorites that have been recovered from falls. But the more they looked, the more they found that most NEAs -- about two-thirds of them -- match a specific type of meteorites called LL chondrites, which only represent about 8 percent of meteorites. How could that be?
"Why do we see a difference between the objects hitting the ground and the big objects whizzing by?" Binzel asks. "It's been a head-scratcher." As the effect became gradually more and more noticeable as more asteroids were analyzed, "we finally had a big enough data set that the statistics demanded an answer. It could no longer be just a coincidence."
Way out in the main belt, the population is much more varied, and approximates the mix of types that is found among meteorites. But why would the things that most frequently hit us match this distant population better than it matches the stuff that's right in our neighborhood? That's where the idea emerged of a fast track all the way from the main belt to a "splat!" on Earth's surface.
This fast track, it turns out, is caused by an obscure effect that was discovered long ago, but only recently recognized as a significant factor in moving asteroids around, called the Yarkovsky effect.
The Yarkovsky effect causes asteroids to change their orbits as a result of the way they absorb the sun's heat on one side and radiate it back later as they rotate around. This causes a slight imbalance that slowly, over time, alters the object's path. But the key thing is this: The effect acts much more strongly on the smallest objects, and only weakly on the larger ones.
"We think the Yarkovsky effect is so efficient for meter-size objects that it can operate on all regions of the asteroid belt," not just its inner edge, Binzel says.
Thus, for chunks of rock from boulder-size on down -- the kinds of things that end up as typical meteorites -- the Yarkovsky effect plays a major role, moving them with ease from throughout the asteroid belt on to paths that can head toward Earth. For larger asteroids a kilometer or so across, the kind that we worry about as potential threats to the Earth, the effect is so weak it can only move them small amounts.
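As a toy illustration of that size dependence (the real effect also depends on spin, shape, obliquity, and thermal properties), the drift rate in orbital distance scales roughly inversely with an object's diameter. The reference drift rate used below is an assumed, order-of-magnitude value, not one taken from the study.

```python
# Toy model: Yarkovsky semi-major-axis drift scaling inversely with size.
REFERENCE_DIAMETER_M = 1_000.0       # 1 km body
REFERENCE_DRIFT_AU_PER_MYR = 2e-4    # assumed drift rate for a 1 km body

def yarkovsky_drift(diameter_m):
    """Approximate semi-major-axis drift (AU per million years),
    assuming drift ~ 1 / diameter."""
    return REFERENCE_DRIFT_AU_PER_MYR * (REFERENCE_DIAMETER_M / diameter_m)

for d in (1.0, 10.0, 1_000.0):       # a 1 m rock, a 10 m boulder, a 1 km asteroid
    print(f"{d:>7.0f} m  ->  {yarkovsky_drift(d):.2e} AU/Myr")
```

The point of the sketch is simply that a meter-scale fragment drifts orders of magnitude faster than a kilometer-scale asteroid, which is why small main-belt debris can reach Earth-crossing orbits while large bodies barely budge.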
Binzel's study concludes that the largest near-Earth asteroids mostly come from the asteroid belt's innermost edge, where they are part of a specific "family" thought to all be remnants of a larger asteroid that was broken apart by collisions.
With an initial nudge from the Yarkovsky effect, kilometer-sized asteroids from the Flora region can find themselves "over the edge" of the asteroid belt and sent on a path to Earth's vicinity through the perturbing effects of the planets called resonances.
The new study is also good news for protecting the planet. One of the biggest problems in figuring out how to deal with an approaching asteroid, if and when one is discovered on a potential collision course, is that they are so varied. The best way of dealing with one kind might not work on another.
But now that this analysis has shown that the majority of near-Earth asteroids are of this specific type -- stony objects, rich in the mineral olivine and poor in iron -- it's possible to concentrate most planning on dealing with that kind of object, Binzel says.
"Odds are, an object we might have to deal with would be like an LL chondrite, and thanks to our samples in the laboratory, we can measure its properties in detail," he says. "It's the first step toward 'know thy enemy'."
The study not only yields information about impactors that might arrive at Earth in the future, but also provides new information about the types of materials delivered to Earth from extraterrestrial sources. Many scientists believe that impacts could have delivered important materials for the origin of life on early Earth.
The research is reported in the journal Nature. In addition to Binzel, Vernazza and Tokunaga, the co-authors are MIT graduate students Christina Thomas and Francesca DeMeo, S.J. Bus of the University of Hawaii, and A.S. Rivkin of Johns Hopkins University. The work was supported by NASA and the NSF.
Team finds Earth's 'oldest rocks'
By James Morgan
Science reporter, BBC News
Earth's most ancient rocks, with an age of 4.28 billion years, have been found on the shore of Hudson Bay, Canada.
Writing in Science journal, a team reports finding that a sample of Nuvvuagittuq greenstone is 250 million years older than any rocks known.
It may even hold evidence of activity by ancient life forms.
If so, it would be the earliest evidence of life on Earth - but co-author Don Francis cautioned that this had not been established.
"The rocks contain a very special chemical signature - one that can only be found in rocks which are very, very old," he said.
The professor of geology, who is based at McGill University in Montreal, added: "Nobody has found that signal any place else on the Earth."
"Originally, we thought the rocks were maybe 3.8 billion years old.
"Now we have pushed the Earth's crust back by hundreds of millions of years. That's why everyone is so excited."
Ancient rocks act as a time capsule - offering chemical clues to help geologists solve longstanding riddles of how the Earth formed and how life arose on it.
But the majority of our planet's early crust has already been mashed and recycled into Earth's interior several times over by plate tectonics.
Before this study, the oldest whole rocks were from a 4.03 billion-year-old body known as the Acasta Gneiss, in Canada's Northwest Territories.
The only things known to be older are mineral grains called zircons from Western Australia, which date back 4.36 billion years.
Professor Francis was looking for clues to the nature of the Earth's mantle 3.8 billion years ago.
He and colleague Jonathan O'Neil, from McGill University, travelled to remote tundra on the eastern shore of Hudson Bay, in northern Quebec, to examine an outcrop of the Nuvvuagittuq greenstone belt.
They sent samples for chemical analysis to scientists at the Carnegie Institution of Washington, who dated the rocks by measuring isotopes of the rare earth elements neodymium and samarium, which decay over time at a known rate.
The oldest rocks, termed "faux amphibolite", were dated within the range from 3.8 to 4.28 billion years old.
"4.28 billion is the figure I favour," says Francis.
"It could be that the rock was formed 4.3 billion years ago, but then it was re-worked into another rock form 3.8bn years ago. That's a hard distinction to draw."
The same unit of rock contains geological structures which might only have been formed if early life forms were present on the planet, Professor Francis suggested.
The material displays a banded iron formation - fine ribbon-like bands of alternating magnetite and quartz.
This feature is typical of rock precipitated in deep sea hydrothermal vents - which have been touted as potential habitats for early life on Earth.
"These ribbons could imply that 4.3 billion years ago, Earth had an ocean, with hydrothermal circulation," said Francis.
"Now, some people believe that to make precipitation work, you also need bacteria.
"If that were true, then this would be the oldest evidence of life.
"But if I were to say that, people would yell and scream and say that there is no hard evidence."
Fortunately, geologists have already begun looking for such evidence, in similar rocks found in Greenland, dated 3.8 billion years.
"The great thing about our find, is it will bring in people here to Lake Hudson to carry out specialised studies and see whether there was life here or not," says Francis.
"Regardless of that, or the exact date of the rocks, the exciting thing is that we've seen a chemical signature that's never been seen before. That alone makes this an exciting discovery."
Birth of a new ocean
In a remote part of northern Ethiopia, the Earth’s crust is being stretched to breaking point, providing geologists with a unique opportunity to watch the birth of what may eventually become a new ocean. Lorraine Field, a PhD student, and Dr James Hammond, both from the Department of Earth Sciences, are two of the many scientists involved in documenting this remarkable event.
The African continent is slowly splitting apart along the East African Rift, a 3,000 kilometre-long series of deep basins and flanking mountain ranges. An enormous plume of hot, partially molten rock is rising diagonally from the core-mantle boundary, some 2,900 kilometres beneath Southern Africa, and erupting at the Earth’s surface, or cooling just beneath it, in the Afar region of Ethiopia. It is the rise of this plume that is stretching the Earth’s crust to breaking point.
In September 2005, a series of fissures suddenly opened up along a 60-kilometre section as the plate catastrophically responded to the forces pulling it apart. The rapidity and immense length of the rupture – an event unprecedented in scientific history – greatly excited geologists, who rushed to this very remote part of the world to start measuring what was going on.

It began with a big earthquake and continued with a swarm of moderate tremors. About a week into the sequence, eruption of the Dabbahu Volcano threw ash and rocks into the air, causing the evacuation of 6,300 people from the region, while cracks appeared in the ground, some of them more than a metre wide. The only fatality was a camel that fell into a fissure.

While these movements are only the beginnings of what would be needed to create a new ocean – the complete process taking millions of years – the Afar event has given geologists a unique opportunity to study the rupture process which normally occurs on the floor of deep oceans. In order to do this research, a consortium of universities was formed and divided into five interdisciplinary working groups. Each group has its own aims and experimental programme whilst linking with, and providing results to, the other groups.
Lorraine Field is studying the Dabbahu volcano, located close to where the rifting event occurred, which had never been known to erupt before it woke up in September 2005. Following a very strong earthquake, locals reported a dark column of ‘smoke’ that rose high into the atmosphere and spread out to form an umbrella-shaped cloud. Emissions darkened the area for three days and three nights. Many of the lava flows on the mountain are made of obsidian, a black volcanic glass, and the fissure which opened in 2005 emits fumes and steam with a very strong smell of bad eggs.

Water being extremely scarce, the local Afaris have devised an ingenious method of capturing it. They build a pit next to a fumarole that is emitting steam and gases. A low circular retaining wall is then built around the fumarole and topped with branches and grasses. These provide a condensing surface for the vapour which collects in the pit or ‘boina’. Of some concern, however, is the level of contamination in the water from the various chemicals and minerals found in volcanic areas. Occasionally goats have died from drinking this water, so in order to test its quality the locals hold a shiny piece of obsidian over the fumarole. If a milky deposit forms, this indicates a ‘bad’ boina, so they move on to the next.

Members of the consortium have brought back some water to analyse in the hope of developing a device, similar to the Aquatest kit reported in the last issue of re:search, but which tests for toxic metals rather than bacteria.
Field’s base was in a small village called Digdigga, which comprises a long main street with a mix of square houses built of wood and traditional round Afar houses, made of a lattice framework of sticks covered in thatch, skins and sacking. Digdigga has a concrete school building, the grounds of which became Field’s base camp for nearly three weeks in January this year. The village is situated on an immense, flat, windy plain surrounded by volcanic mountains and cinder cones. Due to the lack of any vegetation, everything quickly becomes covered in a layer of dust, but the bare rocks mean that satellite images can be used to measure the way the Earth’s surface changes as faults move and as molten rock moves up and along the fissures within the rift valley.
Conditions are still too extreme for normal field mapping and so representative rock samples from key locations have been collected. In order to access Dabbahu mountain, the team hired eight camels to carry supplies, taking enough food and water for six days (and an emergency day), and keeping in touch with the base camp by satellite phone. The rocks Field collected will be analysed to determine how the chemistry of the magmas varies at different locations and how it changes over time. This in turn gives information about the depth of the magma chambers within the crust and the relationship between rifting and volcanism in this area.
James Hammond is using a variety of seismological techniques to image the crust and mantle beneath Afar. For example, seismic waves are generated during earthquakes, so a network of 40 seismometers has been set up across the plate boundary zone to record seismic activity. One of the seismic stations was placed in the chief’s house, close to the summit of Erta Ale. This extraordinary volcano is essentially an open conduit right down into the mantle. By comparing the arrival times of seismic waves at the seismometers, Hammond and his team will be able to generate a three-dimensional image of the crust, crust-mantle boundary, mantle structure and base of the lithosphere across the study area. This will allow some constraints to be placed on the location of melt in this region, enabling the team to obtain information on the mechanisms of break-up involved in the rifting process. In a nutshell, the consortium has the best array of imaging equipment deployed anywhere in the world to help it ‘see’ into an actively rifting continent.
But all this work will not just benefit the scientific community; it will also have an immediate impact on understanding and mitigating natural hazards in Afar. Consequently, the teams work closely with Ethiopian scientists and policy makers in the region. In addition, the project will provide training for Ethiopian doctoral students and postdoctoral researchers, and Ethiopian scientists will be trained in the techniques used by the consortium. Over the next five years, scientists from the UK, Ethiopia and many other countries will all come together to further our understanding of the processes involved in shaping the surface of the Earth.
Provided by University of Bristol
University of Minnesota geology and geophysics researchers, along with their colleagues from China, have uncovered surprising effects of climate patterns on social upheaval and the fall of dynasties in ancient China.
Their research identifies a natural phenomenon that may have been the last straw for some Chinese dynasties: a weakening of the summer Asian Monsoons. Such weakening accompanied the fall of three dynasties and now could be lessening precipitation in northern China.
The study, led by researchers from the University of Minnesota and Lanzhou University in China, appears in Science.
The work rests on climate records preserved in the layers of stone in a 118-millimeter-long stalagmite found in Wanxiang Cave in Gansu Province, China. By measuring amounts of the elements uranium and thorium throughout the stalagmite, the researchers could tell the date each layer was formed.
And by analyzing the "signatures" of two forms of oxygen in the stalagmite, they could match amounts of rainfall--a measure of summer monsoon strength--to those dates.
The stalagmite was formed over 1,810 years; stone at its base dates from A.D. 190, and stone at its tip was laid down in A.D. 2003, the year the stalagmite was collected.
"It is not intuitive that a record of surface weather would be preserved in underground cave deposits. This research nicely illustrates the promise of paleoclimate science to look beyond the obvious and see new possibilities," said David Verardo, director of the U.S. National Science Foundation's Paleoclimatology Program, which funded the research.
"Summer monsoon winds originate in the Indian Ocean and sweep into China," said Hai Cheng, corresponding author of the paper and a research scientist at the University of Minnesota. "When the summer monsoon is stronger, it pushes farther northwest into China."
These moisture-laden winds bring rain necessary for cultivating rice. But when the monsoon is weak, the rains stall farther south and east, depriving northern and western parts of China of summer rains. A lack of rainfall could have contributed to social upheaval and the fall of dynasties.
The researchers discovered that periods of weak summer monsoons coincided with the last years of the Tang, Yuan, and Ming dynasties, which are known to have been times of popular unrest. Conversely, the research group found that a strong summer monsoon prevailed during one of China's "golden ages," the Northern Song Dynasty.
The ample summer monsoon rains may have contributed to the rapid expansion of rice cultivation from southern China to the midsection of the country. During the Northern Song Dynasty, rice first became China's main staple crop, and China's population doubled.
"The waxing and waning of summer monsoon rains are just one piece of the puzzle of changing climate and culture around the world," said Larry Edwards, Distinguished McKnight University Professor in Geology and Geophysics and a co-author on the paper. For example, the study showed that the dry period at the end of the Tang Dynasty coincided with a previously identified drought halfway around the world, in Meso-America, which has been linked to the fall of the Mayan civilization.
The study also showed that the ample summer rains of the Northern Song Dynasty coincided with the beginning of the well-known Medieval Warm Period in Europe and Greenland. During this time--the late 10th century--Vikings colonized southern Greenland. Centuries later, a series of weak monsoons prevailed as Europe and Greenland shivered through what geologists call the Little Ice Age.
In the 14th and early 15th centuries, as the cold of the Little Ice Age settled into Greenland, the Vikings disappeared from there. At the same time, on the other side of the world, the weak monsoons of the 14th century coincided with the end of the Yuan Dynasty.
A second major finding concerns the relationship between temperature and the strength of the monsoons. For most of the last 1,810 years, as average temperatures rose, so, too, did the strength of the summer monsoon. That relationship flipped, however, around 1960, a sign that the late 20th century weakening of the monsoon and drying in northwestern China was caused by human activity.
If carbon dioxide is the culprit, as some have proposed, the drying trend may well continue in Inner Mongolia, northern China and neighboring areas on the fringes of the monsoon's reach, as society is likely to continue adding carbon dioxide to the atmosphere for the foreseeable future.
If, however, the culprit is man-made soot, as others have proposed, the trend could be reversed, the researchers said, by reduction of soot emissions.
Washington DC (SPX) Nov 14, 2008
Evolution isn't just for living organisms. Scientists at the Carnegie Institution have found that the mineral kingdom co-evolved with life, and that up to two thirds of the more than 4,000 known types of minerals on Earth can be directly or indirectly linked to biological activity. The finding, published in American Mineralogist, could aid scientists in the search for life on other planets.
Robert Hazen and Dominic Papineau of the Carnegie Institution's Geophysical Laboratory, with six colleagues, reviewed the physical, chemical, and biological processes that gradually transformed about a dozen different primordial minerals in ancient interstellar dust grains to the thousands of mineral species on the present-day Earth. (Unlike biological species, each mineral species is defined by its characteristic chemical makeup and crystal structure.)
"It's a different way of looking at minerals from more traditional approaches," says Hazen."Mineral evolution is obviously different from Darwinian evolution-minerals don't mutate, reproduce or compete like living organisms. But we found both the variety and relative abundances of minerals have changed dramatically over more than 4.5 billion years of Earth's history."
All the chemical elements were present from the start in the Solar System's primordial dust, but they formed comparatively few minerals. Only after large bodies such as the Sun and planets congealed did there exist the extremes of temperature and pressure required to forge a large diversity of mineral species. Many elements were also too dispersed in the original dust clouds to be able to solidify into mineral crystals.
As the Solar System took shape through "gravitational clumping" of small, undifferentiated bodies -- fragments of which are found today in the form of meteorites -- about 60 different minerals made their appearance. Larger, planet-sized bodies, especially those with volcanic activity and bearing significant amounts of water, could have given rise to several hundred new mineral species.
Mars and Venus, which Hazen and coworkers estimate to have at least 500 different mineral species in their surface rocks, appear to have reached this stage in their mineral evolution.
However, only on Earth -- at least in our Solar System -- did mineral evolution progress to the next stages. A key factor was the churning of the planet's interior by plate tectonics, the process that drives the slow shifting of continents and ocean basins over geological time.
Unique to Earth, plate tectonics created new kinds of physical and chemical environments where minerals could form, and thereby boosted mineral diversity to more than a thousand types.
What ultimately had the biggest impact on mineral evolution, however, was the origin of life, approximately 4 billion years ago. "Of the approximately 4,300 known mineral species on Earth, perhaps two thirds of them are biologically mediated," says Hazen.
"This is principally a consequence of our oxygen-rich atmosphere, which is a product of photosynthesis by microscopic algae." Many important minerals are oxidized weathering products, including ores of iron, copper and many other metals.
Microorganisms and plants also accelerated the production of diverse clay minerals. In the oceans, the evolution of organisms with shells and mineralized skeletons generated thick layered deposits of minerals such as calcite, which would be rare on a lifeless planet.
"For at least 2.5 billion years, and possibly since the emergence of life, Earth's mineralogy has evolved in parallel with biology," says Hazen. "One implication of this finding is that remote observations of the mineralogy of other moons and planets may provide crucial evidence for biological influences beyond Earth."
Stanford University geologist Gary Ernst called the study "breathtaking," saying that "the unique perspective presented in this paper may revolutionize the way Earth scientists regard minerals."
Plate tectonics started over 4 billion years ago, geochemists report
(PhysOrg.com) -- A new picture of the early Earth is emerging, including the surprising finding that plate tectonics may have started more than 4 billion years ago — much earlier than scientists had believed, according to new research by UCLA geochemists reported Nov. 27 in the journal Nature.
"We are proposing that there was plate-tectonic activity in the first 500 million years of Earth's history," said geochemistry professor Mark Harrison, director of UCLA's Institute of Geophysics and Planetary Physics and co-author of the Nature paper. "We are reporting the first evidence of this phenomenon."
"Unlike the longstanding myth of a hellish, dry, desolate early Earth with no continents, it looks like as soon as the Earth formed, it fell into the same dynamic regime that continues today," Harrison said. "Plate tectonics was inevitable, life was inevitable. In the early Earth, there appear to have been oceans; there could have been life — completely contradictory to the cartoonish story we had been telling ourselves."
"We're revealing a new picture of what the early Earth might have looked like," said lead author Michelle Hopkins, a UCLA graduate student in Earth and space sciences. "In high school, we are taught to see the Earth as a red, hellish, molten-lava Earth. Now we're seeing a new picture, more like today, with continents, water, blue sky, blue ocean, much earlier than we thought."
The Earth is 4.5 billion years old. Some scientists think plate tectonics — the geological phenomenon involving the movement of huge crustal plates that make up the Earth's surface over the planet's molten interior — started 3.5 billion years ago, others that it began even more recently than that.
The research by Harrison, Hopkins and Craig Manning, a UCLA professor of geology and geochemistry, is based on their analysis of ancient mineral grains known as zircons found inside molten rocks, or magmas, from Western Australia that are about 3 billion years old. Zircons are heavy, durable minerals related to the synthetic cubic zirconia used for imitation diamonds and costume jewelry. The zircons studied in the Australian rocks are about twice the thickness of a human hair.
Hopkins analyzed the zircons with UCLA's high-resolution ion microprobe, an instrument that enables scientists to date and learn the exact composition of samples with enormous precision. The microprobe shoots a beam of ions, or charged atoms, at a sample, releasing from the sample its own ions, which are then analyzed in a mass spectrometer. Scientists can aim the beam of ions at specific microscopic areas of a sample and conduct a high-resolution isotope analysis of them without destroying the object.
"The microprobe is the perfect tool for determining the age of the zircons," Harrison said.
The analysis determined that some of the zircons found in the magmas were more than 4 billion years old. They were also found to have been formed in a region with heat flow far lower than the global average at that time.
"The global average heat flow in the Earth's first 500 million years was thought to be about 200 to 300 milliwatts per meter squared," Hopkins said. "Our zircons are indicating a heat flow of just 75 milliwatts per meter squared — the figure one would expect to find in subduction zones, where two plates converge, with one moving underneath the other."
"The data we are reporting are from zircons from between 4 billion and 4.2 billion years ago," Harrison said. "The evidence is indirect, but strong. We have assessed dozens of scenarios trying to imagine how to create magmas in a heat flow as low as we have found without plate tectonics, and nothing works; none of them explain the chemistry of the inclusions or the low melting temperature of the granites."
Evidence for water on Earth during the planet's first 500 million years is now overwhelming, according to Harrison.
"You don't have plate tectonics on a dry planet," he said.
Strong evidence for liquid water at or near the Earth's surface 4.3 billion years ago was presented by Harrison and colleagues in a Jan. 11, 2001, cover story in Nature.
"Five different lines of evidence now support that once radical hypothesis," Harrison said. "The inclusions we found tell us the zircons grew in water-saturated magmas. We now observe a surprisingly low geothermal gradient, a low rate at which temperature increases in the Earth. The only mechanism that we recognize that is consistent with everything we see is that the formation of these zircons was at a plate-tectonic boundary. In addition, the chemistry of the inclusions in the zircons is characteristic of the two kinds of magmas today that we see at place-tectonic boundaries."
"We developed the view that plate tectonics was impossible in the early Earth," Harrison added. "We have now made observations from the Hadean (the Earth's earliest geological eon) — these little grains contain a record about the conditions under which they formed — and the zircons are telling us that they formed in a region with anomalously low heat flow. Where in the modern Earth do you have heat flow that is one-third of the global average, which is what we found in the zircons? There is only one place where you have heat flow that low in which magmas are forming: convergent plate-tectonic boundaries."
Three years ago, Harrison and his colleagues applied a technique to determine the temperature of ancient zircons.
"We discovered the temperature at which these zircons formed was constant and very low," Harrison said. "You can't make a magma at any lower temperature than what we're seeing in these zircons. You look at artists' conceptions of the early Earth, with flying objects from outer space making large craters; that should make zircons hundreds of degrees centigrade hotter than the ones we see. The only way you can make zircons at the low temperature we see is if the melt is water-saturated. There had to be abundant water. That's a big surprise because our longstanding conception of the early Earth is that it was dry."
Source: University of California - Los Angeles
As Ice Melts, Antarctic Bedrock Is on the Move
As ice melts away from Antarctica, parts of the continental bedrock are rising in response -- and other parts are sinking, scientists have discovered.
The finding will give much needed perspective to satellite instruments that measure ice loss on the continent, and help improve estimates of future sea level rise.
"Our preliminary results show that we can dramatically improve our estimates of whether Antarctica is gaining or losing ice," said Terry Wilson, associate professor of earth sciences at Ohio State University.
Wilson reported the research in a press conference Monday, December 15, 2008 at the American Geophysical Union meeting in San Francisco.
These results come from a trio of global positioning system (GPS) sensor networks on the continent.
Wilson leads POLENET, a growing network of GPS trackers and seismic sensors implanted in the bedrock beneath the West Antarctic Ice Sheet (WAIS). POLENET is reoccupying sites previously measured by the West Antarctic GPS Network (WAGN) and the Transantarctic Mountains Deformation (TAMDEF) network.
In separate sessions at the meeting, Michael Bevis, Ohio Eminent Scholar in geodynamics and professor of earth sciences at Ohio State, presented results from WAGN, while doctoral student Michael Willis presented results from TAMDEF.
Taken together, the three projects are yielding the best view yet of what's happening under the ice.
When satellites measure the height of the WAIS, scientists calculate ice thickness by subtracting the height of the earth beneath it. They must take into account whether the bedrock is rising or falling. Ice weighs down the bedrock, but as the ice melts, the earth slowly rebounds.
Gravity measurements, too, rely on knowledge of the bedrock. As the crust under Antarctica rises, the mantle layer below it flows in to fill the gap. That mass change must be subtracted from Gravity Recovery and Climate Experiment (GRACE) satellite measurements in order to isolate gravity changes caused by the thickening or thinning of the ice.
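A minimal sketch of that bookkeeping, with hypothetical numbers (the actual GRACE processing works with spherical-harmonic gravity fields and detailed rebound models, none of which is shown here):

```python
# The satellites sense total mass change over a region; the ice signal is
# what remains once the modeled (or GPS-constrained) bedrock-rebound mass
# change is subtracted.
def ice_mass_change(observed_gt_per_yr, rebound_gt_per_yr):
    """All terms in gigatonnes per year over the same region."""
    return observed_gt_per_yr - rebound_gt_per_yr

# Hypothetical example: an apparent -100 Gt/yr combined with +30 Gt/yr of
# mantle inflow from rebound actually implies -130 Gt/yr of ice loss.
print(ice_mass_change(-100.0, 30.0), "Gt/yr of ice change")
```

This is why getting the uplift term wrong, as the older models apparently did, feeds directly into over- or under-estimates of ice loss.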
Before POLENET and its more spatially limited predecessors, scientists had few direct measurements of the bedrock. They had to rely on computer models, which now appear to be incorrect.
"When you compare how fast the earth is rising, and where, to the models of where ice is being lost and how much is lost -- they don't match," Wilson said. "There are places where the models predict no crustal uplift, where we see several millimeters of uplift per year. We even have evidence of other places sinking, which is not predicted by any of the models."
A few millimeters may sound like a small change, but it's actually quite large, she explained. Crustal uplift in parts of North America is measured on the scale of millimeters per year.
POLENET's GPS sensors measure how much the crust is rising or falling, while the seismic sensors measure the stiffness of the bedrock -- a key factor for predicting how much the bedrock will rise in the future.
"We're pinning down both parts of this problem, which will improve the correction made to the satellite data, which will in turn improve what we know about whether we're gaining ice or losing ice," Wilson said. Better estimates of sea level rise can then follow.
POLENET scientists have been implanting sensors in Antarctica since December 2007. The network will be complete in 2010 and will record data into 2012. Selected sites may remain as a permanent Antarctic observational network.
Source: Ohio State University
Ancient Magma 'Superpiles' May Have Shaped The Continents
Two giant plumes of hot rock deep within the earth are linked to the plate motions that shape the continents, researchers have found.
The two superplumes, one beneath Hawaii and the other beneath Africa, have likely existed for at least 200 million years, explained Wendy Panero, assistant professor of earth sciences at Ohio State University.
The giant plumes -- or "superpiles" as Panero calls them -- rise from the bottom of Earth's mantle, just above our planet's core. Each is larger than the continental United States. And each is surrounded by a wall of plates from Earth's crust that have sunk into the mantle.
She and her colleagues reported their findings at the American Geophysical Union meeting in San Francisco.
Computer models have connected the piles to the sunken former plates, but it's currently unclear which one spawned the other, Panero said. Plates sink into the mantle as part of the normal processes that shape the continents. But which came first, the piles or the plates, the researchers simply do not know.
"Do these superpiles organize plate motions, or do plate motions organize the superpiles? I don't know if it's truly a chicken-or-egg kind of question, but the locations of the two piles do seem to be related to where the continents are today, and where the last supercontinent would have been 200 million years ago," she said.
That supercontinent was Pangea, and its breakup eventually led to the seven continents we know today.
Scientists first proposed the existence of the superpiles more than a decade ago. Earthquakes offer an opportunity to study them, since they slow the seismic waves that pass through them. Scientists combine the seismic data with what they know about Earth's interior to create computer models and learn more.
But to date, the seismic images have created a mystery: they suggest that the superpiles have remained in the same locations, unchanged for hundreds of millions of years.
"That's a problem," Panero said. "We know that the rest of the mantle is always moving. So why are the piles still there?"
Hot rock constantly migrates from the base of the mantle up to the crust, she explained. Hot portions of the mantle rise, and cool portions fall. Continental plates emerge, then sink back into the earth.
But the presence of the superpiles and the location of subducted plates suggest that the two superpiles have likely remained fixed to the Earth's core while the rest of the mantle has churned around them for millions of years.
Unlocking this mystery is the goal of the Cooperative Institute for Deep Earth Research (CIDER) collaboration, a group of researchers from across the United States who are attempting to unite many different disciplines in the study of Earth's interior.
Panero provides CIDER her expertise in mineral physics; others specialize in geodynamics, geomagnetism, seismology, and geochemistry. Together, they have assembled a new model that suggests why the two superpiles are so stable, and what they are made of.
As it turns out, just a tiny difference in chemical composition can keep the superpiles in place, they found.
The superpiles contain slightly more iron than the rest of the mantle; their composition likely consists of 11-13 percent iron instead of 10-12 percent. But that small change is enough to make the superpiles denser than their surroundings.
"Material that is more dense is going to sink to the base of the mantle," Panero said. "It would normally spread out at that point, but in this case we have subducting plates that are coming down from above and keeping the piles contained."
CIDER will continue to explore the link between the superpiles and the plates that surround them. The researchers will also work to explain the relationship between the superpiles and other mantle plumes that rise above them, which feed hotspots such as those beneath Hawaii and mid-ocean ridges. Ultimately, they hope to determine whether the superpiles may have contributed to the breakup of Pangea.
Provided by Ohio State University
Coastal bluffs reveal secrets of past
By Dave Schwab - La Jolla Light
It's a favored surf spot off La Jolla's shoreline today, but millions of years ago it was a volcanic "hot spot."
"It" is the stretch of beach from Scripps Pier north to Torrey Pines that has a very special geology.
"It's a vertical, volcanic intrusion," noted Thomas A. Demere, Ph.D., curator of paleontology at the San Diego Natural History Museum. "Distinctively black basaltic rocks deposited there, right out in the surf zone, are 10 to 12 million years old."
Demere added this remnant volcanic formation lies just beneath the cliff bluffs where the National Marine Fisheries Service Science Center on UCSD's campus sits. At low tide, standing on the beach in that area looking south toward La Jolla, the linear nature of that volcanic deposit is obvious.
"It's really quite striking," Demere added, "quite different from the light brown sandstones that compose the cliffs."
Geologic "sleuths" like Demere are piecing together the geologic riddle of San Diego's paleontological history. Evidence buried in, or uncovered by, natural erosion reveals a past topography much different than today, when an ancient oceanic crustal tectonic plate created an archipelago of volcanic islands producing massive volumes of magma that later congealed into rock.
Also recorded in the historical record of coastal San Diego are periods of higher rainfall and subtropical climates that supported coastal rain forests with exotic plants and animals. With the coming and going of worldwide ice ages, San Diego's coastline endured periods of "drowning," as well as widespread earthquake faulting.
La Jolla's downtown Village has its own unique geologic pedigree, Demere said.
"La Jolla is built on a series of sea floors that are related to climatic fluctuations over the last 120,000 years," he said. "Scripps Park down by the Cove on that nice broad, flat surface is a sea floor 85,000 years old. The flat surface on Prospect Street, the central portion of La Jolla Village, is another sea floor 120,000 years old."
Terraced sea floors like those in La Jolla are the consequence of ice ages and intervening periods of global warming, in roughly 100,000-year cycles that caused wide discrepancies in sea levels.
"The peak of the last ice age, 18,000 years ago, sea level was up to 400 feet lower than it is today," noted Demere.
Natural wave action led to the carving out of platforms resulting in the current topography.
Did Earth's Twin Cores Spark Plate Tectonics?
Michael Reilly, Discovery News
Jan. 6, 2009 -- It's a classic image from every youngster's science textbook: a cutaway image of Earth's interior. The brown crust is paper-thin; the warm mantle orange, the seething liquid of the outer core yellow, and at the center the inner core, a ball of solid, red-hot iron.
Now a new theory aims to rewrite it all by proposing the seemingly impossible: Earth has not one but two inner cores.
The idea stems from an ancient, cataclysmic collision that scientists believe occurred when a Mars-sized object hit Earth about 4.45 billion years ago. The young Earth was still so hot that it was mostly molten, and debris flung from the impact is thought to have formed the moon.
Haluk Cetin and Fugen Ozkirim of Murray State University think the core of the Mars-sized object may have been left behind inside Earth, and that it sank down near the original inner core. There the two may still remain, either separate or as conjoined twins, locked in a tight orbit.
Their case is largely circumstantial and speculative, Cetin admitted.
"We have no solid evidence yet, and we're not saying 100 percent that it still exists," he said. "The interior of Earth is a very hard place to study."
The ancient collision is a widely accepted phenomenon. But most scientists believe the incredible pressure at the center of the planet would've long since pushed the two cores into each other.
Still, the inner core is a mysterious place. Recently, scientists discovered that it rotates faster than the rest of the planet. And a study last year of how seismic waves propagate through the iron showed that the core is split into two distinct regions.
Beyond that, little is known. But Cetin and Ozkirim think a dual inner core can explain the rise of plate tectonics, and help explain why the planet remains hotter today than it should be, given its size.
"If this is true, it would change all Earth models as we know them," Cetin said. "If not, and these two cores coalesced early on, we would have less to say, but it could still be how plate tectonics got started."
Based on models of Earth's interior, Cetin thinks the two cores rotate in opposite directions, like the wheels of a pasta maker. Their motion would suck in magma from behind and spit it out in front. If this motion persisted for long enough, it could set up a giant current of circulation that would push plates of crust apart in front, and suck them down into the mantle in back.
Friction generated by the motion would keep the planet hot.
Scientists asked to comment on this hypothesis were extremely skeptical. Some asked not to be quoted, citing insufficient evidence to make a well-reasoned critique of the study, which the authors presented last month at the fall meeting of the American Geophysical Union in San Francisco.
"In terms of its volume, and even its mass, the Earth's inner core is quite small relative to the whole planet, about 1 percent," Paul Richards of Columbia University said. "I seriously doubt that inner core dynamics could play a significant role in moving the tectonic plates."
Two rare meteorites found in Antarctica two years ago are from a previously unknown, ancient asteroid with an outer layer or crust similar in composition to the crust of Earth's continents, reports a research team primarily composed of geochemists from the University of Maryland.
Published in the January 8 issue of the journal Nature, this is the first ever finding of material from an asteroid with a crust like Earth's. The discovery also represents the oldest example of rock with this composition ever found.
These meteorites point "to previously unrecognized diversity" of materials formed early in the history of the Solar System, write authors James Day, Richard Ash, Jeremy Bellucci, William McDonough and Richard Walker of the University of Maryland; Yang Liu and Lawrence Taylor of the University of Tennessee and Douglas Rumble III of the Carnegie Institution for Science.
"What is most unusual about these rocks is that they have compositions similar to Earth's andesite continental crust -- what the rock beneath our feet is made of," said first author Day, who is a research scientist in Maryland's department of geology. "No meteorites like this have ever been seen before."
Day explained that his team focused their investigations on how such different Solar System bodies could have crusts with such similar compositions. "We show that this occurred because of limited melting of the asteroid, and thus illustrate that the formation of andesite crust has occurred in our solar system by processes other than plate tectonics, which is the generally accepted process that created the crust of Earth."
The two meteorites (numbered GRA 06128 and GRA 06129) were discovered in the Graves Nunatak Icefield during the US Antarctic Search for Meteorites (ANSMET) 2006/2007 field season. Day and his colleagues immediately recognized that these meteorites were unusual because of elevated contents of a light-colored feldspar mineral called oligoclase. "Our age results point to these rocks being over 4.52 billion years old and that they formed during the birth of the Solar System. Combined with the oxygen isotope data, this age points to their origin from an asteroid rather than a planet," he said.
There are a number of asteroids in the asteroid belt that may have properties like the GRA 06128 and GRA 06129 meteorites including the asteroid (2867) Steins, which was studied by the European Space Agency's Rosetta spacecraft during a flyby this past September. These so-called E-type asteroids reflect the Sun's light very brightly, as would be predicted for a body with a crust made of feldspar.
According to Day and his colleagues, finding pieces of meteorites with andesite compositions is important because they not only point to a previously unrecognized diversity of Solar System materials, but also to a new mechanism to generate andesite crust. On the present-day Earth, this occurs dominantly through plates colliding and subduction - where one plate slides beneath another. Subduction forces water back into the mantle aiding melting and generating arc volcanoes, such as the Pacific Rim of Fire - in this way new crust is formed.
"Our studies of the GRA meteorites suggest similar crust compositions may be formed via melting of materials in planets that are initially volatile- and possibly water-rich, like the Earth probably was when if first formed" said Day." A major uncertainty is how evolved crust formed in the early Solar System and these meteorites are a piece in the puzzle to understanding these processes."
Note: This story has been adapted from a news release issued by the University of Maryland
Talk about deep, dark secrets. Rare "ultra-deep" diamonds are valuable - not because they look good twinkling on a newlywed's finger - but because of what they can tell us about conditions far below the Earth's crust.
Now a find of these unusual gems in Australia has provided new clues to how they were formed.
The diamonds, which are white and a few millimetres across, were found by a mineral exploration company just outside the village of Eurelia, some 300 kilometres north of Adelaide, in southern Australia. From there, they were sent to Ralf Tappert, a diamond expert at the University of Adelaide.
Tappert and colleagues say minerals found trapped inside the Eurelia diamonds could only have formed more than 670 kilometres (416 miles) beneath the surface of the Earth - a distance greater than that between Boston and Washington, DC.
Clues from the deep
"The vast majority of diamonds worldwide form at depths between 150 km and 250 km, within the mantle roots of ancient continental plates," says Tappert. "These diamonds formed in the Earth's lower mantle at depths greater than 670 km, which is much deeper than 'normal' diamonds."
Fewer than a dozen ultra-deep diamonds have been found in various corners of the globe since the 1990s. Sites range from Canada and Brazil to Africa - and now Australia.
"Deep diamonds are important because they are the only natural samples that we have from the lower mantle," says Catherine McCammon, a geologist at the University of Bayreuth in Germany. "This makes them an invaluable set of samples - much like the lunar rocks are to our studies of the moon."
The Eurelia gems contain information about the carbon they were made from. Their heavy carbon isotope signatures suggest the carbon was once contained in marine carbonates lying on the ocean floor.
Location, though, provides researchers with a common thread for the Brazilian, African and Australian deep diamonds, which could explain how they were born. All six groups of diamonds were found in areas that would once have lined the edge of the ancient supercontinent Gondwana.
"Deep diamonds have always been treated like oddball diamonds," says Tappert. "We don't really know what their origin is. With the discovery of the ones in Australia we start to get a pattern."
Their geographic spread suggests that all these ultra-deep diamonds were formed in the same way: as the oceanic crust dived down beneath Gondwana - a process known as subduction - it would have dragged carbon down to the lower mantle, transforming it into graphite and then diamond along the way.
Eventually, kimberlites - volcanic rocks named after the town of Kimberley in South Africa - are propelled to the surface during rapid eruptions, bringing the gems up to the surface.
According to John Ludden of the British Geological Survey, if the theory were proven true, it would mean the Eurelia diamonds are much younger than most diamonds are thought to be.
"Many of the world's diamonds are thought to have been sampled from subducted crust in the very early Earth, 3 billion years ago," says Ludden.
Yet Tappert's theory suggests these diamonds would have been formed about 300 million years ago. "This may well result in a revision of exploration models for kimberlites and the diamonds they host, as to date exploration has focused on very old rock units of the early Earth," Ludden told New Scientist.
McCammon says Tappert's theory is "plausible" but just "one among possible models". She says not all deep diamonds fit the Gondwana model, but adds that the new gems "provide a concrete idea that can be tested by others in the community".
Journal reference: Geology (vol 37, p 43)
ScienceDaily (Feb. 28, 2009) — The argument over whether an outcrop of rock in South West Greenland contains the earliest known traces of life on Earth has been reignited, in a study published in the Journal of the Geological Society. The research, led by Martin J. Whitehouse at the Swedish Museum of Natural History, argues that the controversial rocks "cannot host evidence of Earth’s oldest life," reopening the debate over where the oldest traces of life are located.
The small island of Akilia has long been the centre of attention for scientists looking for early evidence of life. Research carried out in 1996 argued that a five metre wide outcrop of rock on the island contained graphite with depleted levels of 13C. Carbon isotopes are frequently used to search for evidence of early life, because the lightest form of carbon, 12C (atomic weight 12), is preferred in biological processes as it requires less energy to be used by organisms. This results in heavier forms, such as 13C, being less concentrated, which might account for the depleted levels found in the rocks at Akilia.
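As a rough illustration of how such depletion is quantified, isotope labs normally report it as a delta value relative to a reference standard. The sketch below is a minimal example, not taken from the study; the standard ratio is an approximate VPDB value and the measured ratio is a hypothetical number chosen only to show a depleted sample.

```python
# Minimal sketch: expressing 13C depletion as a delta value (per mil)
# relative to a reference standard. The standard ratio is an approximate
# VPDB value and the measured ratio is hypothetical.

R_STANDARD = 0.011180  # approximate 13C/12C of the VPDB reference

def delta_13c(r_sample, r_standard=R_STANDARD):
    """Return delta-13C in per mil for a measured 13C/12C ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# A hypothetical graphite sample depleted in 13C relative to the standard:
print(round(delta_13c(0.010880), 1))  # about -26.8 per mil
```

Strongly negative values of this kind are what is usually read as a possible biological signature, which is why the age of the host rocks matters so much.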
Crucial to the dating of these traces was analysing the cross-cutting intrusions made by igneous rocks into the outcrop. Whatever is cross-cut must be older than the intruding rocks, so obtaining a date for the intrusive rock was vital. When these were claimed to be at least 3.85 billion years old, it seemed that Akilia did indeed hold evidence of the oldest traces of life on Earth.
Since then, many critics have cast doubt on the findings. Over billions of years, the rocks have undergone countless changes to their structure, being folded, distorted, heated and compressed to such an extent that their mineral composition is very different now to what it was originally. The dating of the intrusive rock has also been questioned. Nevertheless, in July 2006, an international team of scientists, led by Craig E. Manning at UCLA, published a paper claiming that they had proved conclusively that the traces of life were older than 3.8 billion years, after having mapped the area extensively. They argued that the rocks formed part of a volcanic stratigraphy, with igneous intrusions, using the cross-cutting relationships between the rocks as an important part of their theory.
The new research, led by Martin J. Whitehouse at the Swedish Museum of Natural History and Nordic Center for Earth Evolution, casts doubt on this interpretation. The researchers present new evidence demonstrating that the cross-cutting relationships are instead caused by tectonic activity, and represent a deformed fault or unconformity. If so, the age of the intrusive rock is irrelevant to the dating of the graphite, and it could well be older. Because of this, the scientists turned their attention to dating the graphite-containing rocks themselves, and found no evidence that they are any older than c. 3.67 billion years.
"The rocks of Akilia provide no evidence that life existed at or before c. 3.82 Ga, or indeed before 3.67 Ga," they conclude.
The age of the Earth itself is around 4.5 billion years. If life complex enough to have the ability to fractionate carbon were to exist at 3.8 billion years, this would suggest life originated even earlier. The Hadean eon, 3.8 – 4.5 billion years ago, is thought to have been an environment extremely hostile to life. In addition to surviving this period, such early life would have had to contend with the ‘Late Heavy Bombardment’ between 3.8 and 4.1 billion years ago, when a large number of impact craters on the Moon suggest that both the Earth and the Moon underwent significant bombardment, probably by collision with asteroids.
M J Whitehouse, J S Myers & C M Fedo. The Akilia Controversy: field, structural and geochronological evidence questions interpretations of >3.8 Ga life in SW Greenland. Journal of the Geological Society, 2009; 166 (2): 335-348 DOI: 10.1144/0016-76492008-070
Adapted from materials provided by Geological Society of London, via AlphaGalileo.
ScienceDaily (Mar. 8, 2009) — A Monash geoscientist and a team of international researchers have discovered the existence of an ocean floor that was destroyed 50 to 20 million years ago, proving that New Caledonia and New Zealand were once geographically connected.
Using new computer modelling programs Wouter Schellart and the team reconstructed the prehistoric cataclysm that took place when a tectonic plate between Australia and New Zealand was subducted 1100 kilometres into the Earth's interior and at the same time formed a long chain of volcanic islands at the surface.
Mr Schellart conducted the research, published in the journal Earth and Planetary Science Letters, in collaboration with Brian Kennett from ANU (Canberra) and Wim Spakman and Maisha Amaru from Utrecht University in the Netherlands.
"Until now many geologists have only looked at New Caledonia and New Zealand separately and didn't see a connection, Mr Schellart said.
"In our new reconstruction, which looked at a much larger region including eastern Australia, New Zealand, Fiji, Vanuatu, New Caledonia and New Guinea, we saw a large number of similarities between New Caledonia and northern New Zealand in terms of geology, structure, volcanism and timing of geological events.
"We then searched deep within the Earth for proof of a connection and found the evidence 1100 km below the Tasman Sea in the form of a subducted tectonic plate.
"We combined reconstructions of the tectonic plates that cover the Earth's surface with seismic tomography, a technique that allows one to look deep into the Earth's interior using seismic waves that travel through the Earth's interior to map different regions.
"We are now able to say a tectonic plate about 70 km thick, some 2500 km long and 700 km wide was subducted into the Earth's interior.
"The discovery means there was a geographical connection between New Caledonia and New Zealand between 50 and 20 million years ago by a long chain of volcanic islands. This could be important for the migration of certain plant and animal species at that time," Mr Schellart said.
Mr Schellart said the new discovery defuses the debate about whether the continents and micro-continents in the Southwest Pacific have been completely separated since 100 million years ago and helps to explain some of the mysteries surrounding evolution in the region.
"As geologists present more data, and computer modelling programs become more hi-tech, it is likely we will learn more about our Earth's history and the processes of evolution."
Washington DC (SPX) Mar 26, 2009
Earth's crust melts easier than previously thought, scientists have discovered. In a paper published in this week's issue of the journal Nature, geologists report results of a study of how well rocks conduct heat at different temperatures.
They found that as rocks get hotter in Earth's crust, they become better insulators and poorer conductors.
The findings provide insights into how magmas are formed, the scientists say, and will lead to better models of continental collision and the formation of mountain belts.
"These results shed important light on a geologic question: how large bodies of granite magma can be formed in Earth's crust," said Sonia Esperanca, a program director in the National Science Foundation (NSF)'s Division of Earth Sciences, which funded the research.
"In the presence of external heat sources, rocks heat up more efficiently than previously thought," said geologist Alan Whittington of the University of Missouri.
"We applied our findings to computer models that predict what happens to rocks when they get buried and heat up in mountain belts, such as the Himalayas today or the Black Hills in South Dakota in the geologic past.
"We found that strain heating, caused by tectonic movements during mountain belt formation, easily triggers crustal melting."
In the study, the researchers used a laser-based technique to determine how long it took heat to conduct through different rock samples. In all their samples, thermal diffusivity, or how well a material conducts heat, decreased rapidly with increasing temperatures.
The thermal diffusivity of hot rocks and magmas was half that of what had been previously assumed.
"Most crustal melting on Earth comes from intrusions of hot basaltic magma from the Earth's mantle," said Peter Nabelek, also a geologist at the University of Missouri. "The problem is that during continental collisions, we don't see intrusions of basaltic magma into continental crust."
These experiments suggest that because of low thermal diffusivity, strain heating is much faster and more efficient. Once rocks get heated, they stay hotter for much longer, Nabelek said.
The processes take millions of years to happen, and scientists can only simulate them on a computer. The new data will allow them to create computer models that more accurately represent processes that occur during continental collisions.
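To see why lower thermal diffusivity matters, a toy one-dimensional diffusion calculation is enough. The sketch below is not the authors' model; it simply compares, for two assumed diffusivity values, how much of an initially hot layer's temperature survives after the same elapsed time. All dimensions, temperatures and times are illustrative.

```python
import numpy as np

def peak_after(kappa, t_total=3.0e9, nx=201, dx=5.0):
    """Peak temperature left in an initially hot layer after t_total seconds,
    diffusing with constant diffusivity kappa (m^2/s)."""
    dt = 0.2 * dx * dx / kappa          # explicit-scheme stability margin
    T = np.zeros(nx)
    T[90:111] = 800.0                   # hot layer (degrees C), illustrative
    for _ in range(int(t_total / dt)):
        # forward-time, centred-space update of the 1D heat equation
        T[1:-1] += kappa * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T.max()

# Halving the diffusivity (hotter rocks insulate better) leaves the
# layer markedly hotter after the same elapsed time.
print(peak_after(1.0e-6), peak_after(0.5e-6))
```

In this toy setup the lower-diffusivity case stays noticeably hotter, which is the qualitative behaviour the researchers invoke when arguing that strain heating can accumulate enough to melt thickened crust.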
This is a partial article only; follow the link to read the complete transcript:
GARY ANDERSON was not around to see a backhoe tear up the buffalo grass at his ranch near Akron, Colorado. But he was watching a few weeks later when the technicians came to dump instruments and insulation into their 2-metre-deep hole.
What they left behind didn't look like much: an anonymous mound of dirt and, a few paces away, a spindly metal framework supporting a solar panel. All Anderson knew was that he was helping to host some kind of science experiment. It wouldn't be any trouble, he'd been told, and it wouldn't disturb the cattle. After a couple of years the people who installed it would come and take it away again.
He had in fact become part of what is probably the most ambitious seismological project ever conducted. Its name is USArray and its aim is to run what amounts to an ultrasound scan over the 48 contiguous states of the US. Through the seismic shudders and murmurs that rack Earth's innards, it will build up an unprecedented 3D picture of what lies beneath North America.
It is a mammoth undertaking, during which USArray's scanner - a set of 400 transportable seismometers - will sweep all the way from the Pacific to the Atlantic. Having started off in California in 2004, it is now just east of the Rockies, covering a north-south swathe stretching from Montana's border with Canada down past El Paso on the Texas-Mexico border. By 2013, it should have reached the north-east coast, and its mission end.
Though not yet at the halfway stage, the project is already bringing the rocky underbelly of the US into unprecedented focus. Geologists are using this rich source of information to gain new understanding of the continent's tumultuous past - and what its future holds.
For something so fundamental, our idea of what lies beneath our feet is sketchy at best. It is only half a century since geologists firmed up the now standard theory of plate tectonics. This is the notion that Earth's uppermost layers are segmented like a jigsaw puzzle whose pieces - vast "plates" carrying whole continents or chunks of ocean - are constantly on the move. Where two plates collide, we now know, one often dives beneath the other. That process, known as subduction, can create forces strong enough to build up spectacular mountain ranges such as the still-growing Andes in South America or the Rocky mountains of the western US and Canada.
In the heat and pressure of the mantle beneath Earth's surface, the subducted rock deforms and slowly flows, circulating on timescales of millions of years. Eventually, it can force its way back to the surface, prising apart two plates at another tectonic weak point. The mid-Atlantic ridge, at the eastern edge of the North American plate, is a classic example of this process in action.
What we don't yet know is exactly what happens to the rock during its tour of Earth's interior. How does its path deep underground relate to features we can see on the surface? Is the diving of plates a smoothly flowing process or a messy, bitty, stop-start affair?
USArray will allow geologists to poke around under the hood, inspecting Earth's internal workings right down to where the mantle touches the iron-rich core 2900 kilometres below the surface - and perhaps even further down. "It is our version of the Hubble Space Telescope. With it, we'll be able to view Earth in a fundamentally different way," says Matthew Fouch, a geophysicist at Arizona State University in Tempe.
College Park MD (SPX) May 11, 2009
An international team of geologists may have uncovered the answer to an age-old question - an ice-age-old question, that is. It appears that Earth's earliest ice age may have been due to the rise of oxygen in Earth's atmosphere, which consumed atmospheric greenhouse gases and chilled the earth.
Scientists from the University of Maryland, including post-doctoral fellows Boswell Wing and Sang-Tae Kim, graduate student Margaret Baker, and professors Alan J. Kaufman and James Farquhar, along with colleagues in Germany, South Africa, Canada and the United States, uncovered evidence that the oxygenation of Earth's atmosphere - generally known as the Great Oxygenation Event - coincided with the first widespread ice age on the planet.
"We can now put our hands on the rock library that preserves evidence of irreversible atmospheric change," said Kaufman. "This singular event had a profound effect on the climate, and also on life."
Using sulfur isotopes to determine the oxygen content of ~2.3 billion year-old rocks in the Transvaal Supergroup in South Africa, they found evidence of a sudden increase in atmospheric oxygen that broadly coincided with physical evidence of glacial debris, and geochemical evidence of a new world-order for the carbon cycle.
"The sulfur isotope change we recorded coincided with the first known anomaly in the carbon cycle. This may have resulted from the diversification of photosynthetic life that produced the oxygen that changed the atmosphere," Kaufman said.
Two and a half billion years ago, before the Earth's atmosphere contained appreciable oxygen, photosynthetic bacteria gave off oxygen that first likely oxygenated the surface of the ocean, and only later the atmosphere.
The first formed oxygen reacted with iron in the oceans, creating iron oxides that settled to the ocean floor in sediments called banded iron-formations - layered deposits of red-brown rock that accumulated in ocean basins worldwide. Later, once the iron was used up, oxygen escaped from the oceans and started filling up the atmosphere.
Once oxygen made it into the atmosphere, the scientists suggest that it reacted with methane, a powerful greenhouse gas, to form carbon dioxide, which is 62 times less effective at warming the surface of the planet. "With less warming potential, surface temperatures may have plummeted, resulting in globe-encompassing glaciers and sea ice," said Kaufman.
In addition to its effect on climate, the rise in oxygen stimulated the rise in stratospheric ozone, our global sunscreen. This gas layer, which lies between 12 and 30 miles above the surface, decreased the amount of damaging ultraviolet sunrays reaching the oceans, allowing photosynthetic organisms that previously lived deeper down to move up to the surface, and hence increase their output of oxygen, further building up stratospheric ozone.
"New oxygen in the atmosphere would also have stimulated weathering processes, delivering more nutrients to the seas, and may have also pushed biological evolution towards eukaryotes, which require free oxygen for important biosynthetic pathways," said Kaufman.
The result of the Great Oxidation Event, according to Kaufman and his colleagues, was a complete transformation of Earth's atmosphere, of its climate, and of the life that populated its surface. The study is published in the May issue of Geology.
Panama, Panama (SPX) May 19, 2009
The geologic faults responsible for the rise of the eastern Andes mountains in Colombia became active 25 million years ago-18 million years before the previously accepted start date for the Andes' rise, according to researchers at the Smithsonian Tropical Research Institute in Panama, the University of Potsdam in Germany and Ecopetrol in Colombia.
"No one had ever dated mountain-building events in the eastern range of the Colombian Andes," said Mauricio Parra, a former doctoral candidate at the University of Potsdam (now a postdoctoral fellow with the University of Texas) and lead author.
"This eastern sector of America's backbone turned out to be far more ancient here than in the central Andes, where the eastern ranges probably began to form only about 10 million years ago."
The team integrated new geologic maps that illustrate tectonic thrusting and faulting, information about the origins and movements of sediments and the location and age of plant pollen in the sediments, as well as zircon-fission track analysis to provide an unusually thorough description of basin and range formation.
As mountain ranges rise, rainfall and erosion wash minerals like zircon from rocks of volcanic origin into adjacent basins, where they accumulate to form sedimentary rocks. Zircon contains traces of uranium. As the uranium decays, trails of radiation damage accumulate in the zircon crystals.
At high temperatures, fission tracks disappear like the mark of a knife disappears from a soft block of butter. By counting the microscopic fission tracks in zircon minerals, researchers can tell how long ago sediments formed and how deeply they were buried.
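For readers unfamiliar with the method, the age falls out of a standard zeta-calibration equation relating spontaneous to induced track densities. The sketch below uses the textbook form of that equation with entirely hypothetical counts and calibration values; it is an illustration of the technique, not the team's data.

```python
import math

# Minimal sketch of the zeta-calibration fission-track age equation.
# All input values below are hypothetical and for illustration only.

LAMBDA_D = 1.55125e-10   # total decay constant of 238U (per year)

def ft_age(rho_s, rho_i, rho_d, zeta, g=0.5):
    """Fission-track age in years.
    rho_s, rho_i : spontaneous and induced track densities (tracks/cm^2)
    rho_d        : track density measured on the dosimeter glass
    zeta         : analyst's zeta calibration factor
    g            : geometry factor (0.5 for the external-detector method)
    """
    return (1.0 / LAMBDA_D) * math.log(
        1.0 + LAMBDA_D * zeta * g * rho_d * rho_s / rho_i)

# Hypothetical counts giving an age of roughly 70 million years.
print(ft_age(rho_s=1.2e6, rho_i=3.0e6, rho_d=1.0e6, zeta=350.0) / 1e6, "Myr")
```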
Classification of nearly 17,000 pollen grains made it possible to clearly delimit the age of sedimentary layers.
The use of these complementary techniques led the team to postulate that the rapid advance of a sinking wedge of material as part of tectonic events 31 million years ago may have set the stage for the subsequent rise of the range.
"The date that mountain building began is critical to those of us who want to understand the movement of ancient animals and plants across the landscape and to engineers looking for oil and gas," said Carlos Jaramillo, staff scientist from STRI. "We are still trying to put together a big tectonic jigsaw puzzle to figure out how this part of the world formed
Tempe AZ (SPX) May 28, 2009
There are very few places in the world where dynamic activity taking place beneath Earth's surface goes undetected. Volcanoes, earthquakes, and even the sudden uplifting or sinking of the ground are all visible results of restlessness far below, but according to research by Arizona State University (ASU) seismologists, dynamic activity deep beneath us isn't always expressed on the surface.
The Great Basin in the western United States is a desert region largely devoid of major surface changes. The area consists of small mountain ranges separated by valleys and includes most of Nevada, the western half of Utah and portions of other nearby states.
For tens of millions of years, the Great Basin has been undergoing extension--the stretching of Earth's crust.
While studying the extension of the region, geologist John West of ASU was surprised to find that something unusual existed beneath this area's surface.
West and colleagues found that portions of the lithosphere--the crust and uppermost mantle of the Earth--had sunk into the more fluid upper mantle beneath the Great Basin and formed a large cylindrical blob of cold material far below the surface of central Nevada.
It was an extremely unexpected finding in a location that showed no corresponding changes in surface topography or volcanic activity, West says.
West compared his unusual results of the area with tomography models--CAT scans of the inside of Earth--done by geologist Jeff Roth, also of ASU. West and Roth are graduate students; working with their advisor, Matthew Fouch, the team concluded that they had found a lithospheric drip.
Results of their research, funded by the National Science Foundation (NSF), were published in the May 24 issue of the journal Nature Geoscience.
"The results provide important insights into fine-scale mantle convection processes, and their possible connections with volcanism and mountain-building on Earth's surface," said Greg Anderson, program director in NSF's Division of Earth Sciences.
A lithospheric drip can be envisioned as honey dripping off a spoon, where an initial lithospheric blob is followed by a long tail of material.
When a small, high-density mass is embedded near the base of the crust and the area is warmed up, the high-density piece will be heavier than the area around it and it will start sinking. As it drops, material in the lithosphere starts flowing into the newly created conduit.
Seismic images of mantle structure beneath the region provided additional evidence, showing a large cylindrical mass 100 km wide and at least 500 km tall (about 60 by 300 miles).
"As a general rule, I have been anti-drip since my early days as a scientist," admits Fouch. "The idea of a lithospheric drip has been used many times over the years to explain things like volcanism, surface uplift, surface subsidence, but you could never really confirm it--and until now no one has caught a drip in the act, so to speak."
Originally, the team didn't think any visible signs appeared on the surface.
"We wondered how you could have something like a drip that is drawing material into its center when the surface of the whole area is stretching apart," says Fouch.
"But it turns out that there is an area right above the drip, in fact the only area in the Great Basin, that is currently undergoing contraction. John's finding of a drip is therefore informing geologists to develop a new paradigm of Great Basin evolution."
Scientists have known about the contraction for some time, but have been arguing about its cause.
As a drip forms, surrounding material is drawn in behind it; this means that the surface should be contracting toward the center of the basin. Since contraction is an expected consequence of a drip, a lithospheric drip could well be the answer to what is being observed in the Great Basin.
"Many in the scientific community thought it couldn't be a drip because there wasn't any elevation change or surface manifestation, and a drip has historically always been connected with major surface changes," says West.
"But those features aren't required to have the drip. Under certain conditions, like in the Great Basin, drips can form with little or no corresponding changes in surface topography or volcanic activity."
All the numerical models computed by the team suggest that the drip isn't going to cause things to sink down or pop up quickly, or cause lots of earthquakes.
There would likely be little or no impact on the people living above the drip. The team believes that the drip is a transient process that started some 15-20 million years ago, and probably recently detached from the overlying plate.
"This finding would not have been possible without the incredible wealth of seismic data captured by EarthScope's Transportable Array (TA) as it moved across the western United States," says West.
"We had access to data from a few long-term stations in the region, but the excellent data and 75-km grid spacing of the TA is what made these results possible."
This is a great example "of science in action," says Fouch.
"We went in not expecting to find this. Instead, we came up with a hypothesis that was not what anyone had proposed previously for the area, and then we tested the hypothesis with as many different types of data as we could find.
"In all cases so far it has held up. We're excited to see how this discovery plays a role in the development of new ideas about the geologic history of the western U.S."
Washington DC (SPX) Jul 29, 2009
A new analysis of jade found along the Motagua fault that bisects Guatemala is underscoring the fact that this region has a more complex geologic history than previously thought.
Because jade and other associated metamorphic rocks are found on both sides of the fault, and because the jade to the north is younger by about 60 million years, a team of geologists posits in a new research paper that the North American and Caribbean plates have done more than simply slide past each other: they have collided. Twice.
"Now we understand what has happened in Guatemala, geologically," says one of the authors, Hannes Brueckner, Professor of Geology at Queens College, City University of New York. "Our new research is filling in information about plate tectonics for an area of the world that needed sorting."
Jade is a cultural term for two rare metamorphic rocks known as jadeitite (as discussed in the current research) and nephrite that are both extremely tough and have been used as tools and talismans throughout the world. The jadeitite (or jadeite jade) is a sort of scar tissue from some collisions between Earth's plates.
As ocean crust is pushed under another block, or subducted, pressure increases with only modest rise in temperature, squeezing and drying the rocks without melting them. Jade precipitates from fluids flowing up the subduction channel and into the chilled, overlying mantle that becomes serpentinite.
The serpentinite assemblage, which includes jade and has a relatively low density, can be uplifted during subsequent continental collisions and extruded along the band of the collision boundary, such as those found in the Alps, California, Iran, Russia, and other parts of the world.
The Motagua fault is one of three subparallel left-lateral strike-slip faults (with horizontal motion) in Guatemala and forms the boundary between the North American and Caribbean tectonic plates.
In an earlier paper, the team of authors found evidence of two different collisions by dating mica found in collisional rocks (including jade) from the North American side of the fault to about 70 million years ago and from the southern side (or the Caribbean plate) to between 120 and 130 million years ago.
But mica dates can be "reset" by subsequent heating. Now, the authors have turned to eclogite, a metamorphic rock that forms from ocean floor basalt in the subduction channel. Eclogite dates are rarely reset, and the authors found that eclogite from both sides of the Motagua dates to roughly 130 million years old.
The disparate dating of rocks along the Motagua can be explained by the following scenario: a collision 130 million years ago created a serpentinite belt that was subsequently sliced into segments.
Then, after plate movement changed direction about 100 million years ago, a second collision between one of these slices and the North American plate reset the mica clocks in jadeitite found on the northern side of the fault to 70 million years. Finally, plate motion in the last 70 million years juxtaposed the southern serpentinites with the northern serpentinites, which explains why there are collisional remnants on both sides of the Motagua.
"All serpentinites along the fault line formed at the same time, but the northern assemblage was re-metamorphosed at about 70 million year ago. There are two collision events recorded in the rocks observed today, one event on the southern side and two on the northern," explains author George Harlow, Curator in the Division of Earth and Planetary Sciences at the American Museum of Natural History. "Motion between plates is usually not a single motion-it is a series of motions.
Rich Ore Deposits Linked To Ancient Atmosphere
Washington DC (SPX) Nov 27, 2009
Much of our planet's mineral wealth was deposited billions of years ago when Earth's chemical cycles were different from today's. Using geochemical clues from rocks nearly 3 billion years old, a group of scientists including Andrey Bekker and Doug Rumble from the Carnegie Institution have made the surprising discovery that the creation of economically important nickel ore deposits was linked to sulfur in the ancient oxygen-poor atmosphere.
These ancient ores - specifically iron-nickel sulfide deposits - yield 10% of the world's annual nickel production. They formed for the most part between two and three billion years ago when hot magmas erupted on the ocean floor. Yet scientists have puzzled over the origin of the rich deposits. The ore minerals require sulfur to form, but neither seawater nor the magmas hosting the ores were thought to be rich enough in sulfur for this to happen.
"These nickel deposits have sulfur in them arising from an atmospheric cycle in ancient times. The isotopic signal is of an anoxic atmosphere," says Rumble of Carnegie's Geophysical Laboratory, a co-author of the paper appearing in the November 20 issue of Science.
Rumble, with lead author Andrey Bekker (formerly Carnegie Fellow and now at the University of Manitoba), and four other colleagues used advanced geochemical techniques to analyze rock samples from major ore deposits in Australia and Canada. They found that to help produce the ancient deposits, sulfur atoms made a complicated journey from volcanic eruptions, to the atmosphere, to seawater, to hot springs on the ocean floor, and finally to molten, ore-producing magmas.
The key evidence came from a form of sulfur known as sulfur-33, an isotope in which atoms contain one more neutron than "normal" sulfur (sulfur-32). Both isotopes act the same in most chemical reactions, but reactions in the atmosphere in which sulfur dioxide gas molecules are split by ultraviolet light (UV) rays cause the isotopes to be sorted or "fractionated" into different reaction products, creating isotopic anomalies.
"If there is too much oxygen in the atmosphere then not enough UV gets through and these reactions can't happen," says Rumble. "So if you find these sulfur isotope anomalies in rocks of a certain age, you have information about the oxygen level in the atmosphere."
By linking the rich nickel ores with the ancient atmosphere, the anomalies in the rock samples also answer the long-standing question regarding the source of the sulfur in the ore minerals. Knowing this will help geologists track down new ore deposits, says Rumble, because the presence of sulfur and other chemical factors determine whether or not a deposit will form.
"Ore deposits are a tiny fraction of a percent of the Earth's surface, yet economically they are incredibly important.
Corvallis OR (SPX) Feb 02, 2010
Researchers have discovered that some of the most fundamental assumptions about how water moves through soil in a seasonally dry climate such as the Pacific Northwest are incorrect - and that a century of research based on those assumptions will have to be reconsidered.
A new study by scientists from Oregon State University and the Environmental Protection Agency showed - much to the surprise of the researchers - that soil clings tenaciously to the first precipitation after a dry summer, and holds it so tightly that it almost never mixes with other water.
The finding is so significant, researchers said, that they aren't even sure yet what it may mean. But it could affect our understanding of how pollutants move through soils, how nutrients get transported from soils to streams, how streams function and even how vegetation might respond to climate change.
The research was just published online in Nature Geoscience, a professional journal.
"Water in mountains such as the Cascade Range of Oregon and Washington basically exists in two separate worlds," said Jeff McDonnell, an OSU distinguished professor and holder of the Richardson Chair in Watershed Science in the OSU College of Forestry. "We used to believe that when new precipitation entered the soil, it mixed well with other water and eventually moved to streams. We just found out that isn't true."
"This could have enormous implications for our understanding of watershed function," he said. "It challenges about 100 years of conventional thinking."
What actually happens, the study showed, is that the small pores around plant roots fill with water that gets held there until it's eventually used up in plant transpiration back to the atmosphere. Then new water becomes available with the return of fall rains, replenishes these small localized reservoirs near the plants and repeats the process. But all the other water moving through larger pores is essentially separate and almost never intermingles with that used by plants during the dry summer.
The study found in one test, for instance, that after the first large rainstorm in October, only 4 percent of the precipitation entering the soil ended up in the stream - 96 percent was taken up and held tightly by soil around plants to recharge soil moisture.
A month later when soil moisture was fully recharged, 55 percent of precipitation went directly into streams. And as winter rains continue to pour moisture into the ground, almost all of the water that originally recharged the soil around plants remains held tightly in the soil - it never moves or mixes.
"This tells us that we have a less complete understanding of how water moves through soils, and is affected by them, than we thought we did," said Renee Brooks, a research plant physiologist with the EPA and courtesy faculty in the OSU Department of Forest Ecosystems and Society.
"Our mathematical models of ecosystem function are based on certain assumptions about biological processes," Brooks said. "This changes some of those assumptions. Among the implications is that we may have to reconsider how other things move through soils that we are interested in, such as nutrients or pollutants."
The new findings were made possible by advances in the speed and efficiency of stable isotope analyses of water, which allowed scientists to essentially "fingerprint" water and tell where it came from and where it moved to. Never before was it possible to make so many isotopic measurements and get a better view of water origin and movement, the researchers said.
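In practice, the "fingerprinting" amounts to a two-component mixing calculation: if stored soil water and new rain carry distinct isotopic signatures, the fraction of new water in streamflow can be backed out from the stream's own signature. The sketch below shows the idea; the delta-18O values are hypothetical, chosen only so the outputs echo the 4 percent and 55 percent figures quoted above.

```python
# Minimal two-component isotope mixing sketch: estimate what fraction
# of streamflow is "new" rain versus stored soil/ground water from
# their stable-isotope signatures. All delta values are hypothetical.

def new_water_fraction(d_stream, d_old, d_new):
    """Fraction of streamflow contributed by new precipitation."""
    return (d_stream - d_old) / (d_new - d_old)

# Hypothetical delta-18O values (per mil) for stored water and new rain.
d_old, d_new = -11.0, -6.0
print(round(new_water_fraction(-10.8, d_old, d_new), 2))   # early storm: ~4% new water
print(round(new_water_fraction(-8.25, d_old, d_new), 2))   # after recharge: ~55% new water
```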
The study also points out the incredible ability of plants to take up water that is so tightly bound to the soil, with forces nothing else in nature can match.
Earth's robust magnetic field protects the planet and its inhabitants from the full brunt of the solar wind, a torrent of charged particles that on less shielded planets such as Venus and Mars has over the ages stripped away water reserves and degraded their upper atmospheres. Unraveling the timeline for the emergence of that magnetic field and the mechanism that generates it—a dynamo of convective fluid in Earth's outer core—can help constrain the early history of the planet, including the interplay of geologic, atmospheric and astronomical processes that rendered the world habitable.
An interdisciplinary study published in the March 5 Science attempts to do just that, presenting evidence that Earth had a dynamo-generated magnetic field as early as 3.45 billion years ago, just a billion or so years after the planet had formed. The new research pushes back the record of Earth's magnetic field by at least 200 million years; a related group had presented similar evidence from slightly younger rocks in 2007, arguing for a strong terrestrial magnetic field 3.2 billion years ago.
University of Rochester geophysicist John Tarduno and his colleagues analyzed rocks from the Kaapvaal Craton, a region near the southern tip of Africa that hosts relatively pristine early Archean crust. (The Archean eon began about 3.8 billion years ago and ended 2.5 billion years ago.)
In 2009 Tarduno's group had found that some of the rocks were magnetized 3.45 billion years ago—roughly coinciding with the direct evidence for Earth's first life, at 3.5 billion years ago. But an external source for the magnetism—such as a blast from the solar wind—could not be ruled out. Venus, for instance, which lacks a strong internal magnetic field of its own, does have a feeble external magnetic field induced by the impact of the solar wind into the planet's dense atmosphere.
The new study examines the magnetic field strength required to imprint magnetism on the Kaapvaal rocks; it concludes that the field was 50 percent to 70 percent of its present strength. That value is many times greater than would be expected for an external magnetic field, such as the weak Venusian field, supporting the presence of an inner-Earth dynamo at that time.
With the added constraints on the early magnetic field, the researchers were able to extrapolate how well that field could keep the solar wind at bay. The group found that the early Archean magnetopause, the boundary in space where the magnetic field meets the solar wind, was about 30,000 kilometers or less from Earth. The magnetopause is about twice that distance today but can shift in response to extreme energetic outbursts from the sun. "Those steady-state conditions three and a half billion years ago are similar to what we see during severe solar storms today," Tarduno says. With the magnetopause so close to Earth, the planet would not have been totally shielded from the solar wind and may have lost much of its water early on, the researchers say.
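The standoff estimate comes from a pressure-balance argument of the Chapman-Ferraro type: the magnetopause sits roughly where the dipole's magnetic pressure equals the solar wind's ram pressure, so the distance scales only weakly, as the one-sixth power of field strength squared over ram pressure. The sketch below is a scaling exercise with assumed inputs; in particular, the factor-of-ten increase in ancient ram pressure is a hypothetical number, not a value from the paper.

```python
# Minimal Chapman-Ferraro-style sketch: the magnetopause standoff
# distance follows from balancing dipole magnetic pressure against
# solar-wind ram pressure, giving r ~ (B^2 / P_ram)^(1/6).
# All numbers are illustrative, not the study's values.

R_E = 6371e3  # Earth radius, metres

def standoff(b_surface_rel, p_ram_rel, r_today=10.0):
    """Standoff distance in Earth radii, scaled from an assumed present-day
    value of r_today, for a surface field strength and solar-wind ram
    pressure given as ratios relative to today's."""
    return r_today * (b_surface_rel ** 2 / p_ram_rel) ** (1.0 / 6.0)

# Field at 60% of present strength and a hypothetical tenfold increase
# in solar-wind ram pressure from the more active young sun:
r = standoff(0.6, 10.0)
print(round(r, 1), "R_E, roughly", round(r * R_E / 1e3 / 1000), "thousand km")
```

The weak one-sixth-power dependence is why even a halved field only pulls the boundary in modestly unless the early solar wind was also considerably stronger.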
Clues for finding habitable exoplanets
As researchers redouble their efforts to find the first truly Earth-like planet outside the solar system, Tarduno says the relationship between stellar wind, atmospheres and magnetic fields should come into play when modeling a planet's potential habitability. "This is clearly a variable to think about when looking at exoplanets," he says, adding that a magnetic field's impact on a planet's water budget seems particularly important.
One scientist in the field agrees that the results are plausible but has some lingering questions. "I think the work that Tarduno and his co-authors are doing is really exciting," says Peter Selkin, a geologist at the University of Washington Tacoma. "There's a lot of potential to use the tools that they've developed to look at rocks that are much older than anybody has been able to do paleomagnetism on before."
But he notes that even the relatively pristine rocks of the Kaapvaal Craton have undergone low-grade mineralogical and temperature changes over billions of years. "They're not exactly in the state they were in initially," Selkin says, "and that's exactly what has made a lot of paleomagnetists stay away from rocks like these." Selkin credits Tarduno and his co-authors for doing all they can to show that the magnetized samples have been minimally altered, but he would like to see more petrologic and mineralogical analysis. "I think that there are still things that we need to know about the minerals that Tarduno and his co-authors used in this study in order to be able to completely buy the results," he says.
David Dunlop, a geophysicist at the University of Toronto, is more convinced, calling the work a "very careful demonstration." The field strengths, he says, "can be assigned quite confidently" to the time interval 3.4 billion to 3.45 billion years ago. "It would be exciting to push back the curtain shadowing [the] onset of the geodynamo still further, but this seems unlikely," Dunlop says. "Nowhere else has nature been so kind in preserving nearly pristine magnetic remanence carriers."
Geologists have found evidence that sea ice extended to the equator 716.5 million years ago, bringing new precision to a "snowball Earth" event long suspected to have taken place around that time.
Funded by the National Science Foundation (NSF) and led by scientists at Harvard University, the team reports on its work this week in the journal Science.
The new findings--based on an analysis of ancient tropical rocks that are now found in remote northwestern Canada--bolster the theory that our planet has, at times in the past, been ice-covered at all latitudes.
"This is the first time that the Sturtian glaciation has been shown to have occurred at tropical latitudes, providing direct evidence that this particular glaciation was a 'snowball Earth' event," says lead author Francis Macdonald, a geologist at Harvard University.
"Our data also suggest that the Sturtian glaciation lasted a minimum of five million years."
According to Enriqueta Barrera, program director in NSF's Division of Earth Sciences, which supported the research, the Sturtian glaciation, along with the Marinoan glaciation right after it, are the greatest ice ages known to have taken place on Earth. "Ice may have covered the entire planet then," says Barrera, "turning it into a 'snowball Earth.'"
The survival of eukaryotes--life forms other than microbes such as bacteria--throughout this period suggests that sunlight and surface water remained available somewhere on Earth's surface. The earliest animals arose at roughly the same time.
Even in a snowball Earth, Macdonald says, there would be temperature gradients, and it is likely that sea ice would be dynamic: flowing, thinning and forming local patches of open water, providing refuge for life.
"The fossil record suggests that all of the major eukaryotic groups, with the possible exception of animals, existed before the Sturtian glaciation," Macdonald says. "The questions that arise from this are: If a snowball Earth existed, how did these eukaryotes survive? Did the Sturtian snowball Earth stimulate evolution and the origin of animals?"
"From an evolutionary perspective," he adds, "it's not always a bad thing for life on Earth to face severe stress."
The rocks Macdonald and his colleagues analyzed in Canada's Yukon Territory showed glacial deposits and other signs of glaciation, such as striated clasts, ice-rafted debris, and deformation of soft sediments.
The scientists were able to determine, based on the magnetism and composition of these rocks, that 716.5 million years ago the rocks were located at sea-level in the tropics, at about 10 degrees latitude.
"Climate modeling has long predicted that if sea ice were ever to develop within 30 degrees latitude of the equator, the whole ocean would rapidly freeze over," Macdonald says. "So our result implies quite strongly that ice would have been found at all latitudes during the Sturtian glaciation."
Scientists don't know exactly what caused this glaciation or what ended it, but Macdonald says its age of 716.5 million years closely matches the age of a large igneous province--made up of rocks formed by magma that has cooled--stretching more than 1,500 kilometers (932 miles) from Alaska to Ellesmere Island in far northeastern Canada.
This coincidence could mean the glaciation was either precipitated or terminated by volcanic activity.
A thousand years after the last ice age ended, the Northern Hemisphere was plunged back into glacial conditions. For 20 years, scientists have blamed a vast flood of meltwater for causing this 'Younger Dryas' cooling, 13,000 years ago. Picking through evidence from Canada's Mackenzie River, geologists now believe they have found traces of this flood, revealing that cold water from North America's dwindling ice sheet poured into the Arctic Ocean, from where it ultimately disrupted climate-warming currents in the Atlantic.
The researchers scoured tumbled boulders and gravel terraces along the Mackenzie River for signs of the meltwater's passage. The flood "would solve a big problem if it actually happened", says oceanographer Wally Broecker of Columbia University's Lamont-Doherty Earth Observatory in Palisades, New York, who was not part of the team.
The geologists present evidence confirming that the flood occurred (J. B. Murton et al. Nature 464, 740–743; 2010). But their findings raise questions about exactly how the flood chilled the planet. Many researchers thought the water would have poured down what is now the St Lawrence River into the North Atlantic Ocean, where the currents form a sensitive climate trigger. Instead, the Mackenzie River route would have funnelled the flood into the Arctic Ocean.
The Younger Dryas was named after the Arctic wild flower Dryas octopetala that spread across Scandinavia as the big chill set in. At its onset, temperatures in northern Europe suddenly dropped 10 °C or more in decades, and tundra replaced the forest that had been regaining its hold on the land. Broecker suggested in 1989 that the rapid climate shift was caused by a slowdown of surface currents in the Atlantic Ocean, which carry warm water north from the Equator to high latitudes (W. S. Broecker et al. Nature 341, 318-321; 1989). The currents are part of the 'thermohaline' ocean circulation, which is driven as the cold and salty — hence dense — waters of the far North Atlantic sink, drawing warmer surface waters north.
Broecker proposed that the circulation was disrupted by a surge of fresh water that overflowed from Lake Agassiz, a vast meltwater reservoir that had accumulated behind the retreating Laurentide Ice Sheet in the area of today's Great Lakes. The fresh water would have reduced the salinity of the surface waters, stopping them from sinking.
The theory is widely accepted. However, scientists never found geological evidence of the assumed flood pathway down the St Lawrence River into the North Atlantic, or along a possible alternative route southwards through the Mississippi basin. Now it is clear why: the flood did occur; it just took a different route.
The team, led by Julian Murton of the University of Sussex in Brighton, UK, dated sand, gravel and boulders from eroded surfaces in the Athabasca Valley and the Mackenzie River delta in northwestern Canada. The shapes of the geological features there suggest that the region had two major glacial outburst floods, the first of which coincides with the onset of the Younger Dryas. If the western margins of the Laurentide Ice Sheet lay just slightly east of their assumed location, several thousand cubic kilometres of water would have been able to flood into the Arctic Ocean.
"Geomorphic observations and chronology clearly indicate a northwestern flood route down the Mackenzie valley," says James Teller, a geologist at the University of Manitoba in Winnipeg, Canada, who took part in the study. But he thinks that the route raises questions about the climatic effects of the Lake Agassiz spill. "We're pretty sure that the water, had it flooded the northern Atlantic, would have been capable of slowing the thermohaline ocean circulation and produce the Younger Dryas cooling," he says. "The question is whether it could have done the same in the Arctic Ocean."
Broecker, however, says that the Arctic flood is just what his theory needed. He says that flood waters heading down the St Lawrence River might not have affected the thermohaline circulation anyway, because the sinking takes place far to the north, near Greenland. A pulse of fresh water into the Arctic, however, would ultimately have flowed into the North Atlantic and pulled the climate trigger there. "There's no way for that water to go out of the Arctic without going into the Atlantic," he says.
Santa Barbara, Calif. (UPI) Apr 6, 2010
A U.S. geologist says she's discovered a pattern that connects regular changes in the Earth's orbital cycle to changes in the planet's climate.
University of California-Santa Barbara Assistant Professor Lorraine Lisiecki performed her analysis of climate by examining ocean sediment cores taken from 57 locations around the world and linking that climate record to the history of the Earth's orbit.
The researchers said it's known the Earth's orbit around the sun changes shape every 100,000 years, becoming either more round or more elliptical. The shape of the orbit is known as its "eccentricity" and a related aspect is the 41,000-year cycle in the tilt of the Earth's axis.
Glaciation of the Earth also occurs every 100,000 years and Lisiecki found the timing of changes in climate and eccentricity coincided.
"The clear correlation between the timing of the change in orbit and the change in the Earth's climate is strong evidence of a link between the two," Lisiecki said. She also said she discovered the largest glacial cycles occurred during the weakest changes in the eccentricity of Earth's orbit -- and vice versa, with the stronger changes in orbit correlating to weaker changes in climate.
"This may mean that the Earth's climate has internal instability in addition to sensitivity to changes in the orbit," she said.
The research is reported in the journal Nature Geoscience.
An international team of scientists including Mark Williams and Jan Zalasiewicz of the Geology Department of the University of Leicester, and led by Dr. Thijs Vandenbroucke, formerly of Leicester and now at the University of Lille 1 (France), has reconstructed the Earth's climate belts of the late Ordovician Period, between 460 and 445 million years ago.
The findings have been published online in the Proceedings of the National Academy of Sciences -- and show that these ancient climate belts were surprisingly like those of the present.
The researchers state: "The world of the ancient past had been thought by scientists to differ from ours in many respects, including having carbon dioxide levels much higher -- over twenty times as high -- than those of the present. However, it is very hard to deduce carbon dioxide levels with any accuracy from such ancient rocks, and it was known that there was a paradox, for the late Ordovician was known to include a brief, intense glaciation -- something difficult to envisage in a world with high levels of greenhouse gases."
The team of scientists looked at the global distribution of common, but mysterious fossils called chitinozoans -- probably the egg-cases of extinct planktonic animals -- before and during this Ordovician glaciation. They found a pattern that revealed the position of ancient climate belts, including such features as the polar front, which separates cold polar waters from more temperate ones at lower latitudes. The position of these climate belts changed as the Earth entered the Ordovician glaciation -- but in a pattern very similar to that which happened in oceans much more recently, as they adjusted to the glacial and interglacial phases of our current (and ongoing) Ice Age.
This 'modern-looking' pattern suggests that those ancient carbon dioxide levels could not have been as high as previously thought, but were more modest, at about five times current levels (they would have had to be somewhat higher than today's, because the sun in those far-off times shone less brightly).
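The remark that the sun shone less brightly can be made semi-quantitative with a commonly used approximation for solar luminosity over time (Gough 1981). The sketch below applies it to the late Ordovician, roughly 450 million years ago; treat it as an order-of-magnitude illustration rather than a value from the study.

```python
# Minimal sketch of a standard approximation (Gough 1981) for how solar
# luminosity has increased over the sun's lifetime:
#   L(t) = L_present / (1 + 0.4 * (1 - t / t_sun))
# where t is the sun's age at the epoch of interest. Numbers are illustrative.

T_SUN = 4.57e9  # present age of the sun, years

def relative_luminosity(years_before_present):
    """Solar luminosity at a past epoch, as a fraction of today's."""
    t = T_SUN - years_before_present
    return 1.0 / (1.0 + 0.4 * (1.0 - t / T_SUN))

# Late Ordovician, ~450 million years ago: the sun was a few percent fainter.
print(round(relative_luminosity(4.5e8), 3))  # about 0.96
```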
"These ancient, but modern-looking oceans emphasise the stability of Earth's atmosphere and climate through deep time -- and show the current man-made rise in greenhouse gas levels to be an even more striking phenomenon than was thought," the researchers conclude.
Aug 19, 2010
Scientists have discovered a new window into the Earth's violent past. Geochemical evidence from volcanic rocks collected on Baffin Island in the Canadian Arctic suggests that beneath it lies a region of the Earth's mantle that has largely escaped the billions of years of melting and geological churning that has affected the rest of the planet.
Researchers believe the discovery offers clues to the early chemical evolution of the Earth.
The newly identified mantle "reservoir," as it is called, dates from just a few tens of millions of years after the Earth was first assembled from the collisions of smaller bodies. This reservoir likely represents the composition of the mantle shortly after formation of the core, but before the 4.5 billion years of crust formation and recycling modified the composition of most of the rest of Earth's interior.
"This was a key phase in the evolution of the Earth," says co-author Richard Carlson of the Carnegie Institution's Department of Terrestrial Magnetism. "It set the stage for everything that came after. Primitive mantle such as that we have identified would have been the ultimate source of all the magmas and all the different rock types we see on Earth today."
Carlson and lead author Matthew Jackson (a former Carnegie postdoctoral fellow, now at Boston University), with colleagues, targeted the Baffin Island rocks - using samples collected by coauthor Don Francis of McGill University - because a previous study of helium isotopes in these rocks showed them to have anomalously high ratios of helium-3 to helium-4. These rocks are the earliest expression of the mantle hotspot that now feeds volcanic eruptions on Iceland.
Helium-3 is generally extremely rare within the Earth; most of the mantle's supply has been outgassed by volcanic eruptions and lost to space over the planet's long geological history. In contrast, helium-4 has been constantly replenished within the Earth by the decay of radioactive uranium and thorium.
The high proportion of helium-3 suggests that the Baffin Island lavas came from a reservoir in the mantle that had never previously outgassed its original helium-3, implying that it had not been subjected to the extensive chemical differentiation experienced by most of the mantle.
The researchers confirmed this conclusion by analyzing the lead isotopes in the lava samples, which date the reservoir to between 4.55 and 4.45 billion years old. This age is only slightly younger than the Earth itself.
The early age of the mantle reservoir implies that it existed before melting of the mantle began to create the magmas that rose to form Earth's crust and before plate tectonics allowed that crust to be mixed back into the mantle.
Many researchers have assumed that before continental crust formed the mantle's chemistry was similar to that of meteorites called chondrites, but that the formation of continents altered its chemistry, causing it to become depleted in the elements, called incompatible elements, that are extracted with the magma when melting occurs in the mantle.
"Our results question this assumption," says Carlson. "They suggest that before continent extraction, the mantle already was depleted in incompatible elements compared to chondrites, perhaps because of an even earlier Earth differentiation event, or perhaps because the Earth originally formed from building blocks depleted in these elements."
Of the two possibilities, Carlson favors the early differentiation model, which would involve a global magma ocean on the newly-formed Earth. This magma ocean produced a crust that predated the crust that exists today.
"In our model, the original crust that formed by the solidification of the magma ocean was buoyantly unstable at Earth's surface because it was rich in iron," he says. "This instability caused it to sink to the base of the mantle, taking the incompatible elements with it, where it remains today."
Some of this deep material may have remained liquid despite the high pressures, and Carlson points out that seismological studies of the deep mantle reveal certain areas, one beneath the southern Pacific and another beneath Africa, that appear to be molten and possibly chemically different from the rest of the mantle.
"I'm holding out hope that these seismically imaged areas might be the compositional complement to the "depleted" primitive mantle that we sample in the Baffin Island lavas," he says
Computational scientists and geophysicists at the University of Texas at Austin and the California Institute of Technology (Caltech) have developed new computer algorithms that for the first time allow for the simultaneous modeling of the Earth's mantle flow, large-scale tectonic plate motions, and the behavior of individual fault zones, to produce an unprecedented view of plate tectonics and the forces that drive it.
A paper describing the whole-earth model and its underlying algorithms will be published in the August 27 issue of the journal Science and also featured on the cover.
The work "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Michael Gurnis, the John E. and Hazel S. Smits Professor of Geophysics, director of the Caltech Seismological Laboratory, and a coauthor of the Science paper.
To create the new model, computational scientists at Texas's Institute for Computational Engineering and Sciences (ICES) - a team that included Omar Ghattas, the John A. and Katherine G. Jackson Chair in Computational Geosciences and professor of geological sciences and mechanical engineering, and research associates Georg Stadler and Carsten Burstedde - pushed the envelope of a computational technique known as Adaptive Mesh Refinement (AMR).
Partial differential equations such as those describing mantle flow are solved by subdividing the region of interest (such as the mantle) into a computational grid. Ordinarily, the resolution is kept the same throughout the grid. However, many problems feature small-scale dynamics that are found only in limited regions.
"AMR methods adaptively create finer resolution only where it's needed," explains Ghattas. "This leads to huge reductions in the number of grid points, making possible simulations that were previously out of reach."
"The complexity of managing adaptivity among thousands of processors, however, has meant that current AMR algorithms have not scaled well on modern petascale supercomputers," he adds. Petascale computers are capable of one million billion operations per second. To overcome this long-standing problem, the group developed new algorithms that, Burstedde says, "allows for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today."
With the new algorithms, the scientists were able to simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. According to Stadler, the AMR algorithms reduced the size of the simulations by a factor of 5,000, permitting them to fit on fewer than 10,000 processors and run overnight on the Ranger supercomputer at the National Science Foundation (NSF)-supported Texas Advanced Computing Center.
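For readers unfamiliar with AMR, the toy sketch below shows the basic idea Ghattas describes: refine the grid only where the solution changes rapidly. It is a one-dimensional illustration with made-up data and a hypothetical `refine` helper, not the scalable algorithms from the Science paper.

```python
import numpy as np

def refine(x, values, threshold):
    """Insert a midpoint wherever neighbouring values differ sharply."""
    new_x = [x[0]]
    for i in range(len(x) - 1):
        if abs(values[i + 1] - values[i]) > threshold:
            new_x.append(0.5 * (x[i] + x[i + 1]))   # finer resolution only here
        new_x.append(x[i + 1])
    return np.array(new_x)

coarse = np.linspace(0.0, 1.0, 11)
field = np.tanh((coarse - 0.5) * 50)     # a sharp front near x = 0.5
fine = refine(coarse, field, threshold=0.5)
print(f"{coarse.size} coarse points -> {fine.size} adaptively refined points")
```

The production codes do this in three dimensions, recursively and in parallel, but the principle of concentrating grid points where the dynamics demand them is the same.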
A key to the model was the incorporation of data on a multitude of scales. "Many natural processes display a multitude of phenomena on a wide range of scales, from small to large," Gurnis explains.
For example, at the largest scale - that of the whole earth - the movement of the surface tectonic plates is a manifestation of a giant heat engine, driven by the convection of the mantle below. The boundaries between the plates, however, are composed of many hundreds to thousands of individual faults, which together constitute active fault zones.
"The individual fault zones play a critical role in how the whole planet works," he says, "and if you can't simulate the fault zones, you can't simulate plate movement" - and, in turn, you can't simulate the dynamics of the whole planet.
In the new model, the researchers were able to resolve the largest fault zones, creating a mesh with a resolution of about one kilometer near the plate boundaries.
Included in the simulation were seismological data as well as data pertaining to the temperature of the rocks, their density, and their viscosity - or how strong or weak the rocks are, which affects how easily they deform. That deformation is nonlinear - with simple changes producing unexpected and complex effects.
"Normally, when you hit a baseball with a bat, the properties of the bat don't change - it won't turn to Silly Putty. In the earth, the properties do change, which creates an exciting computational problem," says Gurnis. "If the system is too nonlinear, the earth becomes too mushy; if it's not nonlinear enough, plates won't move. We need to hit the 'sweet spot.'"
After crunching through the data for 100,000 hours of processing time per run, the model returned an estimate of the motion of both large tectonic plates and smaller microplates - including their speed and direction. The results were remarkably close to observed plate movements.
In fact, the investigators discovered that anomalous rapid motion of microplates emerged from the global simulations. "In the western Pacific," Gurnis says, "we have some of the most rapid tectonic motions seen anywhere on Earth, in a process called 'trench rollback.' For the first time, we found that these small-scale tectonic motions emerged from the global models, opening a new frontier in geophysics."
One surprising result from the model relates to the energy released from plates in earthquake zones. "It had been thought that the majority of energy associated with plate tectonics is released when plates bend, but it turns out that's much less important than previously thought," Gurnis says.
"Instead, we found that much of the energy dissipation occurs in the earth's deep interior. We never saw this when we looked on smaller scales."
ScienceDaily (Sep. 17, 2010) — Earth's mantle meets its core 2,900 kilometers beneath our feet, in a mysterious zone where the two mix. A team of geophysicists has just verified that partial melting of the mantle is possible in this region when the temperature reaches 4200 Kelvin. This reinforces the hypothesis of the presence of a deep magma ocean.
The originality of this work, carried out by scientists of the Institut de minéralogie et de physique des milieux condensés (UPMC/Université Paris Diderot/Institut de Physique du Globe/CNRS/IRD), lies in the use of X-ray diffraction at the European Synchrotron Radiation Facility in Grenoble (France). The results will advance our understanding of the dynamics, composition and formation of the depths of our planet.
On top of Earth's core, which consists of liquid iron, lies the solid mantle, made up essentially of oxides of magnesium, iron and silicon. The boundary between the core and the mantle, located 2,900 km below Earth's surface, is highly intriguing to geophysicists. With a pressure of around 1.4 million times atmospheric pressure and a temperature of more than 4000 Kelvin, this zone is home to chemical reactions and changes of state that are still unknown. Seismologists studying this region have documented an abrupt reduction in the speed of seismic waves - sometimes reaching 30% - close to this boundary. Over the last 15 years, this observation has led scientists to hypothesize partial melting of the Earth's mantle at the core-mantle boundary. Today, this hypothesis has been confirmed.
In order to probe the depths of our planet, scientists have not only seismological images but also a precious experimental technique: diamond anvil cells coupled with laser heating. This instrument allows scientists to re-create, on samples of a few microns, the same pressure and temperature conditions as those in Earth's interior. The researchers of the Institut de minéralogie et de physique des milieux condensés used this technique on natural samples representative of Earth's mantle, which were put under pressures of more than 140 gigapascals (1.4 million times atmospheric pressure) and temperatures of more than 5000 Kelvin.
A new element of this study is the use of X-ray diffraction at the European synchrotron (ESRF). This allowed the scientists to determine which mineral phases melt first, and to establish, without extrapolation, the melting curves of the deep mantle - that is, to characterize the passage from a solid state to a partially molten state. Their observations show that partial melting of the mantle is possible when the temperature approaches 4200 Kelvin. The experiments also show that the liquid produced by this partial melting is dense and can retain numerous chemical elements, among them important markers of the dynamics of Earth's mantle. These studies will allow geophysicists and geochemists to achieve a deeper understanding of the mechanisms of Earth's differentiation and of the history of its formation, which began around 4.5 billion years ago.
Seattle WA (SPX) Dec 06, 2010
For years, geologists have argued about the processes that formed steep inner gorges in the broad glacial valleys of the Swiss Alps.
The U-shaped valleys were created by slow-moving glaciers that behaved something like road graders, eroding the bedrock over hundreds or thousands of years. When the glaciers receded, rivers carved V-shaped notches, or inner gorges, into the floors of the glacial valleys. But scientists disagreed about whether those notches were erased by subsequent glaciers and then formed all over again as the second round of glaciers receded.
New research led by a University of Washington scientist indicates that the notches endure, at least in part, from one glacial episode to the next. The glaciers appear to fill the gorges with ice and rock, protecting them from being scoured away as the glaciers move.
When the glaciers receded, the resulting rivers returned to the gorges and easily cleared out the debris deposited there, said David Montgomery, a UW professor of Earth and space sciences.
"The alpine inner gorges appear to lay low and endure glacial attack. They are topographic survivors," Montgomery said.
"The answer is not so simple that the glaciers always win. The river valleys can hide under the glaciers and when the glaciers melt the rivers can go back to work."
Montgomery is lead author of a paper describing the research, published online Dec. 5 in Nature Geoscience. Co-author is Oliver Korup of the University of Potsdam in Germany, who did the work while with the Swiss Federal Research Institutes in Davos, Switzerland.
The researchers used topographic data taken from laser-based (LIDAR) measurements to determine that, if the gorges were erased with each glacial episode, the rivers would have had to erode the bedrock from one-third to three-quarters of an inch per year since the last glacial period to get gorges as deep as they are today.
"That is screamingly fast. It's really too fast for the processes," Montgomery said. Such erosion rates would exceed those in all areas of the world except the most tectonically active regions, the researchers said, and they would have to maintain those rates for 1,000 years.
Montgomery and Korup found other telltale evidence, sediment from much higher elevations and older than the last glacial deposits, at the bottom of the river gorges. That material likely was pushed into the gorges as glaciers moved down the valleys, indicating the gorges formed before the last glaciers.
"That means the glaciers aren't cutting down the bedrock as fast as the rivers do. If the glaciers were keeping up, each time they'd be able to erase the notch left by the river," Montgomery said.
"They're locked in this dance, working together to tear the mountains down."
The work raises questions about how common the preservation of gorges might be in other mountainous regions of the world.
"It shows that inner gorges can persist, and so the question is, 'How typical is that?' I don't think every inner gorge in the world survives multiple glaciations like that, but the Swiss Alps are a classic case. That's where mountain glaciation was first discovered."
I find this article symptomatic of LIDAR finding yet ANOTHER use. A very useful tool, really. The USGS office in Rolla, Missouri has several specialists who have provided LIDAR expertise on occasion, targeted at various ends, mainly answering questions of topographic veracity at scale.
But it's so much more.
It can be used to define slope so well, and define individual block sizes and shapes, that men need no longer go up and down on ropes to make the measurements to determine rockfall likelihood, bounce heights and velocities in the Colorado Rockfall Simulation Program, and even monitor changes in slope configuration.
I've seen it used for the mapping of underground mines.
Thanks for posting. Overall, a good article.
Berkeley CA (SPX) Dec 20, 2010
A University of California, Berkeley, geophysicist has made the first-ever measurement of the strength of the magnetic field inside Earth's core, 1,800 miles underground.
The magnetic field strength is 25 Gauss, or 50 times stronger than the magnetic field at the surface that makes compass needles align north-south. Though this number is in the middle of the range geophysicists predict, it puts constraints on the identity of the heat sources in the core that keep the internal dynamo running to maintain this magnetic field.
"This is the first really good number we've had based on observations, not inference," said author Bruce A. Buffett, professor of earth and planetary science at UC Berkeley. "The result is not controversial, but it does rule out a very weak magnetic field and argues against a very strong field."
A strong magnetic field inside the outer core means there is a lot of convection and thus a lot of heat being produced, which scientists would need to account for, Buffett said. The presumed sources of energy are the residual heat from 4 billion years ago when the planet was hot and molten, release of gravitational energy as heavy elements sink to the bottom of the liquid core, and radioactive decay of long-lived elements such as potassium, uranium and thorium.
A weak field - 5 Gauss, for example - would imply that little heat is being supplied by radioactive decay, while a strong field, on the order of 100 Gauss, would imply a large contribution from radioactive decay.
"A measurement of the magnetic field tells us what the energy requirements are and what the sources of heat are," Buffett said.
About 60 percent of the power generated inside the earth likely comes from the exclusion of light elements from the solid inner core as it freezes and grows, he said. This constantly builds up crud in the outer core.
The Earth's magnetic field is produced in the outer two-thirds of the planet's iron/nickel core. This outer core, about 1,400 miles thick, is liquid, while the inner core is a frozen iron and nickel wrecking ball with a radius of about 800 miles - roughly the size of the moon. The core is surrounded by a hot, gooey mantle and a rigid surface crust.
The cooling Earth originally captured its magnetic field from the planetary disk in which the solar system formed. That field would have disappeared within 10,000 years if not for the planet's internal dynamo, which regenerates the field thanks to heat produced inside the planet. The heat makes the liquid outer core boil, or "convect," and as the conducting metals rise and then sink through the existing magnetic field, they create electrical currents that maintain the magnetic field. This roiling dynamo produces a slowly shifting magnetic field at the surface.
"You get changes in the surface magnetic field that look a lot like gyres and flows in the oceans and the atmosphere, but these are being driven by fluid flow in the outer core," Buffett said.
Buffett is a theoretician who uses observations to improve computer models of the earth's internal dynamo. Now at work on a second generation model, he admits that a lack of information about conditions in the earth's interior has been a big hindrance to making accurate models.
He realized, however, that the tug of the moon on the tilt of the earth's spin axis could provide information about the magnetic field inside. This tug would make the inner core precess - that is, make the spin axis slowly rotate in the opposite direction - which would produce magnetic changes in the outer core that damp the precession. Radio observations of distant quasars - extremely bright, active galaxies - provide very precise measurements of the changes in the earth's rotation axis needed to calculate this damping.
"The moon is continually forcing the rotation axis of the core to precess, and we're looking at the response of the fluid outer core to the precession of the inner core," he said.
By calculating the effect of the moon on the spinning inner core, Buffett discovered that the precession makes the slightly out-of-round inner core generate shear waves in the liquid outer core. These waves of molten iron and nickel move within a tight cone only 30 to 40 meters thick, interacting with the magnetic field to produce an electric current that heats the liquid. This serves to damp the precession of the rotation axis. The damping causes the precession to lag behind the moon as it orbits the earth. A measurement of the lag allowed Buffett to calculate the magnitude of the damping and thus of the magnetic field inside the outer core.
Buffett noted that the calculated field - 25 Gauss - is an average over the entire outer core. The field is expected to vary with position.
"I still find it remarkable that we can look to distant quasars to get insights into the deep interior of our planet," Buffett said.
Palo Alto CA (SPX) Dec 20, 2010
To answer the big questions, it often helps to look at the smallest details. That is the approach Stanford mineral physicist Wendy Mao is taking to understanding a major event in Earth's inner history.
Using a new technique to scrutinize how minute amounts of iron and silicate minerals interact at ultra-high pressures and temperatures, she is gaining insight into the biggest transformation Earth has ever undergone - the separation of its rocky mantle from its iron-rich core approximately 4.5 billion years ago.
The technique, called high-pressure nanoscale X-ray computed tomography, is being developed at SLAC National Accelerator Laboratory. With it, Mao is getting unprecedented detail - in three-dimensional images - of changes in the texture and shape of molten iron and solid silicate minerals as they respond to the same intense pressures and temperatures found deep in the Earth.
Mao will present the results of the first few experiments with the technique at the annual meeting of the American Geophysical Union in San Francisco.
Tomography refers to the process that creates a three-dimensional image by combining a series of two-dimensional images, or cross-sections, through an object. A computer program interpolates between the images to flesh out a recreation of the object.
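As a rough illustration of that reconstruction step, the sketch below stacks a series of two-dimensional slices into a volume and interpolates along the depth axis. The arrays are random placeholders, not actual X-ray images from the diamond anvil cell experiments.

```python
import numpy as np
from scipy.interpolate import interp1d

slices = np.random.rand(10, 64, 64)        # 10 measured cross-sections, 64x64 pixels each
measured_z = np.linspace(0.0, 1.0, 10)     # depth at which each slice was taken

# Interpolate along the depth axis to fill in a finer three-dimensional volume.
interpolator = interp1d(measured_z, slices, axis=0, kind="linear")
fine_z = np.linspace(0.0, 1.0, 50)
volume = interpolator(fine_z)              # shape: (50, 64, 64)
print(volume.shape)
```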
Through experiments at SLAC's Stanford Synchrotron Radiation Lightsource and Argonne National Laboratory's Advanced Photon Source, researchers have developed a way to combine a diamond anvil cell, which compresses tiny samples between the tips of two diamonds, with nanoscale X-ray computed tomography to capture images of material at high pressure.
The pressures deep in the Earth are so high - millions of times atmospheric pressure - that only diamonds can exert the needed pressure without breaking under the force.
At present, the SLAC researchers and their collaborators from HPSync, the High Pressure Synergetic Consortium at Argonne's Advanced Photon Source, are the only group using this technique.
"It is pretty exciting, being able to measure the interactions of iron and silicate materials at very high pressures and temperatures, which you could not do before," said Mao, an assistant professor of geological and environmental sciences and of photon science.
"No one has ever imaged these sorts of changes at these very high pressures."
It is generally agreed that the initially homogenous ball of material that was the very early Earth had to be very hot in order to differentiate into the layered sphere we live on today. Since the crust and the layer underneath it, the mantle, are silicate-rich, rocky layers, while the core is iron-rich, it's clear that silicate and iron went in different directions at some point.
But how they separated out and squeezed past each other is not clear. Silicate minerals, which contain silica, make up about 90 percent of the crust of the Earth.
If the planet got hot enough to melt both materials, the difference in density would easily have sent iron to the bottom and silicates to the top.
If the temperature was not hot enough to melt silicates, it has been proposed that molten iron might have been able to move along the boundaries between grains of the solid silicate minerals.
"To prove that, though, you need to know whether the molten iron would tend to form small spheres or whether it would form channels," Mao said. "That would depend on the surface energy between the iron and silicate."
Previous experimental work has shown that at low pressure, iron forms isolated spheres, similar to the way water beads up on a waxed surface, Mao said, and spheres could not percolate through solid silicate material.
Mao said the results of her first high-pressure experiments using the tomography apparatus suggest that at high pressure, since the silicate transforms into a different structure, the interaction between the iron and silicate could be different than at low pressure.
"At high pressure, the iron takes a more elongate, platelet-like form," she said. That means the iron would spread out on the surface of the silicate minerals, connecting to form channels instead of remaining in isolated spheres.
"So it looks like you could get some percolation of iron at high pressure," Mao said. "If iron could do that, that would tell you something really significant about the thermal history of the Earth."
But she cautioned that she only has data from the initial experiments.
"We have some interesting results, but it is the kind of measurement that you need to repeat a couple times to make sure," Mao said.
A team of researchers from the University of Nevada, Reno and the University of Nevada, Las Vegas has devised a new model for how Nevada's gold deposits formed, which may help in exploration efforts for new gold deposits.
The deposits, known as Carlin-type gold deposits, are characterized by extremely fine-grained nanometer-sized particles of gold adhered to pyrite over large areas that can extend to great depths. More gold has been mined from Carlin-type deposits in Nevada in the last 50 years - more than $200 billion worth at today's gold prices - than was ever mined during the California gold rush of the 1800s.
This current Nevada gold boom started in 1961 with the discovery of the Carlin gold mine, near the town of Carlin, at a spot where the early westward-moving prospectors missed the gold because it was too fine-grained to be readily seen. Since the 1960s, geologists have found clusters of these "Carlin-type" deposits throughout northern Nevada. They constitute, after South Africa, the second largest concentration of gold on Earth. Despite their importance, geologists have argued for decades about how they formed.
"Carlin-type deposits are unique to Nevada in that they represent a perfect storm of Nevada's ideal geology - a tectonic trigger and magmatic processes, resulting in extremely efficient transport and deposition of gold," said John Muntean, a research economic geologist with the Nevada Bureau of Mines and Geology at the University of Nevada, Reno and previously an industry geologist who explored for gold in Nevada for many years.
"Understanding how these deposits formed is important because most of the deposits that cropped out at the surface have likely been found. Exploration is increasingly targeting deeper deposits. Such risky deep exploration requires expensive drilling.
"Our model for the formation of Carlin-type deposits may not directly result in new discoveries, but models for gold deposit formation play an important role in how companies explore by mitigating risk. Knowing how certain types of gold deposits form allows one to be more predictive by evaluating whether ore-forming processes operated in the right geologic settings. This could lead to identification of potential new areas of discovery."
Muntean collaborated with researchers from the University of Nevada, Las Vegas: Jean Cline, a professor of geology at UNLV and a leading authority on Carlin-type gold deposits; Adam Simon, an assistant professor of geoscience who provided new experimental data and his expertise on the interplay between magmas and ore deposits; and Tony Longo, a post-doctoral fellow who carried out detailed microanalyses of the ore minerals.
The team combined decades of previous studies by research and industry geologists with new data of their own to reach their conclusions, which were published Jan. 23 in the early online edition of the journal Nature Geoscience and will appear in the February print edition. The team relates formation of the gold deposits to a change in plate tectonics and a major magma event about 40 million years ago. It is the most complete explanation for Carlin-type gold deposits to date.
"Our model won't be the final word on Carlin-type deposits," Muntean said. "We hope it spurs new research in Nevada, especially by people who may not necessarily be ore deposit geologists."
The work was funded by grants from the National Science Foundation, the United States Geological Survey, Placer Dome Exploration and Barrick Gold Corporation.
In one of his songs Bob Dylan asks "How many years can a mountain exist before it is washed to the sea?", and thus poses an intriguing geological question for which an accurate answer is not easily provided. Mountain ranges are in a constant interplay between climatically controlled weathering processes on the one hand and the tectonic forces that cause folding and thrusting and thus thickening of the Earth's crust on the other hand.
While erosion eventually erases any geological obstacle, tectonic forces are responsible for piling up and lifting rocks and thus for forming spectacular mountain landscapes such as the European Alps.
In reality, climate, weathering and mountain uplift interact in a complex manner and quantifying rates for erosion and uplift, especially for the last couple of millions of years, remains a challenging task.
In a recent Geology paper Michael Meyer (University of Innsbruck) et al. report on ancient cave systems discovered near the summits of the Allgau Mountains (Austria) that preserved the oldest radiometrically dated dripstones currently known from the European Alps.
"These cave deposits formed ca. 2 million years ago and their geochemical signature and biological inclusions are vastly different from other cave calcites in the Alps" says Meyer, who works at the Institute of Geology and Paleontology at the University of Innsbruck, Austria.
By carefully analysing these dripstones and using an isotopic modelling approach, the authors were able to back-calculate both the depth of the cave and the altitude of the corresponding summit area at the time of calcite formation. Meyer et al. thus derived erosion and uplift rates for the northern rim of the Alps and - most critically - for a geological time period that is characterized by recurring ice ages and hence by intensive glacial erosion.
"Our results suggest that 2 million years ago the cave was situated ~1500 meters below its present altitude and the mountains were probably up to 500 meters lower compared to today", states Meyer. These altitudinal changes were significant and much of this uplift can probably be attributed to the gradual unloading of the Alps due to glacial erosion.
Dripstones have been used to reconstruct past climate and environmental change in a variety of ways. The study of Meyer et al. is novel, however, in that it highlights the potential of caves and their deposits to quantitatively constrain mountain evolution on a timescale of millions of years, and it further shows how the interplay of tectonic and climatic processes can be understood. Key to success is accurate age control provided by Uranium-Lead dating.
This method is commonly used to constrain the age of much older rocks and minerals but has only rarely been applied to dripstones - it works only for those with high Uranium concentrations - and luckily this is the case for the samples from the Allgau Mountains.
Geologists debate epoch to mark effects of Homo sapiens.
Humanity's profound impact on this planet is hard to deny, but is it big enough to merit its own geological epoch? This is the question facing geoscientists gathered in London this week to debate the validity and definition of the 'Anthropocene', a proposed new epoch characterized by human effects on the geological record.
"We are in the process of formalizing it," says Michael Ellis, head of the climate-change programme of the British Geological Survey in Nottingham, who coordinated the 11 May meeting. He and others hope that adopting the term will shift the thinking of policy-makers. "It should remind them of the global and significant impact that humans have," says Ellis.
But not everyone is behind the idea. "Some think it premature, perhaps hubristic, perhaps nonsensical," says Jan Zalasiewicz, a stratigrapher at the University of Leicester, UK, and a co-convener of the meeting. Zalasiewicz, who declares himself "officially very firmly sitting on the fence", also chairs a working group investigating the proposal for the International Commission on Stratigraphy (ICS) — the body that oversees designations of geological time.
The term Anthropocene was first coined in 2000 by Nobel laureate Paul Crutzen, now at the Max Planck Institute for Chemistry in Mainz, Germany, and his colleagues. It then began appearing in peer-reviewed papers as if it were a technical term rather than scientific slang.
The "evidence for the prosecution", as Zalasiewicz puts it, is compelling. Through food production and urbanization, humans have altered more than half of the planet's ice-free land mass [1] (see 'Transformation of the biosphere'), and are moving as much as an order of magnitude more rock and soil around than are natural processes [2]. Rising carbon dioxide levels in the atmosphere are expected to make the ocean 0.3–0.4 pH points more acidic by the end of this century. That will dissolve light-coloured carbonate shells and sea-floor rocks for about 1,000 years, leaving a dark band in the sea-floor sediment that will be obvious to future geologists. A similar dark stripe identifies the Palaeocene–Eocene Thermal Maximum about 55 million years ago, when global temperatures rose by some 6 °C in 20,000 years. A similar temperature jump could happen by 2100, according to some high-emissions scenarios [3].
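Because pH is a logarithmic scale, the 0.3-0.4 point acidification quoted above corresponds to a surprisingly large change in hydrogen-ion concentration; the quick check below makes that explicit.

```python
# pH is -log10 of hydrogen-ion concentration, so a drop of d pH units means
# the concentration rises by a factor of 10**d.
for drop in (0.3, 0.4):
    factor = 10 ** drop
    print(f"a {drop} pH drop -> H+ concentration rises by a factor of {factor:.1f}")
# roughly a doubling to a 2.5x increase in acidity
```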
The fossil record will show upheavals too. Some 20% of species living in large areas are now invasive, says Zalasiewicz. "Globally that's a completely novel change." And a review published in Nature in March [4] concluded that the disappearance of the species now listed as 'critically endangered' would qualify as a mass extinction on a level seen only five times in the past 540 million years — and all of those mark transitions between geological time periods.
Some at the ICS are wary of formalizing a new epoch. "My main concern is that those who promote it have not given it the careful scientific consideration and evaluation it needs," says Stan Finney, chair of the ICS and a geologist at California State University in Long Beach. He eschews the notion of focusing on the term simply to "generate publicity".
Others point out that an epoch typically lasts tens of millions of years. Our current epoch, the Holocene, began only 11,700 years ago. Declaring the start of a new epoch would compress the geological timeline to what some say is a ridiculous extent. Advocates of the Anthropocene, however, say that it is natural to divide recent history into smaller, more detailed chunks. A less controversial alternative would be to declare the Anthropocene a new 'age': a subdivision of an epoch.
If scientists can agree in principle that a new time division is justified, they will have to settle on a geological marker for its start. Some suggest the pollen of cultivated plants, arguing that mankind's fingerprint can be seen 5,000–10,000 years ago with the beginnings of agriculture. Others support the rise in the levels of greenhouse gases and air pollution in the latter part of the eighteenth century, as industrialization began. A third group would start with the flicker of radioactive isotopes in 1945, marking the invention of nuclear weapons.
Should the working group decide that the Anthropocene epoch has merit, it will go to an ICS vote. But the whole process will take time — defining other geological periods has sometimes taken decades. In the meantime, Zalasiewicz says, "the formalization is the excuse to try to do some very interesting science", comparing Earth's current changes to those of the past.
Leeds UK (SPX) May 23, 2011
The inner core of the Earth is simultaneously melting and freezing due to circulation of heat in the overlying rocky mantle, according to new research from the University of Leeds, UC San Diego and the Indian Institute of Technology.
The findings, published tomorrow in Nature, could help us understand how the inner core formed and how the outer core acts as a 'geodynamo', which generates the planet's magnetic field.
"The origins of Earth's magnetic field remain a mystery to scientists," said study co-author Dr Jon Mound from the University of Leeds. "We can't go and collect samples from the centre of the Earth, so we have to rely on surface measurements and computer models to tell us what's happening in the core."
"Our new model provides a fairly simple explanation to some of the measurements that have puzzled scientists for years. It suggests that the whole dynamics of the Earth's core are in some way linked to plate tectonics, which isn't at all obvious from surface observations.
"If our model is verified it's a big step towards understanding how the inner core formed, which in turn helps us understand how the core generates the Earth's magnetic field."
The Earth's inner core is a ball of solid iron about the size of our moon. This ball is surrounded by a highly dynamic outer core of a liquid iron-nickel alloy (and some other, lighter elements), a highly viscous mantle and a solid crust that forms the surface where we live.
Over billions of years, the Earth has cooled from the inside out causing the molten iron core to partly freeze and solidify. The inner core has subsequently been growing at the rate of around 1mm a year as iron crystals freeze and form a solid mass.
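Purely as an order-of-magnitude check, one can extrapolate the quoted growth rate across the roughly 800-mile radius mentioned earlier in this thread. A constant rate is an assumption the article does not make, so the result is only indicative.

```python
inner_core_radius_km = 800 * 1.609   # the ~800-mile radius cited earlier (assumed constant-rate growth)
growth_rate_km_per_myr = 1.0         # 1 mm per year equals 1 km per million years

age_myr = inner_core_radius_km / growth_rate_km_per_myr
print(f"constant-rate growth time: ~{age_myr:,.0f} million years")
# roughly 1.3 billion years - far less than the 4.5-billion-year age of the Earth
```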
The heat given off as the core cools flows from the core to the mantle to the Earth's crust through a process known as convection. Like a pan of water boiling on a stove, convection currents move warm mantle to the surface and send cool mantle back to the core. This escaping heat powers the geodynamo and coupled with the spinning of the Earth generates the magnetic field.
Scientists have recently begun to realise that the inner core may be melting as well as freezing, but there has been much debate about how this is possible when overall the deep Earth is cooling. Now the research team believes they have solved the mystery.
Using a computer model of convection in the outer core, together with seismology data, they show that heat flow at the core-mantle boundary varies depending on the structure of the overlying mantle. In some regions, this variation is large enough to force heat from the mantle back into the core, causing localised melting.
The model shows that beneath the seismically active regions around the Pacific 'Ring of Fire', where tectonic plates are undergoing subduction, the cold remnants of oceanic plates at the bottom of the mantle draw a lot of heat from the core. This extra mantle cooling generates down-streams of cold material that cross the outer core and freeze onto the inner core.
Conversely, in two large regions under Africa and the Pacific where the lowermost mantle is hotter than average, less heat flows out from the core. The outer core below these regions can become warm enough that it will start melting back the solid inner core.
Co-author Dr Binod Sreenivasan from the Indian Institute of Technology said: "If Earth's inner core is melting in places, it can make the dynamics near the inner core-outer core boundary more complex than previously thought.
"On the one hand, we have blobs of light material being constantly released from the boundary where pure iron crystallizes. On the other hand, melting would produce a layer of dense liquid above the boundary. Therefore, the blobs of light elements will rise through this layer before they stir the overlying outer core.
"Interestingly, not all dynamo models produce heat going into the inner core. So the possibility of inner core melting can also place a powerful constraint on the regime in which the Earth's dynamo operates."
Co-author Dr Sebastian Rost from the University of Leeds added: "The standard view has been that the inner core is freezing all over and growing out progressively, but it appears that there are regions where the core is actually melting. The net flow of heat from core to mantle ensures that there's still overall freezing of outer core material and it's still growing over time, but by no means is this a uniform process.
"Our model allows us to explain some seismic measurements which have shown that there is a dense layer of liquid surrounding the inner core. The localised melting theory could also explain other seismic observations, for example why seismic waves from earthquakes travel faster through some parts of the core than others."
Stanford CA (SPX) May 27, 2011
The magnitude 9 earthquake and resulting tsunami that struck Japan on March 11 were like a one-two punch - first violently shaking, then swamping the islands - causing tens of thousands of deaths and hundreds of billions of dollars in damage. Now Stanford researchers have discovered the catastrophe was caused by a sequence of unusual geologic events never before seen so clearly.
"It was not appreciated before this earthquake that this size of earthquake was possible on this plate boundary," said Stanford geophysicist Greg Beroza. "It was thought that typical earthquakes were much smaller."
The earthquake occurred in a subduction zone, where one great tectonic plate is being forced down under another tectonic plate and into the Earth's interior along an active fault.
The fault on which the Tohoku-Oki earthquake took place slopes down from the ocean floor toward the west. It first ruptured mainly westward from its epicenter - 32 kilometers (about 20 miles) below the seafloor - toward Japan, shaking the island of Honshu violently for 40 seconds.
Surprisingly, the fault then ruptured eastward from the epicenter, up toward the ocean floor along the sloping fault plane for about 30 or 35 seconds.
As the rupture neared the seafloor, the movement of the fault grew rapidly, violently deforming the seafloor sediments sitting on top of the fault plane, punching the overlying water upward and triggering the tsunami.
"When the rupture approached the seafloor, it exploded into tremendously large slip," said Beroza. "It displaced the seafloor dramatically.
"This amplification of slip near the surface was predicted in computer simulations of earthquake rupture, but this is the first time we have clearly seen it occur in a real earthquake.
"The depth of the water column there is also greater than elsewhere," Beroza said. "That, together with the slip being greatest where the fault meets the ocean floor, led to the tsunami being outlandishly big."
Beroza is one of the authors of a paper detailing the research, published online last week in Science Express.
"Now that this slip amplification has been observed in the Tohoku-Oki earthquake, what we need to figure out is whether similar earthquakes - and large tsunamis - could happen in other subduction zones around the world," he said.
Beroza said the sort of "two-faced" rupture seen in the Tohoku-Oki earthquake has not been seen in other subduction zones, but that could be a function of the limited amount of data available for analyzing other earthquakes.
There is a denser network of seismometers in Japan than any other place in the world, he said. The sensors provided researchers with much more detailed data than is normally available after an earthquake, enabling them to discern the different phases of the March 11 temblor with much greater resolution than usual.
Prior to the Tohoku-Oki earthquake, Beroza and Shuo Ma, who is now an assistant professor at San Diego State University, had been working on computer simulations of what might happen during an earthquake in just such a setting. Their simulations had generated similar "overshoot" of sediments overlying the upper part of the fault plane.
Following the Japanese earthquake, aftershocks as large as magnitude 6.5 slipped in the opposite direction to the main shock. This is a symptom of what is called "extreme dynamic overshoot" of the upper fault plane, Beroza said, with the overextended sediments on top of the fault plane slipping during the aftershocks back in the direction they came from.
"We didn't really expect this to happen because we believe there is friction acting on the fault" that would prevent any rebound, he said. "Our interpretation is that it slipped so much that it sort of overdid it. And in adjusting during the aftershock sequence, it went back a bit.
"We don't see these bizarre aftershocks on parts of the fault where the slip is less," he said.
The damage from the March 11 earthquake was so extensive in part simply because the earthquake was so large. But the way it ruptured on the fault plane, in two stages, made the devastation greater than it might have been otherwise, Beroza said.
The deeper part of the fault plane, which sloped downward to the west, was bounded by dense, hard rock on each side. The rock transmitted the seismic waves very efficiently, maximizing the amount of shaking felt on the island of Honshu.
The shallower part of the fault surface, which slopes upward to the east and surfaces at the Japan Trench - where the overlying plate is warped downward by the motion of the descending plate - had massive slip. Unfortunately, this slip was ideally situated to efficiently generate the gigantic tsunami, with devastating consequences.
Nuclear fission powers the movement of Earth's continents and crust, a consortium of physicists and other scientists is now reporting, confirming long-standing thinking on this topic. Using neutrino detectors in Japan and Italy—the Kamioka Liquid-Scintillator Antineutrino Detector (KamLAND) and the Borexino Detector—the scientists arrived at their conclusion by measuring the flow of the antithesis of these neutral particles as they emanate from our planet. Their results are detailed July 17 in Nature Geoscience. (Scientific American is part of the Nature Publishing Group.)
Neutrinos and antineutrinos, which travel through mass and space freely due to their lack of charge and other properties, are released by radioactive materials as they decay. And Earth is chock full of such radioactive elements—primarily uranium, thorium and potassium. Over the billions of years of Earth's existence, the radioactive isotopes have been splitting, releasing energy as well as these antineutrinos—just like in a man-made nuclear reactor. That energy heats the surrounding rock and keeps the elemental forces of plate tectonics in motion. By measuring the antineutrino emissions, scientists can determine how much of Earth's heat results from this radioactive decay.
How much heat? Roughly 20 terawatts of heat—or nearly twice as much energy as used by all of humanity at present—judging by the number of such antineutrino particles emanating from the planet, dubbed geoneutrinos by the scientists. Combined with the 4 terawatts from decaying potassium, it's enough energy to move mountains, or at least cause the collisions that create them.
The precision of the new measurements made by the KamLAND team was made possible by an extended shutdown of the Kashiwazaki-Kariwa nuclear reactor in Japan, following an earthquake there back in 2007. Particles released by the nearby plant would otherwise mix with naturally released geoneutrinos and confuse measurements; the closure of the plant allowed the two to be distinguished. The detector hides from cosmic rays—broadly similar to the neutrinos and antineutrinos it is designed to register—under Mount Ikenoyama nearby. The detector itself is a 13-meter-diameter balloon of transparent film filled with a mix of special liquid hydrocarbons, itself suspended in a bath of mineral oil contained in an 18-meter-diameter stainless steel sphere, covered on the inside with detector tubes. All that to capture the telltale mark of some 90 geoneutrinos over the course of seven years of measurements.
The new measurements suggest radioactive decay provides more than half of Earth's total heat, estimated at roughly 44 terawatts based on temperatures found at the bottom of deep boreholes into the planet's crust. The rest is leftover from Earth's formation or other causes yet unknown, according to the scientists involved. Some of that heat may have been trapped in Earth's molten iron core since the planet's formation, while the nuclear decay happens primarily in the crust and mantle. But with fission still pumping out so much heat, Earth is unlikely to cool—and thereby halt the collisions of continents—for hundreds of millions of years thanks to the long half-lives of some of these elements. And that means there's a lot of geothermal energy—or natural nuclear energy—to be harvested.
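Putting the article's rounded figures side by side shows where the "more than half" comes from; these are the quoted estimates, not new numbers.

```python
uranium_thorium_tw = 20.0   # heat inferred from the geoneutrino counts (quoted above)
potassium_tw = 4.0          # additional heat attributed to potassium decay
total_heat_tw = 44.0        # total surface heat flow estimated from boreholes

radiogenic_fraction = (uranium_thorium_tw + potassium_tw) / total_heat_tw
print(f"radiogenic share of Earth's heat: {radiogenic_fraction:.0%}")  # ~55%
```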
ScienceDaily (July 23, 2011) — Fool's gold is providing scientists with valuable insights into a turning point in Earth's evolution, which took place billions of years ago.
Scientists are recreating ancient forms of the mineral pyrite -- dubbed fool's gold for its metallic lustre -- that reveal details of past geological events.
Detailed analysis of the mineral is giving fresh insight into Earth before the Great Oxygenation Event, which took place 2.4 billion years ago. This was a time when oxygen released by early forms of bacteria gave rise to new forms of plant and animal life, transforming Earth's oceans and atmosphere.
Studying the composition of pyrite enables a geological snapshot of events at the time when it was formed. Studying the composition of different forms of iron in fool's gold gives scientists clues as to how conditions such as atmospheric oxygen influenced the processes forming the compound.
The latest research shows that bacteria -- which would have been an abundant life form at the time -- did not influence the early composition of pyrite. This result, which contrasts with previous thinking, gives scientists a much clearer picture of the process.
More broadly, their discovery enables better understanding of geological conditions at the time, which informs how the oceans and atmosphere evolved.
The research, funded by the Natural Environment Research Council and the Edinburgh Collaborative of Subsurface Science and Engineering, was published in Science.
Dr Ian Butler, who led the research, said: "Technology allows us to trace scientific processes that we can't see from examining the mineral composition alone, to understand how compounds were formed. This new information about pyrite gives us a much sharper tool with which to analyse the early evolution of the Earth, telling us more about how our planet was formed."
Dr Romain Guilbaud, investigator on the study, said: "Our discovery enables a better understanding of how information on the Earth's evolution, recorded in ancient minerals, can be interpreted."
Geological history has periodically featured giant lava eruptions that coat large swaths of land or ocean floor with basaltic lava, which hardens into rock formations called flood basalt. New research from Matthew Jackson and Richard Carlson proposes that the remnants of six of the largest volcanic events of the past 250 million years contain traces of the ancient Earth's primitive mantle -- which existed before the largely differentiated mantle of today -- offering clues to the geochemical history of the planet.
Scientists recently discovered that an area in northern Canada and Greenland composed of flood basalt contains traces of ancient Earth's primitive mantle. Carlson and Jackson's research expanded these findings, in order to determine if other large volcanic rock deposits also derive from primitive sources.
Information about the primitive mantle reservoir -- which came into existence after Earth's core formed but before Earth's outer rocky shell differentiated into crust and depleted mantle -- would teach scientists about the geochemistry of early Earth and how our planet arrived at its present state.
Until recently, scientists believed that Earth's primitive mantle, such as the remnants found in northern Canada and Greenland, originated from a type of meteorite called carbonaceous chondrites. But comparisons of isotopes of the element neodymium between samples from Earth and samples from chondrites didn't produce the expected results, which suggested that modern mantle reservoirs may have evolved from something different.
Carlson, of Carnegie's Department of Terrestrial Magnetism, and Jackson, a former Carnegie fellow now at Boston University, examined the isotopic characteristics of flood basalts to determine whether they were created by a primitive mantle source, even if it wasn't a chondritic one.
They used geochemical techniques based on isotopes of neodymium and lead to compare basalts from the previously discovered 62-million-year-old primitive mantle source in northern Canada's Baffin Island and West Greenland to basalts from the South Pacific's Ontong-Java Plateau, which formed in the largest volcanic event in geologic history. They discovered minor differences in the isotopic compositions of the two basaltic provinces, but not beyond what could be expected in a primitive reservoir.
They compared these findings to basalts from four other large accumulations of lava-formed rocks in Botswana, Russia, India, and the Indian Ocean, and determined that lavas that have interacted with continental crust the least (and are thus less contaminated) have neodymium and lead isotopic compositions similar to an early-formed primitive mantle composition.
The presence of these early-earth signatures in the six flood basalts suggests that a significant fraction of the world's largest volcanic events originate from a modern mantle source that is similar to the primitive reservoir discovered in Baffin Island and West Greenland. This primitive mantle is hotter, due to a higher concentration of radioactive elements, and more easily melted than other mantle reservoirs. As a result, it could be more likely to generate the eruptions that form flood basalts.
Start-up funding for this work was provided by Boston University.
It's well known that Earth's most severe mass extinction occurred about 250 million years ago. What's not well known is the specific time when the extinctions occurred. A team of researchers from North America and China have published a paper in Science this week which explicitly provides the date and rate of extinction.
"This is the first paper to provide rates of such massive extinction," says Dr. Charles Henderson, professor in the Department of Geoscience at the University of Calgary and co-author of the paper: Calibrating the end-Permian mass extinction.
"Our information narrows down the possibilities of what triggered the massive extinction and any potential kill mechanism must coincide with this time."
About 95 percent of marine life and 70 percent of terrestrial life became extinct during what is known as the end-Permian, a time when continents were all one land mass called Pangea. The environment ranged from desert to lush forest.
Four-limbed vertebrates were becoming diverse and among them were primitive amphibians, reptiles and a group that would, one day, include mammals.
Through the analysis of various types of dating techniques on well-preserved sedimentary sections from South China to Tibet, researchers determined that the mass extinction peaked about 252.28 million years ago and lasted less than 200,000 years, with most of the extinction lasting about 20,000 years.
"These dates are important as it will allow us to understand the physical and biological changes that took place," says Henderson. "We do not discuss modern climate change, but obviously global warming is a biodiversity concern today.
The geologic record tells us that 'change' happens all the time, and from this great extinction life did recover."
There is ongoing debate over whether the death of both marine and terrestrial life coincided, as well as over kill mechanisms, which may include rapid global warming, hypercapnia (a condition where there is too much CO2 in the blood stream), continental aridity and massive wildfires.
The conclusion of this study says extinctions of most marine and terrestrial life took place at the same time. And the trigger, as suggested by these researchers and others, was the massive release of CO2 from volcanic flows known as the Siberian traps, now found in northern Russia.
Henderson's conodont research was integrated with other data to establish the study's findings. Conodonts are extinct, soft-bodied eel-like creatures with numerous tiny teeth that provide critical information on everything from hydrocarbon deposits to global extinctions. | http://awesomenature.tribe.net/m/thread/64e0f374-3275-4347-9ffa-3f5218615259 | 13
15 | Written by Sharon Guynup
In a vast, water-pocked region of Western Siberia, a Russian crew explores the Messoyahka natural-gas field, drilling deep into thick permafrost. Engineers—ever alert for changes in downward progress that could signal an oil or gas deposit—note that suddenly the drill rig blasts through something that isn’t bedrock. They carefully examine the rock cuttings and dirt and mud that flow up from the drill hole. Oddly, they also find something they’ve never seen in the mix before. There are icy chunks embedded in the core sample. The ice melts and the chunks disintegrate when pulled to the surface.
The drilling crew soon learned that they had found a substance thought to exist naturally only in the farthest reaches of the solar system. They had discovered methane hydrate, or “methane ice,” in nature on Earth. What they couldn’t know then—in 1964, in the remote fields of Messoyahka—was that the frozen gas may hold a key to solving 21st-century energy problems.
Today, it’s known that abundant methane hydrate riddles permafrost across the Arctic and also lies entombed under deep-ocean sediment, forming the planet’s single-largest carbon reservoir. The ice chunks form when bubbles of methane gas rise from, say, a fault into a frigid, high-pressure environment that squeezes water and methane gas together into a solid. The methane molecules are then trapped within water cages in ice-like crystals.
With about 23 percent of Earth’s surface frozen as permafrost and much of the ocean deep enough to form hydrates, the potential size of these deposits appears to be staggering. Methane hydrate is likely to be far more abundant than all of the remaining oil, coal, and gas fields combined, according to Timothy Collett, a research geologist with the U.S. Geological Survey.
Could these mysterious subterranean deposits be harnessed as a future source of natural gas? With most methane hydrate deposits buried a minimum of a quarter-mile below the surface of the sea or buried in Arctic ice, these formations are extremely difficult to study. Yet, Pitt chemist Kenneth D. Jordan is collaborating with the U.S. Department of Energy’s National Energy Technology Laboratory to explore methane hydrate’s mysterious properties from afar: He’s conducting up-close research from his office in Eberly Hall on the Pittsburgh campus.
Jordan is using computer simulation to explore methane as it exists in Arctic permafrost and in deep-ocean terrain. In essence, he’s applying mathematical formulas to simulate methane hydrate’s structure and dynamics.
Using “virtual” modeling, Jordan and his team run a series of calculations to examine how the properties of methane hydrate depend on temperature and the occupation of the water cages. Computer modeling allows them to conduct experiments that couldn’t be carried out in nature or in a laboratory, like examining how the crystals are created and whether the presence of the methane molecules affects how efficiently the crystals conduct heat. Jordan explains that in nature, under high enough pressure and at low enough temperature, water and methane form methane hydrate crystals, trapping the methane molecules in water cages. “On the computer, you can build these structures with or without methane molecules to analyze how the presence of methane affects heat movement through the crystals,” he says.
Jordan, Distinguished Professor of Computational Chemistry at Pitt, runs these simulations on anywhere from 8 to 32 computers simultaneously, a method called parallel processing. Even when combining the power of multiple computers, the calculations can take anywhere from a few days to a week to calculate, depending on the nature and complexity of the simulation. The computers are linked together on a lightning-fast “InfiniBand” network, some 30 times faster than gigabit Ethernet.
“We couldn’t have done this type of study 15 years ago,” he says. Back then, computers could only model a few hundred atoms; now, with much faster computers working together in parallel, researchers can model the behavior of systems containing thousands, even millions of atoms. “It means that we can model much more accurately.”
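The parallel pattern described here is worth seeing in miniature. The sketch below is only a guess at the general shape, not the actual Pitt or NETL code: it farms out 32 independent "replica" calculations (here just a noisy toy average at different temperatures) to a pool of 8 worker processes, the same divide-and-conquer idea that lets many machines attack one study at once. The function run_replica and its pretend physics are placeholder assumptions.

```python
# Toy illustration of running many independent simulation replicas in parallel.
# NOT the actual research code: run_replica() and its numbers are made up.
import random
from multiprocessing import Pool

def run_replica(temperature_k):
    """Pretend 'simulation': average a noisy energy at one temperature."""
    rng = random.Random(temperature_k)                # reproducible per replica
    samples = [-55.0 + 0.01 * temperature_k + rng.gauss(0, 0.5)
               for _ in range(10_000)]
    return temperature_k, sum(samples) / len(samples)

if __name__ == "__main__":
    temperatures = [250 + 5 * i for i in range(32)]   # 32 independent jobs
    with Pool(processes=8) as pool:                   # 8 workers, like 8 nodes
        for temp, energy in pool.map(run_replica, temperatures):
            print(f"T = {temp:3d} K   <E> = {energy:8.3f} (arbitrary units)")
```

Real workloads distribute far larger jobs over MPI and InfiniBand rather than Python processes on one machine, but the bookkeeping is the same: independent pieces, gathered results.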
Even with these advances, researchers still face the challenge of dealing with many research problems that span a wide range of time and/or length scales, requiring the development of new multiscale algorithms. Jordan is just one of scores of Pitt scientists working across many disciplines who are pairing multiscale modeling and parallel processing with traditional laboratory experiments. This means that physicists, biologists, chemists, engineers, doctors and others are now collaborating closely with top computer programmers—although there are rare birds who can do both.
This fall, the University of Pittsburgh launched a new, multidisciplinary Center for Simulation and Modeling (SAM), which will speed development of innovation and discoveries in computational-based research. Research can be conducted using any of the networked supercomputers clustered around campus. The center will help researchers wean themselves from limited serial processing and gain expertise in parallel processing and multiscale modeling. It will bring together about 50 faculty members and more than 100 graduate students from very diverse areas to discuss problems and work with the center’s newly hired consultants who are experts in translating research problems into computer programs that will help to answer researchers’ questions.
“What we’re doing is providing researchers with resources to tackle their problems,” says George Klinzing, Pitt’s vice provost for research. “We want them to be at the forefront with the tools they need to make important breakthroughs.” The new center is located in Bellefield Hall. Jordan is codirecting the center with J. Karl Johnson, interim chair and W.K. Whiteford Professor in the Department of Chemical and Petroleum Engineering.
For some problems, researchers use existing computer codes; all that’s needed is to plug in and run the data. But for others, it’s necessary to write code from scratch. Helping scientists to write such codes is the main work of the center consultants. The process starts with identifying the research questions and then selecting one of two possible simulation methods according to the time and physical scale of the project. The “Monte Carlo” technique is used to try to find different configurations, perhaps rotating and grouping or moving molecules, while “molecular dynamics” is used to track movement over time, like the possible configurations of protein folding during a nanosecond transformation.
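To make the distinction concrete, here is a minimal, hedged sketch of the two methods applied to a single particle in a one-dimensional harmonic well. It is not the center's code, and real studies involve thousands of atoms and elaborate force fields; the step sizes, temperature, and potential below are arbitrary assumptions. The Monte Carlo routine proposes random configuration changes and accepts or rejects them, while the molecular dynamics routine integrates the motion forward in time.

```python
# Toy contrast between Monte Carlo and molecular dynamics for one particle in a
# 1-D harmonic well, U(x) = 0.5 * x**2. Illustrative only.
import math
import random

def metropolis_monte_carlo(steps=50_000, kT=1.0):
    """Monte Carlo: propose random configuration changes, accept or reject."""
    x, sum_x2 = 0.0, 0.0
    for _ in range(steps):
        x_trial = x + random.uniform(-0.5, 0.5)            # trial configuration
        dU = 0.5 * x_trial**2 - 0.5 * x**2
        if dU <= 0 or random.random() < math.exp(-dU / kT):
            x = x_trial                                     # accept the move
        sum_x2 += x * x
    return sum_x2 / steps                                   # average of x^2

def molecular_dynamics(steps=50_000, dt=0.01):
    """Molecular dynamics: follow the particle's motion through time."""
    x, v = 1.0, 0.0
    for _ in range(steps):
        force = -x                                          # F = -dU/dx
        v += force * dt                                     # symplectic Euler update
        x += v * dt
    return x                                                # position after steps*dt

print("Monte Carlo estimate of <x^2>:", round(metropolis_monte_carlo(), 3))
print("Molecular dynamics final position:", round(molecular_dynamics(), 3))
```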
Also key to the center’s mission is helping researchers at Pitt take advantage of parallel computing, which, says Johnson, is a tool that’s revolutionizing researchers’ ability to model complex systems. “Anything that can be analyzed quantitatively, turned into an equation, translated into an algorithm, and put on a computer can be simulated and modeled,” he says. “Parallel computing means you can tackle more realistic, more important problems.”
Indeed, computers can be used to carry out experiments that would be too costly, too complicated, or simply impossible to do in a traditional laboratory setting. Jordan remembers a time, not that many years ago, when lab scientists were suspicious of computer modeling, but notes that today many breakthroughs are only possible as a result of collaborations between computational and experimental researchers. “This close coupling between simulation and more traditional experiments is changing the way that a lot of modern science is done,” he says. Computer modeling allows researchers to tackle complex questions on everything from energy and climate change, to the ups and downs of the world economy, to the spread of infectious diseases.
Predicting the spread—and prevention—of a global viral epidemic is just such an example. To simulate the path and infection rate of a rampaging infectious disease, Donald S. Burke, dean of Pitt’s Graduate School of Public Health, led a research team at the University of Pittsburgh, Johns Hopkins University, and Imperial College in London to craft a complex model. He and his team needed to consider how many people lived in a particular location, where they lived, and in what concentrations. He had to put virtual people into households, use transportation data to figure out where and how they went to work or school, places where they would mix germs freely—and then carry those bugs back home. Then he introduced a disease, Avian flu. Using pandemic information from the 1918 flu, he entered information about how long a carrier is contagious, how quickly the disease spreads, and in what proximity.
After putting all this data into a simulation format, he watched individuals on the screen turn red as they contracted the disease, and turn green as they recovered. Then he changed the parameters to watch what would happen if the flu strain was more or less deadly or infectious than the 1918 strain. Using his models detailing the speed and patterns of infection, the Centers for Disease Control and Prevention, the U.S. Department of Homeland Security, and the U.S. Department of Health and Human Services have crafted new policies on how to respond to an outbreak, including things like school closings and restrictions on travel.
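The agent-based model itself belongs to the research groups involved, but the role those parameters play can be seen in a much cruder, compartmental sketch like the one below. The population size, contact rate, and infectious period are illustrative assumptions, not Burke's numbers; the point is only how "how contagious" and "how long contagious" feed a simulated outbreak, and how changing them changes the result.

```python
# Highly simplified, deterministic SIR sketch of an epidemic. The real work
# described above used an agent-based model of millions of individuals with
# households, schools, and travel; every number here is an illustrative guess.
def run_sir(population=1_000_000, initial_infected=10,
            contacts_per_day=0.30,      # how quickly the disease spreads
            infectious_days=5,          # how long a carrier is contagious
            days=365):
    s, i, r = population - initial_infected, initial_infected, 0
    recovery_rate = 1.0 / infectious_days
    peak_infected = 0
    for _ in range(days):
        new_infections = contacts_per_day * s * i / population
        new_recoveries = recovery_rate * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return peak_infected, r

# Compare a more transmissible strain with a less transmissible one.
for beta in (0.30, 0.20):
    peak, total = run_sir(contacts_per_day=beta)
    print(f"contacts_per_day={beta}: peak infected ~{peak:,.0f}, ever infected ~{total:,.0f}")
```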
Burke, who also is director of Pitt’s Center for Vaccine Research and is UPMC-Jonas Salk Professor of Global Health, collaborated with the Ministry of Health in Thailand to see if it would be possible to stop just such an epidemic before it raged out of control if it were to strike in Southeast Asia—and how big a stockpile of antiviral drugs it would take. His computer model needed to reflect the fact that the drug would need to be administered to everyone within one kilometer of a known outbreak. He quizzed public health experts on how quickly the drug could be distributed (48 hours, even in an emergency) and what fraction of the population would really take it (about 60 percent). His model showed that a maximum of three million courses of antiviral drugs would be needed to quench the spread of the disease. In response to his study, the pharmaceutical firm Hoffmann-La Roche donated three million doses to the World Health Organization.
“With this kind of computing power,” says Burke, “it’s possible to track what would happen to 85 million people in Thailand.”
Over the next four years, with funding from the Bill and Melinda Gates Foundation, Burke will use this kind of modeling to identify what is needed in new or improved vaccines for maladies such as measles, malaria, dengue fever, and influenza.
When asked about the importance of modeling in his work, Burke notes that Pitt is on the cutting edge, using methods that he expects to change the face of public health research. Then he laughs and quotes J.W. Forrester, the father of systems science: “‘All decisions are based on the models,’” he said. “‘Most models are in our heads; mental models are not true and accurate images of our surroundings but are a set of assumptions and observations gained from experiences. Computer simulation models can compensate for weakness in mental models.’”
Through Pitt’s new simulation center, codirector J. Karl Johnson and his team are studying new ways of storing carbon dioxide, a major contributor to the planet’s recent and worrisome warm-up. Johnson’s simulation work at Pitt is helping researchers to design materials that could capture CO2 for storage underground or under the sea.
Johnson says that until the nation and the world develop efficient renewable sources of energy, we’ll still be using fossil fuels, probably for the next few decades. The question he’s tackling, using computer simulations, is whether we can use oil, coal, and natural gas without releasing CO2 into the atmosphere. That solution would help to prevent CO2’s increasing environmental damage until new alternative energy sources become viable.
In other CO2 research, Johnson’s team is trying to enhance oil recovery from “dry” oil wells. Team members are modeling changes to CO2 using various polymers that make the molecule more closely resemble oil—so when injected into a well, it could push remaining oil to the surface. “The United States has a lot of oil wells that are not producing anymore,” says Johnson. “If we could squeeze more oil out of them, it would aid in our quest for energy independence.”
Simulation is particularly valuable in the creation of new materials and is capable of “computational discovery”—the virtual discovery of something that has not yet been seen in the laboratory. Geoffrey Hutchison, assistant professor of chemistry at Pitt, is investigating what could prove to be an inexpensive, innovative energy source—a new kind of solar cell made from conductive plastic. The cells could be dissolved, like ink, and painted on roofs or cars to supply electricity. They could be produced in flexible, lightweight rolls that people could buy at the hardware store and trim to size.
So far, though, the polymers being used do not conduct electricity well enough. The more traditional silicon solar panels currently in use are roughly 20 percent energy efficient; the new solar-cell materials in development are just five to six percent efficient. Hutchison is using simulations to guide the synthesis of new materials.
Hutchison is trying to understand how electrical current moves on the nanoscale level (about 1,000th the diameter of a human hair). Do the currents weave, move in straight lines, or bounce off the walls like billiard balls? Do impurities always act as roadblocks? How long do they hold a charge? Computer simulations are giving him the answers.
Hutchison uses computational chemistry to build a hundred or more possible new materials, adding a carbon atom here, exchanging it for an oxygen atom there—sometimes with surprising results. “Making what seems like a subtle change may change the entire shape of a molecule,” he says. “We like molecules to be flat, which spreads the electrical charge over a larger area than twisted or spiral-shaped molecules, which are less conductive.”
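A toy version of that screening idea is sketched below: enumerate small chemical edits to a parent structure, score each candidate, and rank them. The score function is a made-up stand-in for the real quantum-chemistry calculations (it simply favors longer backbones and small, flat side groups), and the building-block names are purely illustrative, not Hutchison's actual materials.

```python
# Toy "computational discovery" loop: generate candidate structures, score
# them with a pretend property model, and rank. All names and numbers here
# are illustrative assumptions, not real chemistry results.
import itertools

BACKBONE_UNITS = ["thiophene", "furan", "pyrrole"]       # hypothetical choices
SIDE_GROUPS = ["H", "F", "OCH3", "CN"]

def score(units, side_group):
    """Pretend property model: longer conjugated backbones score higher,
    bulky side groups that twist the molecule are penalized."""
    base = 1.0 * len(units)
    twist_penalty = {"H": 0.0, "F": 0.1, "CN": 0.3, "OCH3": 0.6}[side_group]
    return base - twist_penalty

candidates = []
for length in (2, 3):
    for units in itertools.product(BACKBONE_UNITS, repeat=length):
        for side in SIDE_GROUPS:
            candidates.append(("-".join(units) + f" / {side}", score(units, side)))

# Keep the five most promising candidates for (virtual) synthesis.
for name, s in sorted(candidates, key=lambda c: c[1], reverse=True)[:5]:
    print(f"{s:4.1f}  {name}")
```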
The result: He has already come up with five new molecules that look like promising solar materials. In the lab, it would take many months to synthesize an array of polymers, then months more to test them. Instead, using computer simulations, Hutchison was able to find a number of possibilities in a fraction of the time and at much lower cost. Now, he will collaborate with colleagues at Pitt and elsewhere to make and test these new polymers.
The new Center for Simulation and Modeling, says Klinzing, puts Pitt on a vast frontier for the next scientific revolutions in many fields. Among the primary areas of research are: energy and sustainability, nanoscience and materials engineering, medicine and biology, public health, economics and the social sciences, and visualization.
“This center provides our faculty with top-notch resources and the ability to be truly cutting edge on multiple fronts,” states Klinzing. He adds that, going forward, multiscale modeling will have a major impact on understanding a large range of physical and biological processes, enabling the design of completely new molecules and materials for specific, targeted functions.
Back in Eberly Hall, Jordan’s work with methane ice extends beyond its potential as a major source of future energy. The hydrate fields have been in the news recently for other reasons. There is concern that warming oceans and a melting Arctic could trigger a massive, potentially catastrophic methane release. Since methane is a potent greenhouse gas—about 20 times more potent than CO2—a large influx into our atmosphere would quickly heat oceans and thaw permafrost, creating a cycle that could seriously accelerate global warming.
Computer simulation enables researchers like Jordan to gain an intimate understanding of methane hydrate, even from thousands of miles away. Solutions are as close as a computer screen and the brainpower of colleagues next door. A few decades ago that was unthinkable. Instead, an isolated Russian crew on a vast Siberian plain huddled around the ice-like material retrieved while drilling for gas. They wondered about their find while, before their eyes, the unusual chunks began to melt away.
For more information about Pitt’s new Center for Simulation and Modeling, please visit www.sam.pitt.edu. | http://www.pittmag.pitt.edu/?p=67 | 13 |
36 | Teaching critical thinking:
A Parenting Science guide
© 2009-2012 Gwen Dewar, Ph.D., all rights reserved
Teaching critical thinking? You might wonder if kids will work it out for themselves.
After all, lots of smart people have managed to think logically without formal instruction in logic. Moreover, studies show that
kids become better learners when they are forced to explain how they solve problems.
So maybe kids will discover principles of logic spontaneously, as they discuss their ideas with others.
But research hints at something else, too.
Perhaps the most effective way to foster critical thinking skills is to teach those skills. Explicitly. (Abrami et al 2008).
Studies suggest that students become remarkably better problem-solvers when we teach them to
• analyze analogies
• create categories and classify items appropriately
• identify relevant information
• construct and recognize valid deductive arguments
• test hypotheses
• recognize common reasoning fallacies
• distinguish between evidence and interpretations of evidence
Do such lessons stifle creativity? Not at all. Critical thinking is about curiosity, flexibility, and keeping an open mind (Quitadamo et al 2008). And, as Robert DeHaan has argued, creative problem solving depends on critical thinking skills (DeHaan 2009).
In fact, research suggests that explicit instruction in critical thinking may make kids smarter, more independent, and more creative.
Here are some examples--and some expert tips for teaching critical thinking to kids.
Teaching critical thinking may boost inventiveness and raise IQ
Richard Herrnstein and his colleagues gave over 400 seventh graders explicit instruction in critical thinking--a program that covered hypothesis testing, basic logic, and the evaluation of complex arguments, inventiveness, decision making, and other topics.
After sixty 45-minute lessons, the kids were tested on a variety of tasks, including the Otis-Lennon School Ability Test and the Raven Progressive Matrices (both used to measure IQ). The project was remarkably effective.
Compared to students in a control group, the kids given critical thinking lessons made substantial and statistically significant improvements in language comprehension, inventive thinking, and even IQ (Herrnstein et al 1986).
Teaching critical thinking in science class may help kids solve everyday problems
In another experimental study, researchers Anat Zohar and colleagues tested 678 seventh graders’ analytical skills. Then they randomly assigned some students to receive critical thinking lessons as part of their biology curriculum.
Students in the experimental group were explicitly trained to recognize logical fallacies, analyze arguments, test hypotheses, and distinguish between evidence and the interpretation of evidence.
Students in a control group learned biology from the same textbook but got no special coaching in critical thinking.
At the end of the program, students were tested again. The students with critical thinking training showed greater improvement in their analytical skills, and not just for biology problems. The kids trained in critical thinking also did a better job solving everyday problems (Zohar et al 1994).
Tips for teaching critical thinking: What should parents and teachers do?
The short answer is make the principles of rational and scientific thinking explicit.
Philip Abrami and colleagues analyzed 117 studies about teaching critical thinking. The teaching approach with the strongest empirical support was explicit instruction--i.e., teaching kids specific ways to reason and solve problems. In studies where teachers asked students to solve problems without giving them explicit instruction, students experienced little improvement (Abrami et al 2008).
So it seems that kids benefit most when they are taught formal principles of reasoning. And the experiments mentioned above suggest that middle school students aren't too young to learn about logic, rationality, and the scientific method.
If your school isn’t teaching your child these things, then it might be a good idea to find some educational materials and work on critical thinking skills at home.
I also wonder about the need to counteract the forces of irrationality. As I’ve complained elsewhere,
TV, books, “educational” software, and even misinformed teachers can actually discourage critical thinking in children.
What else can we do?
Recent research suggests that our schools can
improve critical thinking skills by teaching kids the art of debate.
And at home, parents may consider these recommendations made by Peter Facione and a panel of experts convened by the American Philosophical Association (Facione 1990).
The American Philosophical Association's tips for teaching critical thinking
• Start early. Young children might not be ready for lessons in formal logic. But they can be taught to give reasons for their conclusions. And they can be taught to evaluate the reasons given by others. Wondering where to begin? If you have young child, check out these
research-based tips for teaching critical thinking and scientific reasoning to preschoolers.
• Avoid pushing dogma. When we tell kids to do things in a certain way, we should give reasons.
• Encourage kids to ask questions. Parents and teachers should foster curiosity in children. If a rationale doesn’t make sense to a child, she should be encouraged to voice her objection or difficulty.
• Ask kids to consider alternative explanations and solutions. It’s nice to get the right answer. But many problems yield themselves to more than one solution. When kids consider multiple solutions, they may become more flexible thinkers.
• Get kids to clarify meaning. Kids should practice putting things in their own words (while keeping the meaning intact). And kids should be encouraged to make meaningful distinctions.
• Talk about biases. Even grade school students can understand
how emotions, motives--even our cravings--can influence our judgments.
• Don’t confine critical thinking to purely factual or academic matters. Encourage kids to reason about ethical, moral, and public policy issues.
• Get kids to write. This last recommendation doesn’t come from Facione or the APA, but it makes good sense. As many teachers know, the process of writing helps students clarify their explanations and sharpen their arguments. In a recent study, researchers assigned college biology students to one of two groups. The writing group had to turn in written explanations of their laboratory work. The control group had to answer brief quizzes instead. At the end of the term, the students in the writing group had increased their analytical skills significantly. Students in the control group had not (Quitadamo and Kurtz 2007).
For more information about improving your child's problem-solving skills, be sure to check out my articles on
intelligence in children and science education for kids.
References: Tips for teaching critical thinking to kids
Abrami PC, Bernard RM, Borokhovski E, Wadem A, Surkes M A, Tamim R, Zhang D. 2008. Instructional interventions affecting critical thinking skills and dispositions: a stage 1 meta-analysis. Rev. Educ. Res. 78:1102–1134.
DeHaan RL. 2009. Teaching creativity and inventive problem solving in science. CBE Life Sci. Educ. 8: 172-181.
Facione PA and the American Philosophical Association. 1990. Critical Thinking: A Statement of Expert Consensus for Purposes of Educational Assessment and Instruction. In: Research Findings and Recommendations, Millbrae, CA: Insight Assessment.
Herrnstein RJ, Nickerson RS, Sanchez M and Swets JA. 1986. Teaching thinking skills. American Psychologist 41: 1279-1289.
Quitadamo IJ, Faiola CL, Johnson JE and Kurtz MJ. 2008. Community-based inquiry improves critical thinking in general biology. CBE Life Sci. Educ. 7: 327-337.
Quitadamo IJ and Kurtz MJ. 2007. Learning to Improve: Using Writing to Increase Critical Thinking Performance in General Education Biology CBE Life Sci Educ 6(2): 140-154.
Zohar A, Weinberger Y and Tamir P. 1994. The effect of the biology critical thinking project on the development of critical thinking. Journal of Res. Sci. Teaching 31(2): 183-196.
| http://www.parentingscience.com/teaching-critical-thinking.html | 13
120 | Sample Exam1 Essay Answers
WVC student Trees Martens wrote the following excellent (A+) answers in Fall 2000.
NOTE: Trees is one of the Philosophy 17 tutors in Spring 2001!
1. The difference between inductive and deductive arguments.
In order to explain the difference between inductive and deductive arguments, we first need to clarify what we understand by the term "argument". In everyday situations, when two people have an argument, it means they disagree about something; in this case "argument" means "dispute". In logic, however, an argument consists of a set of statements or claims that offer reasons to accept another, final claim. The claims that express the reasons for accepting the final claim are the premises; the claim they mean to support is the conclusion. We use this kind of argument to express how we reached a position on certain matters, and to persuade others to accept our point of view. Arguments in this sense occur in everyday situations too, e.g., "You shouldn't drink so much coffee; your blood pressure is already too high." As a matter of fact, if only we would offer more good arguments, we might end up having fewer arguments.
This brings us to the subject of what constitutes a good argument. As I pointed out above, an argument in logic consists of one or more premises and a conclusion. If the premises offer good reasons to accept the conclusion --in other words if they support the conclusion, we say that the argument has good logic. If the premises fail to support the conclusion, i.e., the conclusion does not follow, the argument has bad logic. However, having good logic does not necessarily make the argument good; it only means that the conclusion follows from the premises, and that if the premises are true or reasonable to believe, the conclusion will be true or very likely. In other words, good logic is necessary, but not sufficient for a good argument; in order to determine whether an argument is good or bad, we also need to check if the premises are true or reasonable to believe, and if the statements are clear. If all this is the case, we can say we have a good argument. If we have a good argument that is deductive we call it sound. This takes us to the difference between deductive and inductive arguments.
In a deductive argument, the arguer claims that the conclusion must be true if, and only if, the premises are true. If the premises support the conclusion, we call it a valid argument: e.g.,
1. Cats have whiskers.
2. Animals with whiskers are mammals.
C. Cats are mammals.
This is a deductive argument that is valid and has true premises. We call this a sound argument.
A deductive argument can also have good logic even if the premises are false. This is still a valid argument, but it is not sound. Here is an example.
1. All birds can fly.
2. A penguin is a bird.
C. A penguin can fly.
This is a valid argument, but a penguin clearly cannot fly. The premise "all birds can fly" is false.
If a deductive argument has bad or incorrect logic -- the premises do not support the conclusion even if the premises are true, we call the argument invalid. E.g.,
1. All humans are mammals.
2. My cat is a mammal.
C. My cat is a human.
1. When I take a shower, I get wet.
2. I'm wet.
C. I must have taken a shower.
Both conclusions are false: my cat, clearly, is not a human, and I could have just fallen in the pool, or even stepped out in the rain. Both arguments commit the fallacy of affirming the consequent. These examples show that the truth value of the premises is irrelevant for the validity of an argument and that validity relies solely on the logical form.
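For readers who like to check such claims mechanically, here is a small self-contained sketch (an addition, not part of the original exam answer) that tests a form for validity by brute force over every truth-value assignment: if some row makes all the premises true and the conclusion false, the form is invalid. Modus ponens passes; the shower argument's form, affirming the consequent, fails.

```python
# Brute-force validity check: a form is valid only if no assignment of
# True/False to its letters makes all premises true and the conclusion false.
from itertools import product

def valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False                    # found a counterexample row
    return True

implies = lambda a, b: (not a) or b

# Modus ponens:  P -> Q,  P   therefore  Q
print(valid([lambda e: implies(e["P"], e["Q"]), lambda e: e["P"]],
            lambda e: e["Q"], ["P", "Q"]))          # True: valid form

# Affirming the consequent:  P -> Q,  Q   therefore  P   (the shower example)
print(valid([lambda e: implies(e["P"], e["Q"]), lambda e: e["Q"]],
            lambda e: e["P"], ["P", "Q"]))          # False: invalid form
```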
However, as we have already pointed out, when a deductive argument has good logic but false premises, or true premises but bad logic, the argument is flawed and we should reject its conclusion. It is unsound. If a deductive argument is clear, valid and has all true premises, we have a sound argument and we have every reason to accept its conclusion.
In an inductive argument, the arguer claims that the conclusion is highly likely if the premises are true. If an inductive argument has good logic, we call it a strong argument. If an inductive argument has bad or incorrect logic, we say the argument is weak.
Here are a few examples:
1. Most students at a community college live within a 20 mile radius of the campus.
2. WVC is a community college.
3. Abby is a student at WVC.
4. She must live within a 20 mile radius of WVC.
This conclusion is very plausible because the premises are relevant to the conclusion. We may say that this argument is likely.
1. Marc and Tom are both students at WVC.
2. Marc is tall and so is Tom.
3. Marc and Tom are both 20 years old.
4. Marc majors in math, and so does Tom.
5. Marc is on the basketball team.
6. Tom must be on the team, too.
This conclusion comes from nowhere. There are no premises that are relevant to our conclusion, except maybe that Marc and Tom are both tall. The argument says nothing about athletic abilities, which Marc more than probably has, since he is on the basketball team. However, that doesn't mean that Tom is athletic and can play ball.
We can conclude that, in order for an inductive argument to be strong, it should have reasonable premises that are relevant to the conclusion.
Reasonable people should believe the conclusions of sound arguments because a sound argument is an argument that is clear, i.e., free from ambiguity or vagueness, has good logic and true premises. Since we know that if an argument has good logic its conclusion MUST be true if all the premises are true, it is obvious that the conclusion of a sound argument is true. Therefore, it is obvious that any reasonable being should accept the conclusion of a sound argument. We can actually prove this by putting our reasoning in a standard valid deductive argument form. Our argument becomes immediately clear if we arrange the claims so that the premises precede the conclusion they want to support. Using this procedure, our argument becomes:
1. A sound argument has good logic and all true premises.
2. If an argument has good logic and all true premises, its conclusion MUST be true.
C. A sound argument has a true conclusion. (1,2)
Using variables and symbols, we recognize our argument as a form of Hypothetical Syllogism.
We will use the following variables: SA - sound argument
VA - an argument has good logic (valid arg.)
TP - all true premises
TC - true conclusion
1. SA --> VA.TP
2. VA.TP --> TC
C. SA --> TC (1,2) q.e.d.
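As a cross-check on this little derivation (again an addition, not part of the original answer), the same brute-force idea confirms that no assignment of truth values to SA, VA, TP, and TC makes both premises true while SA --> TC is false:

```python
# Exhaustive check that the hypothetical syllogism instance above is valid:
# premises SA -> (VA and TP) and (VA and TP) -> TC entail SA -> TC.
from itertools import product

implies = lambda a, b: (not a) or b

counterexamples = []
for SA, VA, TP, TC in product([True, False], repeat=4):
    p1 = implies(SA, VA and TP)      # 1. SA --> VA.TP
    p2 = implies(VA and TP, TC)      # 2. VA.TP --> TC
    c = implies(SA, TC)              # C. SA --> TC
    if p1 and p2 and not c:
        counterexamples.append((SA, VA, TP, TC))

print("valid" if not counterexamples else f"invalid: {counterexamples}")  # prints "valid"
```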
2. Critical analysis of an argument against objective reality.
If we want to critically analyze any argument, we first have to try to put it in standard form. This means we have to identify the premises and the conclusion, or conclusions in the case of complex arguments. After careful reading, and without worrying too much about obvious vagueness of the statements, I came up with the following argument:
1. Whatever people say, they're just expressing their personal opinions. PS --> OP
2. Opinions are subjective. OP-->S
3. Nobody can be objective. (1, 2) (Whatever people say is subjective.) PS-->S
4. Knowledge requires objectivity. K-->OB
5. Therefore, there is really no such thing as (objective) knowledge. (3,4) not K
6. Honest opinions are equally correct. (5) OP-->C
(If there is no objective standard, then any opinion is as good as another.)
7. Also, we all live in our own private little worlds. PR
8. No private reality is more real than anyone else's.
9. Therefore, all private realities are equally real. (equivalent to 8) PR-->R
10.There is no such thing as objective reality. (7,8,9) not OR
This looks like a deductive argument because the arguer claims that the premises support the conclusion, i.e., if the premises are true, the conclusion must follow. In order to check the validity, I have already represented each premise by variables to the right, but I will come back to this later. First we have to check the clarity of the statements and make sure we know what they mean.
In putting the argument in standard form, I have discarded some of the statements because they are objectionably vague and, therefore, only succeed in complicating the argument needlessly. I identified the following statements as irrelevant to the argument:
a) Every person's set of personal experiences is unique. This is obviously true, but it is also very vague and I don't see how it furthers the argument. The arguer should specify what she means by the term "unique." Does she mean unique in the sense of "only from the point where I am standing," at that particular time of day, and therefore, different every single time? Does she mean "unlike any other set of experiences I might have" or "unlike anyone else's experiences?" This is confusing. However, if she would state: "Every person's experiences are subjective," it would be relevant for the argument against objectivity, and I would include it in the standard form.
b) Every person comes from somewhere. So? What does that mean? Does it mean I cannot be objective because I come from Belgium, and my friend cannot be objective because she comes from Japan? What about Americans? Do they have more chances of being objective because their ancestors come from all over the world? Or does "somewhere" mean something different? Please clarify.
c) There is no "view from nowhere." Same problem. Does this mean there is no such thing as "nowhere," so there cannot be a view from there? Or does "nowhere" exist, but is there simply no view from "there?" Of course, it's difficult for anyone to have a view from "nowhere," if we all come from "somewhere," unless there happened to be someone there, "nowhere" who also has a view from there. It's all a little confusing; we need better definitions.
After this first step in the clarity check of the statements, there still remain some very ambiguous terms in the argument. The arguer most notably equivocates the words objective and subjective, but also opinion and knowledge, when she claims that all opinions are subjective, and that we cannot possibly have knowledge because knowledge requires objectivity, and we cannot be objective. Moreover, she claims there is not even an objective reality. Let's examine this problem more closely.
On the ordinary view, "subjective" means according to someone's beliefs, feelings, opinions, experiences. On this view "subjective" is the complete opposite of "objective", meaning that something is factual, certain, quantifiable. On top of that, we often contend that everything has to be either subjective or objective. If this is so, how can we claim that some things are objective if all we know is that we had an experience of it? In other words, how does my experience of pleasure differ from my experience of seeing the ocean? The first one is certainly subjective because only I can experience my pleasure. On the other hand, I also have a very personal experience when I'm looking at the ocean, different from anybody else's. So this experience is also subjective. If all our experiences are "subjective," everything we experience is just an opinion, and nothing exists independently of our mind. The problem is that we associate "subjectivity" with all our experiences. Philosophers solve this problem by distinguishing two kinds of subjectivity: metaphysical subjectivity, and epistemological subjectivity. Metaphysical subjectivity exists only as experienced by someone: e.g., "My neck is sore." This does not imply that the pain is not real, only that nobody else experiences that pain. Epistemologically subjective claims, on the other hand, are claims for which there are no public methods available to decide whether the claim is true or false: e.g., "Orange juice with calcium tastes better than orange juice without." This is clearly not about knowledge; it is simply a matter of opinion.
We can, in the same way, distinguish between metaphysical objectivity, and epistemological objectivity. Metaphysical objectivity exists independently of my experience. So I can, indeed, claim that the ocean exists in a metaphysically objective way because its mode of being is public. Epistemologically objective claims are claims for which publicly available methods exist to determine whether the claim is true or false. In this case knowledge is possible. It is not just a matter of opinion: e.g., Alison's claim "I just turned 28, yesterday" is epistemologically objective, even if in reality she turned 36. "Objective" does not mean the claim is true; it only refers to the fact that we can verify the truth or falsity of a claim.
Now that we know the distinction between metaphysical and epistemological subjectivity, and metaphysical and epistemological objectivity, we still have to refute the presupposition that everything is either subjective or objective. Let me go back to the claim: "My neck is sore." One morning, my youngest daughter woke up with just that complaint. Although I felt empathy for her --I have frequently had a sore neck myself , I could not feel her pain, and I thought a little massage would make her feel better. However, she could not stand being touched at her neck and she could not lift her head. I finally succeeded in getting her dressed, and took her to the doctor. He told me that this was a pretty common symptom of a growth spurt, and that she would need three to four days to recover. He recommended ice packs, rest, and pain relievers. This little story illustrates that something metaphysically subjective can very well be epistemologically objective. Therefore, the presupposition that everything must be either subjective or objective is false.
We can now return to the argument against objective reality and examine it in light of our findings. The arguer basically claims that there are only personal opinions, and that opinions are subjective; therefore, there is no objective truth. She even claims that there is no objective reality; there are only private, subjective worlds. It is obvious that the arguer equivocates the terms "subjective" and "objective." She uses them indiscriminately in the metaphysical and epistemological meanings. As we have demonstrated, it is not because my experience of the ocean is personal that the ocean does not objectively exist. The same goes for the claim about "our own private little worlds." It is not because we experience the world privately that the world does not objectively exist. As for the claim that "there is no such thing as knowledge," we have pointed out that knowledge is possible because there are epistemologically objective claims for which we have publicly available methods to decide about the truth value. Moreover, we have stated that personal opinions cannot express knowledge precisely because there are no public methods available for determining their truth value. We can, therefore, claim that the assertion that there are only "personal opinions" is false.
The last problem of clarity I want to point out is the mistake of the arguer to assert that because opinions are subjective, nothing can be objective. This is the misconception that something has to be either objective or subjective. We have clearly refuted this, above, with our story about the sore neck.
The remaining steps in our critical analysis are: logic check and facts check.
Let's look at the standard form using the variables to check the logic of the argument.
3. PS --> S (1,2) This is a Hypothetical Syllogism, which is a valid form.
S v OB <--> (not S --> OB) Unstated premise and definition of implication.
not (not S)
so not OB (from 3 and the unstated premise). Disjunctive Syllogism is valid.
5. not K (from not OB (3) and 4). This is Modus Tollens, a valid form.
SR valid Modus Ponens.
SR v OR <--> (not SR --> OR) Unstated premise and definition of implication.
10. not OR (7, unstated premise). Disjunctive Syllogism, valid.
We can conclude that the above premises support the conclusions, so the soundness of the argument will depend on clarity and truth value.
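As an aside not in the original essay, the two named inference patterns can themselves be spot-checked by exhaustive truth tables, treating the numbered claims as simple propositional sentences (a rough rendering, since the essay has already flagged how vague the statements are):

```python
# Spot-checking two inference patterns used in the logic check above.
from itertools import product

implies = lambda a, b: (not a) or b

def form_is_valid(premises, conclusion, n_vars):
    return not any(all(p(*vals) for p in premises) and not conclusion(*vals)
                   for vals in product([True, False], repeat=n_vars))

# Hypothetical syllogism: PS -> OP, OP -> S  therefore  PS -> S   (steps 1, 2 |- 3)
print(form_is_valid([lambda ps, op, s: implies(ps, op),
                     lambda ps, op, s: implies(op, s)],
                    lambda ps, op, s: implies(ps, s), 3))        # True

# Modus tollens: K -> OB, not OB  therefore  not K   (step 4 and "not OB" |- 5)
print(form_is_valid([lambda k, ob: implies(k, ob),
                     lambda k, ob: not ob],
                    lambda k, ob: not k, 2))                     # True
```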
We can be short on the truth value of the facts since we already demonstrated that most of the premises are so ambiguous they end up being dubious or false. We have already concluded that it is not true that people only give opinions, and that the claim that knowledge does not exist is false. It is false that nobody can be objective, and it is certainly untrue that all honest opinions are equally correct. If I say: "I like milk chocolate better than white chocolate," there is no way to define the truth value of my statement. It is merely a matter of opinion. However, if a classical pianist claims: "Mozart is a better composer than Salieri," I figure this is a better opinion than that of a three-year-old who says that "I love you; you love me" from Barney, the purple dinosaur, sounds better than Bach's Brandenburg Concertos.
To conclude, we can say that the greatest problems of this argument arise from the vagueness of the terms, which result in false or, at least, very dubious premises.
3. Critical analysis of the argument against abortion.
If we put the argument against abortion in standard form, we can simplify the statements to the following premises:
1. When a mother goes to the hospital to give birth, its pretty clear that she has a little person inside her.
2. The day before the delivery, it is still a little person.
3. And the day before that day, it is still a little person.
4. No matter how many days you count back, its still a little person.
5. Killing a person is wrong. (Unstated premise)
C. Therefore, killing it (the little person) at any point is wrong. I'm against abortion.
This argument against abortion concludes that abortion is wrong following the premises that (1) it is wrong to kill a person(unstated), and (2) a baby is a person, even before delivery, and at every stage of its evolution in the mothers womb "no matter how many days you count back". The problem we have to deal with here is the problem of defining our concepts. In order to argue about abortion, we first state what the concepts mean, especially the concept "baby." However, few concepts come with an exact definition, meaning that it is possible to specify the connotation so that no doubt remains. E.g., the connotation of "square" is "equilateral and rectangular." Everything else is not "square." We call these concepts closed. Most concepts, though, are open, which means we cannot precisely specify their connotations, and it is sometimes hard to decide whether the concept applies or not. This is certainly the case with "baby." Does the concept only apply to the baby after delivery, or does it also apply to the foetus? It is easy to see how the use of an open concept raises problems in arguments about ethics, and it shows that we need to use argument by analogy in order to decide whether our concept applies in some cases or not. It certainly does not mean that the concept is useless because that would imply that there is no such thing as a baby, and that would be committing the fallacy of ad continuum, which is exactly what the above argument against abortion does, albeit the other way around. Let me explain.
The continuum fallacy argues that a debate is pointless because you cannot define your concepts clearly. It says you cannot draw the line exactly between what something is and is not; in other words, you cannot specify the difference between A and not-A. If you cannot point out the distinction between A and not-A, says a variant of the continuum fallacy, then, surely, there is no distinction between them. This is illustrated by the famous argument of the beard, which states that there is no difference between having a beard and not having a beard because we cannot draw the line precisely between the concept of a beard and "not-a-beard." Consequently, a man without a beard can never have one. The argument goes that if one whisker doesn't give a man a beard, a second whisker won't give him a beard either, nor will a third. Each whisker added still will not give the man a beard. So it doesn't matter how many whiskers he has, he will never have a beard. So where do you draw the line between a beard and not-a-beard? The concept is moot, and we have no way to go with our argument, or so it seems. Our abortion opponent commits the same fallacy when he asserts that you cannot draw the line between a baby and not-a-baby. "No matter how many days you count back," he argues, a baby is always a baby. Hence, he seems to imply, you shouldn't even discuss abortion because there is no such thing as "not-a-baby."
If we reverse the argument of the beard, we can point out the similarities with the argument against abortion even more clearly. What do you call a beard? Is it a full-grown beard that a man has had for at least a year, or can you say that a man who hasn't shaved for a week has a beard? What about a man who hasn't shaved for two days; does he have a beard? And what would you call the first few whiskers of an adolescent? Let's compare this with the question, "what do you call a baby?" Is a baby a little person from the moment she is born? Or is she already a baby before the delivery? What would you call her when the mother is six months pregnant, or what do you call a three month-old foetus? Is she already a baby immediately after conception? It is clear that we can go on and on, extending our definition "ad continuum" without a chance for a clear distinction between A and not-A, between a beard and not-a-beard, between a baby and not-a-baby. If we follow this path, we will always fall into the trap of: what is, is no different from what is not, and we will never be able to put forth any reasonable argument.
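A tiny numerical illustration (my addition, using an arbitrary made-up cutoff) shows where the beard argument's induction quietly breaks: grant any threshold at all, however arbitrary, and the premise "one more whisker never makes the difference" turns out to be false at exactly one step, so the sweeping conclusion never follows.

```python
# Why the "one more never matters" step fails once any cutoff exists.
# The threshold of 30 whiskers is an arbitrary assumption for the demo.
BEARD_CUTOFF = 30

def is_beard(whiskers: int) -> bool:
    return whiskers >= BEARD_CUTOFF

# Find every n where "n whiskers is not a beard, but n+1 whiskers is":
breaking_points = [n for n in range(100) if not is_beard(n) and is_beard(n + 1)]
print(breaking_points)   # [29] -- the induction premise fails at some step,
                         # so "no number of whiskers is ever a beard" does not follow.
```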
To support this analysis, I would like to present a few arguments by analogy to show that this particular argument against abortion is not a good one. The first argument goes as follows. Suppose Tom has removed all the plants from a small piece of his yard, and he has planted seeds because he wants to grow a beautiful flower garden. The next day, his neighbor's dog comes running through and digs up the dirt in the process. Tom is furious, and complains to his neighbor, Gail.
--"Your dog has just destroyed my wonderful flower garden. I had worked so hard at it, and now I have to start all over again. Do you have any idea what new landscaping costs these days?"
--Gail replies: "I'm sorry if my dog ran through your yard, but how could he destroy anything if there was nothing there? It's just a piece of dirt. I didn't see any flowers."
--" What do you mean," retorts Tom, "there was nothing there? I had just planted seeds of the most beautiful flowers, some of them very difficult to grow, and now your dog dug up all of my plants and I have to redo my landscaping completely!"
--Gail: "Tom, I know you must be upset, but can we be reasonable? Youve just planted the seeds; how can you possibly claim my dog destroyed a full-grown garden?"
--Tom: "You mean a garden is only a garden when you have a mature landscaping? You mean a flower is only a flower when its at the peak of its bloom? What about the day before the flower opens up? Nothing much has changed. It is still a flower; it is just not blooming yet. So what do you call it when it is still a bud, or before the bud has even formed? Surely, you talk about your flowers when the first sprouts appear? So where do you draw the line? I say you cant, and for that reason, I say I had a beautifully landscaped garden, and your dog destroyed it completely. I say you should pay for new landscaping."
I would like to use one more argument by analogy to show that the continuum fallacy clearly leads to preposterous conclusions. Consider the case of Anne, who decides to sue the government over compensation for a house she's never built. Anne owned a piece of land on which she was planning to build her dream house a few years from now. Because of redevelopment measures, the government has taken the land away from her, compensating her at fair market rates so that she can buy another piece of land of equal value. However, Anne does not consider this a fair deal; according to her, not only did she lose her land, she also lost her dream house. Indeed, Anne argues, "When I bought that land, I already had a precise idea of what kind of house I was going to build on it. As a matter of fact, I expressly bought that particular piece of land because it corresponded exactly to the plans I had for my dream house. I can point out where my kitchen would be, my bedroom, and my study; I can even show you the very spot where I would be lounging in my easy chair, overlooking the gorgeous valley from my deck outside of the living room. You see, as far as I'm concerned, my dream house was there. And now I've lost everything. You cannot tell me I didn't lose my house simply because it was not physically there. What do you mean by a house? Is it only a house when the construction is finished? What about the day before the roof goes up? It's still a house, just not completely finished. And what would you call it before the walls are built, or when you lay the foundations? Don't you call it your house when you have the blueprints? Where do you draw the line between what is a house and what is not a house? I say you can't, and for that reason, no matter how far you go back in the construction phase, it's still a house. Therefore, I say the government took my dream house without compensating me, and that's wrong. I demand fair compensation."
I recognize that these examples are farfetched, but in order to demonstrate the continuum fallacy, we shouldn't be afraid of highlighting the ridiculous conclusions that "follow" from this kind of argument.
I would like to conclude this analysis with an example from my personal experience because the continuum fallacy reminds me a little of an argument --in the sense of dispute-- I used to have with my older brother when we were kids. It was usually about reading a magazine, playing a game, even using the bathroom. He would say he was going to do something, and there I was, just before he could even get up, doing exactly what he was planning to do. He would get so mad and shout: "I said it first!" to which I would invariably reply: "But I thought of it first." I'm not proud of it, but in a funny way it illustrates a little how easily you can show the uselessness of a concept committing the continuum fallacy. Indeed, who has the right to do something first? The one who was there first, the one who said it first, or the one who thought of it first? Where do you draw the line? | http://instruct.westvalley.edu/lafave/sample_exam1_essays.html | 13
19 | Welcome to The Socrates Historical Society
Table of Contents
About Socrates- Introduction
"The Socratic method of teaching is based on
Socrates' theory that it is more important to enable students to think for
themselves than to merely fill their heads with "right" answers. Therefore, he
regularly engaged his pupils in dialogues by responding to their questions with
questions, instead of answers. This process encourages divergent thinking
rather than convergent thinking" (Adams).
Although Socrates (470-399 BCE) is the central figure of these dialogues, little
is actually known about him. He left no writings, and what is known is derived
largely from Plato and Xenophon.
Socrates was a stone cutter by trade, even though there is little evidence
that he did much to make a living. However, he did have enough money to own a
suit of armor when he was a
hoplite in the Athenian
military. Socrates' mother was a midwife. He was
married and had three sons.
Throughout his life he claimed to
hear voices which he
interpreted as signs from the gods.
It appears that Socrates spent much of his adult life in the
agora (or the marketplace)
conversing about ethical issues. He had a penchant for exposing ignorance,
hypocrisy, and conceit among his fellow
Athenians, particularly in
regard to moral questions. In all probability, he was disliked by most of them.
However, Socrates did have a loyal following. He was very influential in the
lives of Plato, Euclid,
Alcibiades, and many others. As such, he was associated with the undemocratic
faction of Athens.
Although Socrates went to great lengths to distinguish himself from the
sophists, it is unlikely
that his fellow Athenians
made such a distinction in their minds.
Socrates is admired by many philosophers for his willingness to explore an
argument wherever it would lead as well as having the moral courage to follow it.
Socrates and the Socratic Principles are a
Major part of the Educational Programs
of the European Union
This is supposed to be of Socrates,
but it was made after he had already been dead for some time,
by someone who did not know what Socrates looked like.
Socrates was the first of the three great Athenian
philosophers (the other two are Plato and
Aristotle). Socrates was born in Athens in 469
BC, so he lived
through the time of
Pericles and the Athenian Empire, though he was too young to remember the battle of
Salamis. He was not from a rich family. His father was probably a
stone-carver, and Socrates also worked in stone,
especially as a not-very-good
sculptor. Socrates' mother was a midwife. When the
Peloponnesian War began, Socrates fought bravely for Athens. We do
not have any surviving pictures of Socrates that were made while he was
alive, or by anyone who ever saw him, but he is supposed to have been ugly.
But when Socrates was in his forties or so, he began
to feel an urge to think about the world around him, and try to answer
some difficult questions. He asked, "What is wisdom?" and "What is
beauty?" and "What is the right thing to do?" He knew that these
questions were hard to answer, and he thought it would be better to have
a lot of people discuss the answers together, so that they might come up
with more ideas. So he began to go around Athens asking people he met
these questions, "What is wisdom?" , "What is piety?", and so forth.
Sometimes the people just said they were busy, but sometimes they would
try to answer him. Then Socrates would try to teach them to think better
by asking them more questions which showed them the problems in their
logic. Often this made people angry. Sometimes they even tried to beat him up.
This is what is left of the Painted Stoa, or Porch,
where Socrates used to teach, in Athens.
Socrates soon had a group of young men who listened
to him and learned from him how to think.
Plato was one of these young men. Socrates never charged them any
money. But in 399
BC, some of the
Athenians got mad at Socrates for what he was teaching the young men.
They charged him in court with impiety (not respecting the gods) and
corrupting the youth (teaching young men bad things). People thought he was against democracy, and he probably was - he thought the smartest people should make the decisions for everyone. The Athenians couldn't charge him with being against democracy, because they had promised not to take revenge on anyone after the Peloponnesian War. So they had to use these vague religious charges instead.
Socrates had a big
trial in front of an Athenian jury. He was convicted of these
charges and sentenced to death, and he died soon afterwards, when the
guards gave him a cup of hemlock (a poisonous plant) to drink.
Socrates never wrote down any of his ideas while he
was alive. But after he died, his student, Plato, did
write down some of what Socrates had said. You can read
Plato's version of what Socrates said
or you can buy copies of these conversations as a book.
"The unexamined life is not worth living."
The Socratic method of teaching
is based on Socrates' theory that it is more important to enable students to
think for themselves than to merely fill their heads with "right" answers.
Therefore, he regularly engaged his pupils in dialogues by responding to their
questions with questions, instead of answers. This process encourages divergent
thinking rather than convergent.
Students are given
opportunities to "examine" a common piece of text, whether it is in the form of
a novel, poem, art print, or piece of music. After "reading" the common text
"like a love letter", open-ended questions are posed.
Open-ended questions allow
students to think critically, analyze multiple meanings in text, and express
ideas with clarity and confidence. After all, a certain degree of emotional
safety is felt by participants when they understand that this format is based on
dialogue and not discussion/debate.
Dialogue is exploratory and
involves the suspension of biases and prejudices. Discussion/debate is a
transfer of information designed to win an argument and bring closure. Americans
are great at discussion/debate. We do not dialogue well. However, once teachers
and students learn to dialogue, they find that the ability to ask meaningful
questions that stimulate thoughtful interchanges of ideas is more important than the answer.
Participants in a Socratic
Seminar respond to one another with respect by carefully listening instead of
interrupting. Students are encouraged to "paraphrase" essential elements of
another's ideas before responding, either in support of or in disagreement.
Members of the dialogue look each other in the "eyes" and use each other's names.
This simple act of socialization reinforces appropriate behaviors and promotes a respectful, collaborative atmosphere.
Before you come to a Socratic Seminar
class, please read the assigned text (novel section, poem, essay, article,
etc.) and write at least one question in each of the following categories:
WORLD CONNECTION QUESTION:
Write a question
connecting the text to the real world.
Example: If you
were given only 24 hours to pack your most precious
belongings in a back pack and to get ready to leave your home town, what
might you pack? (After reading the first 30 pages of NIGHT).
CLOSE-ENDED QUESTION:
Write a question about the text that will help everyone in the class come to an agreement about events or characters in the text. This question usually has a "correct" answer.
Example: What happened to Hester Prynne's husband that she was left alone in Boston without family? (after the first 4 chapters of THE SCARLET LETTER)
OPEN-ENDED QUESTION:
Write an insightful question about the text that will require proof and group discussion and "construction of logic" to discover or explore the answer to the question.
Example: Why did Gene hesitate to reveal the truth about the accident to Finny that first day in the infirmary? (after mid-point of A SEPARATE PEACE)
UNIVERSAL THEME / CORE QUESTION:
Write a question dealing with a theme(s) of the text that will encourage group discussion about the universality of the text.
Example: After reading John Gardner's GRENDEL, can you pick out its existential elements?
LITERARY ANALYSIS QUESTION: Write a question dealing with HOW an author
chose to compose a literary piece. How did the author manipulate point of
view, characterization, poetic form, archetypal hero patterns, for example?
Example: In MAMA
FLORA'S FAMILY, why is it important that the
story is told through flashback?
Guidelines for Participants in a Socratic Seminar
1. Refer to the text when needed during the discussion. A seminar is not a test of memory. You are not "learning a subject"; your goal is to understand the ideas, issues, and values reflected in the text.
2. It's OK to "pass" when asked to contribute.
3. Do not participate if you are not prepared. A seminar should not be a bull session.
4. Do not stay confused; ask for clarification.
5. Stick to the point currently under discussion; make notes about ideas you want to come back to.
6. Don't raise hands; take turns speaking.
7. Listen carefully.
8. Speak up so that all can hear you.
9. Talk to each other, not just to the leader or teacher.
10. Discuss ideas rather than each other's opinions.
11. You are responsible for the seminar, even if you don't know it or admit it.
Participants in a Socratic Seminar
When I am evaluating your Socratic Seminar participation, I ask the following questions. Did participants:
Speak loudly and clearly?
Cite reasons and evidence for their statements?
Use the text to find support?
Listen to others respectfully?
Stick with the subject?
Talk to each other, not just to the leader?
Ask for help to clear up confusion?
Support each other?
Avoid hostile exchanges?
Question others in a civil manner?
What is the difference between dialogue and debate?
- Dialogue is collaborative:
multiple sides work toward shared understanding.
Debate is oppositional: two opposing sides try to prove each other wrong.
- In dialogue, one listens
to understand, to make meaning, and to find common ground.
In debate, one listens to find flaws, to spot differences, and to counter arguments.
- Dialogue enlarges and
possibly changes a participant's point of view.
Debate defends assumptions as truth.
- Dialogue creates an
open-minded attitude: an openness to being wrong and an openness to change.
Debate creates a close-minded attitude, a determination to be right.
- In dialogue, one submits
one's best thinking, expecting that other people's reflections will help
improve it rather than threaten it.
In debate, one submits one's best thinking and defends it against challenge
to show that it is right.
- Dialogue calls for
temporarily suspending one's beliefs.
Debate calls for investing wholeheartedly in one's beliefs.
- In dialogue, one searches
for strengths in all positions.
In debate, one searches for weaknesses in the other position.
- Dialogue respects all the
other participants and seeks not to alienate or offend.
Debate rebuts contrary positions and may belittle or deprecate other participants.
- Dialogue assumes that many
people have pieces of answers and that cooperation can lead to a greater
Debate assumes a single right answer that somebody already has.
- Dialogue remains open-ended.
Debate demands a conclusion.
Dialogue involves:
- suspending judgment
- examining our own work
- exposing our reasoning and looking for limits to it
- communicating our underlying assumptions
- exploring viewpoints more broadly and deeply
- being open to disconfirming data
- approaching someone who sees a problem differently not as an adversary, but as a colleague in common pursuit of a better solution.
A Level Participant
- Participant offers enough solid analysis, without prompting, to move the conversation forward.
- Participant, through her comments, demonstrates a deep knowledge of the text and the question.
- Participant has come to the seminar prepared, with notes and a marked/annotated text.
- Participant, through her comments, shows that she is actively listening to other participants.
- Participant offers clarification and/or follow-up that extends the conversation.
- Participant's remarks often refer back to specific parts of the text.
B Level Participant
- Participant offers solid analysis without prompting.
- Through comments, participant demonstrates a good knowledge of the text and the question.
- Participant has come to the seminar prepared, with notes and a marked/annotated text.
- Participant shows that he/she is actively listening to others and offers clarification and/or follow-up.
C Level Participant
- Participant offers some analysis, but needs prompting from the seminar leader.
- Through comments, participant demonstrates a general knowledge of the text and question.
- Participant is less prepared, with few notes and no marked/annotated text.
- Participant is actively listening to others, but does not offer clarification and/or follow-up to others' comments.
- Participant relies more upon his or her opinion, and less on the text, to drive her comments.
D or F Level Participant
- Participant comes to the seminar ill-prepared, with little understanding of the text and question.
- Participant does not listen to others and offers no commentary to further the discussion.
- Participant distracts the group by interrupting other speakers or by offering off-topic questions and comments.
- Participant shows disrespect for the discussion and its participants.
Teaching by Asking Instead of by Telling
by Rick Garlikov
The following is a transcript of a teaching experiment, using
the Socratic method, with a regular third grade class in a suburban elementary
school. I present my perspective and views on the session, and on the Socratic
method as a teaching tool, following the transcript. The class was conducted on
a Friday afternoon beginning at 1:30, late in May, with about two weeks left in
the school year. This time was purposely chosen as one of the most difficult
times to entice and hold these children's concentration about a somewhat complex
intellectual matter. The point was to demonstrate the power of the Socratic
method for both teaching and also for getting students involved and excited
about the material being taught. There were 22 students in the class. I was told
ahead of time by two different teachers (not the classroom teacher) that only a
couple of students would be able to understand and follow what I would be
presenting. When the class period ended, I and the classroom teacher believed
that at least 19 of the 22 students had fully and excitedly participated and
absorbed the entire material. The three other students' eyes were glazed over
from the very beginning, and they did not seem to be involved in the class at
all. The students' answers below are in capital letters.
The experiment was to see whether I could teach these
students binary arithmetic (arithmetic using only two numbers, 0 and 1) only
by asking them questions. None of them had been introduced to binary
arithmetic before. Though the ostensible subject matter was binary arithmetic,
my primary interest was to give a demonstration to the teacher of the power and
benefit of the Socratic method where it is applicable. That is my interest here
as well. I chose binary arithmetic as the vehicle for that because it is
something very difficult for children, or anyone, to understand when it is
taught normally; and I believe that a demonstration of a method that can teach
such a difficult subject easily to children and also capture their enthusiasm
about that subject is a very convincing demonstration of the value of the
method. (As you will see below, understanding binary arithmetic is also about
understanding "place-value" in general. For those who seek a much more detailed
explanation about place-value, visit the long paper on
The Concept and Teaching of
Place-Value.) This was to be the Socratic method in what I consider its
purest form, where questions (and only questions) are used to arouse curiosity
and at the same time serve as a logical, incremental, step-wise guide that
enables students to figure out about a complex topic or issue with their own
thinking and insights. In a less pure form, which is normally the way it occurs,
students tend to get stuck at some point and need a teacher's explanation of
some aspect, or the teacher gets stuck and cannot figure out a question that
will get the kind of answer or point desired, or it just becomes more efficient
to "tell" what you want to get across. If "telling" does occur, hopefully by
that time, the students have been aroused by the questions to a state of curious
receptivity to absorb an explanation that might otherwise have been meaningless
to them. Many of the questions are decided before the class; but depending on
what answers are given, some questions have to be thought up extemporaneously.
Sometimes this is very difficult to do, depending on how far from what is
anticipated or expected some of the students' answers are. This particular
attempt went better than my best possible expectation, and I had much higher
expectations than any of the teachers I discussed it with prior to doing it.
I had one prior relationship with this class. About two weeks earlier
I had shown three of the third grade classes together how to throw a boomerang
and had let each student try it once. They had really enjoyed that. One girl and
one boy from the 65 to 70 students had each actually caught their returning
boomerang on their throws. That seemed to add to everyone's enjoyment. I had
therefore already established a certain rapport with the students, rapport being
something that I feel is important for getting them to comfortably and
enthusiastically participate in an intellectually uninhibited manner in class
and without being psychologically paralyzed by fear of "messing up".
When I got to the classroom for the binary math experiment, students
were giving reports on famous people and were dressed up like the people they
were describing. The student I came in on was reporting on John Glenn, but he
had not mentioned the dramatic and scary problem of that first American trip in
orbit. I asked whether anyone knew what really scary thing had happened on John
Glenn's flight, and whether they knew what the flight was. Many said a trip to
the moon, one thought Mars. I told them it was the first full earth orbit in
space for an American. Then someone remembered hearing about something wrong
with the heat shield, but didn't remember what. By now they were listening
intently. I explained about how a light had come on that indicated the heat
shield was loose or defective and that if so, Glenn would be incinerated coming
back to earth. But he could not stay up there alive forever and they had nothing
to send up to get him with. The engineers finally determined, or hoped, the
problem was not with the heat shield, but with the warning light. They thought
it was what was defective. Glenn came down. The shield was ok; it had been just
the light. They thought that was neat.
"But what I am really here for today is to try an experiment with
you. I am the subject of the experiment, not you. I want to see whether I can
teach you a whole new kind of arithmetic only by asking you questions. I won't
be allowed to tell you anything about it, just ask you things. When you think
you know an answer, just call it out. You won't need to raise your hands and
wait for me to call on you; that takes too long." [This took them a while to
adapt to. They kept raising their hands; though after a while they simply called
out the answers while raising their hands.] Here we go.
1) "How many is this?" [I held up ten fingers.]
2) "Who can write that on the board?" [virtually all hands up; I toss the
chalk to one kid and indicate for her to come up and do it]. She writes
3) Who can write ten another way? [They hesitate; then some hands go up. I
toss the chalk to another kid.]
4) Another way?
5) Another way?
2 x 5 [inspired by the last answer]
6) That's very good, but there are lots of things that equal ten,
right? [student nods agreement], so I'd rather not get into combinations that
equal ten, but just things that represent or sort of mean ten. That will
keep us from having a whole bunch of the same kind of thing. Anybody else?
7) One more?
8) [I point to the word "ten"]. What is this?
THE WORD TEN
9) What are written words made up of?
10) How many letters are there in the English alphabet?
11) How many words can you make out of them?
12) [Pointing to the number "10"] What is this way of writing numbers made of?
13) How many numerals are there?
NINE / TEN
14) Which, nine or ten?
15) Starting with zero, what are they? [They call out; I write them in a column on the board.]
16) How many numbers can you make out of these numerals?
MEGA-ZILLIONS, INFINITE, LOTS
17) How come we have ten numerals? Could it be because we have 10 fingers?
18) What if we were aliens with only two fingers? How many numerals might we have?
19) How many numbers could we write out of 2 numerals?
NOT MANY /
[one kid:] THERE WOULD BE A PROBLEM
20) What problem?
THEY COULDN'T DO THIS [he holds
up seven fingers]
21) [This strikes me as a very quick, intelligent insight I did not expect
so suddenly.] But how can you do fifty five?
[he flashes five fingers for
an instant and then flashes them again]
22) How does someone know that is not ten? [I am not really happy with my
question here but I don't want to get side-tracked by how to logically try to
sign numbers without an established convention. I like that he sees the problem
and has announced it, though he did it with fingers instead of words, which
complicates the issue in a way. When he ponders my question for a second with a
"hmmm", I think he sees the problem and I move on,
23) Well, let's see what they could do. Here's the numerals you wrote down
[pointing to the column from 0 to 9] for our ten numerals. If we only have two
numerals and do it like this, what numerals would we have?
ZERO AND ONE
24) Okay, what can we write as we count? [I write as they call out
25) Is that it? What do we do on this planet when we run out of numerals?
WRITE DOWN "ONE, ZERO"
26) Why do you write it that way?
[almost in unison] I DON'T KNOW; THAT'S JUST THE WAY YOU WRITE "TEN"
27) You have more than one numeral here and you have already used these
numerals; how can you use them again?
WE PUT THE 1 IN A DIFFERENT
28) What do you call that column you put it in?
29) Why do you call it that?
30) Well, what does this 1 and this 0 mean when written in these columns?
1 TEN AND NO ONES
31) But why is this a ten? Why is this [pointing] the ten's column?
DON'T KNOW; IT JUST IS!
32) I'll bet there's a reason. What was the first number that needed a new
column for you to be able to write it?
33) Could that be why it is called the ten's column?! What is the first
number that needs the next column?
34) And what column is that?
35) After you write 19, what do you have to change to write down 20?
THE 9 TO A 0 AND THE 1 TO A 2
36) Meaning then 2 tens and no ones, right, because 2 tens are ___?
37) First number that needs a fourth column?
38) What column is that?
39) Okay, let's go back to our two-fingered aliens' arithmetic. We have only the numerals 0 and 1.
What would we do to write "two" if we did the same thing we do over here
[tens] to write the next number after you run out of numerals?
START ANOTHER COLUMN
40) What should we call it?
41) Right! Because the first number we need it for is ___?
42) So what do we put in the two's column? How many two's are there in two?
ONE
43) And how many one's extra?
ZERO
44) So then two looks like this: [pointing to "10"], right?
RIGHT, BUT THAT SURE LOOKS LIKE
45) No, only to you guys, because you were taught it wrong [grin] -- to
the aliens it is two. They learn it that way in pre-school just as you learn to
call one, zero [pointing to "10"] "ten". But it's not really ten, right? It's
two -- if you only had two fingers. How long does it take a little kid in
pre-school to learn to read numbers, especially numbers with more than one
numeral or column?
TAKES A WHILE
46) Is there anything obvious about calling "one, zero" "ten" or do you
have to be taught to call it "ten" instead of "one, zero"?
HAVE TO BE TAUGHT IT
47) Ok, I'm teaching you different. What is "1, 0" here?
48) Hard to see it that way, though, right?
49) Try to get used to it; the alien children do. What number comes next?
50) How do we write it with our numerals?
We need one
"TWO" and a "ONE"
[I write down 11 for them] So we have three.
51) Uh oh, now we're out of numerals again. How do we get to four?
START A NEW COLUMN!
52) Call it what?
THE FOUR'S COLUMN
53) Call it out to me; what do I write?
ONE, ZERO, ZERO
[I write "100" under the other numbers]
54) Five?
ONE, ZERO, ONE
[I write "101" under the others]
55) Now let's add one more to it to get six. But be careful. [I point to
the 1 in the one's column and ask] If we add 1 to 1, we can't write "2", we can
only write zero in this column, so we need to carry ____?
56) And we get?
ONE, ONE, ZERO
57) Why is this six? What is it made of? [I point to columns, which I had
been labeling at the top with the word "one", "two", and "four" as they had
called out the names of them.]
a "FOUR" and a "TWO"
58) Which is ____?
59) Next? Seven?
ONE, ONE, ONE
60) Out of numerals again. Eight?
NEW COLUMN; ONE, ZERO, ZERO, ZERO
[I write "1000" under the others]
[We do a couple more and I continue to write them one under the other with the word next to each number, so we have:]
one 1
two 10
three 11
four 100
five 101
six 110
seven 111
eight 1000
nine 1001
ten 1010
61) So now, how many numbers do you think you can write with a one and a zero?
ALL OF THEM
62) Now, let's look at something. [Point to Roman numeral X that one kid
had written on the board.] Could you easily multiply Roman numerals? Like MCXVII
63) Let's see what happens if we try to multiply in alien here. Let's try
two times three and you multiply just like you do in tens [in the "traditional"
American style of writing out multiplication].
They call out the "one, zero" for just below the line, and "one, zero,
zero" for just below that, and so I write:
   10
 x 11
 ----
   10
  100
 ----
  110
64) Ok, look on the list of numbers, up here [pointing to the "chart" where I have written down the numbers in numeral and word form] what is 110?
SIX
65) And how much is two times three in real life?
SIX
66) So alien arithmetic works just as well as your arithmetic, huh?
LOOKS LIKE IT
67) Even easier, right, because you just have to multiply or add zeroes
and ones, which is easy, right?
68) There, now you know how to do it. Of course, until you get used to
reading numbers this way, you need your chart, because it is hard to read
something like "10011001011" in alien, right?
69) So who uses this stuff?
70) No, I think you guys use this stuff every day. When do you use it?
NO WE DON'T
71) Yes you do. Any ideas where?
72) [I walk over to the light switch and, pointing to it, ask:] What is this?
73) [I flip it off and on a few times.] How many positions does it have?
74) What could you call these positions?
ON AND OFF / UP AND DOWN
75) If you were going to give them numbers what would you call them?
ONE AND TWO/
OH!! ZERO AND ONE!
[other kids then:] OH,
76) You got that right. I am going to end my experiment part here and just
tell you this last part.
Computers and calculators have lots of circuits through essentially on/off
switches, where one way represents 0 and the other way, 1. Electricity can go
through these switches really fast and flip them on or off, depending on the
calculation you are doing. Then, at the end, it translates the strings of zeroes
and ones back into numbers or letters, so we humans, who can't read long strings
of zeroes and ones very well can know what the answers are.
[at this point one of the kids in the back yelled out,]
I don't know exactly how these circuits work; so if your teacher ever gets
some electronics engineer to come into talk to you, I want you to ask him what
kind of circuit makes multiplication or alphabetical order, and so on. And I
want you to invite me to sit in on the class with you.
Now, I have to tell you guys, I think you were leading me on about not
knowing any of this stuff. You knew it all before we started, because I didn't
tell you anything about this -- which by the way is called "binary arithmetic",
"bi" meaning two like in "bicycle". I just asked you questions and you knew all
the answers. You've studied this before, haven't you?
NO, WE HAVEN'T. REALLY.
Then how did you do this? You must be amazing. By the way, some of you may
want to try it with other sets of numerals. You might try three numerals 0, 1,
and 2. Or five numerals. Or you might even try twelve 0, 1, 2, 3, 4, 5, 6, 7, 8,
9, ~, and ^ -- see, you have to make up two new numerals to do twelve, because
we are used to only ten. Then you can check your system by doing multiplication
or addition, etc. Good luck.
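[Editorial note: for readers who would like to try the suggested variations, here is a small Python sketch -- an editorial addition, not part of the original lesson -- that builds the same kind of chart for any base from 2 to 12, using "~" and "^" as the two made-up numerals for base twelve:]

```python
# Illustrative sketch: convert numbers into an arbitrary base (2 to 12).
# "~" stands for ten and "^" for eleven, as suggested in the text above.
DIGITS = "0123456789~^"

def to_base(n, base):
    """Return the representation of the non-negative integer n in 'base'."""
    if n == 0:
        return "0"
    out = []
    while n > 0:
        out.append(DIGITS[n % base])
        n //= base
    return "".join(reversed(out))

# The chart built on the board: one to ten in binary.
for n in range(1, 11):
    print(f"{n:2d} -> {to_base(n, 2)}")

# The multiplication done in class: two times three, i.e. 10 x 11 = 110.
print(to_base(2, 2), "x", to_base(3, 2), "=", to_base(2 * 3, 2))

# A base-twelve example with the made-up numerals.
print("twenty-three in base twelve is", to_base(23, 12))
```

[Running it reproduces the binary chart built on the board and confirms that 10 x 11 = 110 in "alien" arithmetic.]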
After the part about John Glenn, the whole class took only 25 minutes.
Their teacher told me later that after I left the children talked about it
until it was time to go home.
. . . . . . . . . . . . . .
My Views About This Whole Episode
Students do not get bored or lose concentration if they are
actively participating. Almost all of these children participated the whole
time; often calling out in unison or one after another. If necessary, I could
have asked if anyone thought some answer might be wrong, or if anyone agreed
with a particular answer. You get extra mileage out of a given question that
way. I did not have to do that here. Their answers were almost all immediate and
very good. If necessary, you can also call on particular students; if they don't
know, other students will bail them out. Calling on someone in a non-threatening
way tends to activate others who might otherwise remain silent. That was not a
problem with these kids. Remember, this was not a "gifted" class. It was a
normal suburban third grade of whom two teachers had said only a few students
would be able to understand the ideas.
The topic was "twos", but I think they learned just as much about
the "tens" they had been using and not really understanding.
This method takes a lot of energy and concentration when you are
doing it fast, the way I like to do it when beginning a new topic. A teacher
cannot do this for every topic or all day long, at least not the first time one
teaches particular topics this way. It takes a lot of preparation, and a lot of
thought. When it goes well, as this did, it is so exciting for both the students
and the teacher that it is difficult to stay at that peak and pace or to change
gears or topics. When it does not go as well, it is very taxing trying to figure
out what you need to modify or what you need to say. I practiced this particular
sequence of questioning a little bit one time with a first grade teacher. I
found a flaw in my sequence of questions. I had to figure out how to correct
that. I had time to prepare this particular lesson; I am not a teacher but a
volunteer; and I am not a mathematician. I came to the school just to do this
topic that one period.
I did this fast. I personally like to do new topics fast
originally and then re-visit them periodically at a more leisurely pace as you
get to other ideas or circumstances that apply to, or make use of, them. As you
re-visit, you fine tune.
The chief benefits of this method are that it excites students'
curiosity and arouses their thinking, rather than stifling it. It also makes
teaching more interesting, because most of the time, you learn more from the
students -- or by what they make you think of -- than what you knew going into
the class. Each group of students is just enough different, that it makes it
stimulating. It is a very efficient teaching method, because the first time
through tends to cover the topic very thoroughly, in terms of their
understanding it. It is more efficient for their learning than lecturing to them
is, though, of course, a teacher can lecture in less time.
It gives constant feed-back and thus allows monitoring of the
students' understanding as you go. So you know what problems and
misunderstandings or lack of understandings you need to address as you are
presenting the material. You do not need to wait to give a quiz or exam; the
whole thing is one big quiz as you go, though a quiz whose point is teaching,
not grading. Though, to repeat, this is teaching by stimulating students'
thinking in certain focused areas, in order to draw ideas out of them; it is not
"teaching" by pushing ideas into students that they may or may not be able to
absorb or assimilate. Further, by quizzing and monitoring their understanding as
you go along, you have the time and opportunity to correct misunderstandings or
someone's being lost at the immediate time, not at the end of six weeks when it
is usually too late to try to "go back" over the material. And in some cases
their ideas will jump ahead to new material so that you can meaningfully talk
about some of it "out of (your!) order" (but in an order relevant to them). Or
you can tell them you will get to exactly that in a little while, and will
answer their question then. Or suggest they might want to think about it between
now and then to see whether they can figure it out for themselves first. There
are all kinds of options, but at least you know the material is "live" for them,
which it is not always when you are lecturing or just telling them things or
they are passively and dutifully reading or doing worksheets or listening to lectures.
If you can get the right questions in the right sequence, kids in
the whole intellectual spectrum in a normal class can go at about the same pace
without being bored; and they can "feed off" each others' answers. Gifted kids
may have additional insights they may or may not share at the time, but will
tend to reflect on later. This brings up the issue of teacher expectations. From
what I have read about the supposed sin of tracking, one of the main complaints
is that the students who are not in the "top" group have lower expectations of
themselves and they get teachers who expect little of them, and who teach them
in boring ways because of it. So tracking becomes a self-fulfilling prophecy
about a kid's educability; it becomes dooming. That is a problem, not with
tracking as such, but with teacher expectations of students (and their ability
to teach). These kids were not tracked, and yet they would never have been
exposed to anything like this by most of the teachers in that school, because
most felt the way the two did whose expectations I reported. Most felt the kids
would not be capable enough and certainly not in the afternoon, on a Friday near
the end of the school year yet. One of the problems with not tracking is that
many teachers have almost as low expectations of, and plans for, students
grouped heterogeneously as they do with non-high-end tracked students. The point
is to try to stimulate and challenge all students as much as possible. The
Socratic method is an excellent way to do that. It works for any topics or any
parts of topics that have any logical natures at all. It does not work for
unrelated facts or for explaining conventions, such as the sounds of letters or
the capitals of states whose capitals are more the result of historical accident
than logical selection.
Of course, you will notice these questions are very specific, and
as logically leading as possible. That is part of the point of the method. Not
just any question will do, particularly not broad, very open ended questions,
like "What is arithmetic?" or "How would you design an arithmetic with only two
numbers?" (or if you are trying to teach them about why tall trees do not fall
over when the wind blows "what is a tree?"). Students have nothing in particular
to focus on when you ask such questions, and few come up with any sort of answer.
And it forces the teacher to think about the logic of a topic,
and how to make it most easily assimilated. In tandem with that, the teacher has
to try to understand at what level the students are, and what prior knowledge
they may have that will help them assimilate what the teacher wants them to
learn. It emphasizes student understanding, rather than teacher presentation;
student intake, interpretation, and "construction", rather than teacher output.
And the point of education is that the students are helped most efficiently to
learn by a teacher, not that a teacher make the finest apparent presentation,
regardless of what students might be learning, or not learning. I was fortunate
in this class that students already understood the difference between numbers
and numerals, or I would have had to teach that by questions also. And it was an
added help that they had already learned Roman numerals. It was also most
fortunate that these students did not take very many, if any, wrong turns or
have any firmly entrenched erroneous ideas that would have taken much effort to
show to be mistaken.
I took a shortcut in question 15 although I did not have to; but
I did it because I thought their answers to questions 13 and 14 showed an
understanding that "0" was a numeral, and I didn't want to spend time in this
particular lesson trying to get them to see where "0" best fit with regard to
order. If they had said there were only nine numerals and said they were 1-9,
then you could ask how they could write ten numerically using only those nine,
and they would quickly come to see they needed to add "0" to their list of numerals.
These are the four critical points about the questions: 1) they
must be interesting or intriguing to the students; they must lead by 2)
incremental and 3) logical steps (from the students' prior knowledge or
understanding) in order to be readily answered and, at some point, seen to be
evidence toward a conclusion, not just individual, isolated points; and 4) they
must be designed to get the student to see particular points. You are
essentially trying to get students to use their own logic and therefore see, by
their own reflections on your questions, either the good new ideas or the
obviously erroneous ideas that are the consequences of their established ideas,
knowledge, or beliefs. Therefore you have to know or to be able to find out what
the students' ideas and beliefs are. You cannot ask just any question or start just anywhere.
It is crucial to understand the difference between "logically"
leading questions and "psychologically" leading questions. Logically leading
questions require understanding of the concepts and principles involved in order
to be answered correctly; psychologically leading questions can be answered by
students' keying in on clues other than the logic of the content. Question 39
above is psychologically leading, since I did not want to cover in this
lesson the concept of value-representation but just wanted to use
"columnar-place" value, so I psychologically led them into saying "Start another
column" rather than getting them to see the reasoning behind columnar-place as
merely one form of value representation. I wanted them to see how to use
columnar-place value logically without trying here to get them to totally
understand its logic. (A common form of value-representation that is not
"place" value is color value in poker chips, where colors determine the value of
the individual chips in ways similar to how columnar place does it in writing.
For example if white chips are worth "one" unit and blue chips are worth "ten"
units, 4 blue chips and 3 white chips is the same value as a "4" written in the
"tens" column and a "3" written in the "ones" column, for almost the same reasons.)
For the Socratic method to work as a teaching tool and not just as
a magic trick to get kids to give right answers with no real understanding, it
is crucial that the important questions in the sequence must be logically
leading rather than psychologically leading. There is no magic formula for doing
this, but one of the tests for determining whether you have likely done it is to
try to see whether leaving out some key steps still allows people to give
correct answers to things they are not likely to really understand. Further, in
the case of binary numbers, I found that when you used this sequence of
questions with impatient or math-phobic adults who didn't want to have to think
but just wanted you to "get to the point", they could not correctly answer very
far into even the above sequence. That leads me to believe that answering most
of these questions correctly requires understanding of the topic rather than
picking up some "external" sorts of clues in order to just guess correctly.
Plus, generally when one uses the Socratic method, it tends to become pretty
clear when people get lost and are either mistaken or just guessing. Their
demeanor tends to change when they are guessing, and they answer with a
questioning tone in their voice. Further, when they are logically understanding
as they go, they tend to say out loud insights they have or reasons they have
for their answers. When they are just guessing, they tend to just give short
answers with almost no comment or enthusiasm. They don't tend to want to sustain the discussion.
Finally, two of the interesting, perhaps side, benefits of using
the Socratic method are that it gives the students a chance to experience the
attendant joy and excitement of discovering (often complex) ideas on their own.
And it gives teachers a chance to learn how much more inventive and bright a
great many more students are than they usually appear to be when they are primarily passive listeners.
[Some additional comments about the Socratic method of teaching are in a letter, "Using the Socratic Method".]
[For a more general approach to teaching, of which the Socratic Method is
just one specific
form, see "Teaching
Effectively: Helping Students Absorb and Assimilate Material"] | http://www.maxson-nc.us/socrates/default.htm | 13 |
2.1.1. The visibility function
"An interferometer is a device for measuring the spatial coherence function" (Clark 1999). This dry statement pretty much captures what interferometry is all about, and the rest of this chapter will try to explain what lies beneath it, how the measured spatial coherence function is turned into images and how properties of the interferometer affect the images. We will mostly abstain from equations here and give a written description instead, however, some equations are inevitable. The issues explained here have been covered in length in the literature, in particular in the first two chapters of Taylor et al. (1999) and in great detail in Thompson et al. (2001).
The basic idea of an interferometer is that the spatial intensity distribution of electromagnetic radiation produced by an astronomical object at a particular frequency, I, can be reconstructed from the spatial coherence function measured at two points with the interferometer elements, V(r1, r2).
Let the (monochromatic) electromagnetic field arriving at the observer's location be denoted by E(r). It is the sum of all waves emitted by celestial bodies at that particular frequency. A property of this field is the correlation function at two points,
V(r1, r2) = <E(r1) E*(r2)>,     (1)
where the superscript * denotes the complex conjugate. V(r1, r2) describes how similar the electromagnetic field measured at two locations is. Think of two corks thrown into a lake in windy weather. If the corks are very close together, they will move up and down almost synchronously; however as their separation increases their motions will become less and less similar, until they move completely independently when several meters apart.
Radiation from the sky is largely spatially incoherent, except over very small angles on the sky, and these assumptions (with a few more) then lead to the spatial coherence function
V(r1, r2) ≈ ∫ I(s) exp(-2πi ν s · (r1 - r2) / c) dΩ.     (2)
Here s is the unit vector pointing towards the source and dΩ is the surface element of the celestial sphere. The interesting point of this equation is that it is a function only of the separation and relative orientation of the two locations. An interferometer in Europe will measure the same thing as one in Australia, provided the separation and orientation of the interferometer elements are the same. The relevant parameters here are the coordinates of the antennas when projected onto a plane perpendicular to the line of sight (Figure 1). This plane has the axes u and v, hence it is called the (u, v) plane. Now let us further introduce units of wavelengths to measure positions in the (u, v) plane. One then gets
V(u, v) = ∫∫ I(l, m) exp(-2πi (ul + vm)) dl dm.     (3)
This equation is a Fourier transform between the spatial coherence function and the (modified) intensity distribution in the sky, I, and can be inverted to obtain I. The coordinates u and v are the components of a vector pointing from the origin of the (u, v) plane to a point in the plane, and describe the projected separation and orientation of the elements, measured in wavelengths. The coordinates l and m are direction cosines towards the astronomical source (or a part thereof). In radio astronomy, V is called the visibility function, but a factor, A(l, m), is commonly included to describe the sensitivity of the interferometer elements as a function of angle on the sky (the antenna response).
The visibility function is the quantity all interferometers measure and which is the input to all further processing by the observer.
Figure 1. Sketch of how a visibility measurement is obtained from a VLBI baseline. The source is observed in the direction of the line-of-sight unit vector, s, and the sky coordinates are the direction cosines l and m. The projection of the station coordinates onto the (u, v) plane, which is perpendicular to s, yields the (u, v) coordinates of the antennas, measured in units of the observing wavelength. The emission from the source is delayed at one antenna by the geometric delay τ = b · s / c, where b is the baseline vector. At each station, the signals are intercepted with antennas, amplified, and then mixed down to a low frequency where they are further amplified and sampled. The essential difference between a connected-element interferometer and a VLBI array is that each station has an independent local oscillator, which provides the frequency normal for the conversion from the observed frequency to the recorded frequency. The sampled signals are written to disk or tape and shipped to the correlator. At the correlator, the signals are played back, the geometric delay is compensated for, and the signals are correlated and Fourier transformed (in the case of an XF correlator).
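As a concrete illustration of equation (3), the following short Python sketch (not from the original text; the source positions and flux densities are invented example values) predicts the visibility that a single baseline would measure for a sky model consisting of two point sources, for which the integral reduces to a sum:

```python
import numpy as np

# Invented sky model: (flux density in Jy, l, m) with l, m in radians.
sources = [
    (1.0, 0.0, 0.0),         # 1 Jy source at the phase centre
    (0.3, 2.0e-8, -1.0e-8),  # 0.3 Jy companion offset by a few milliarcseconds
]

def visibility(u, v):
    """Evaluate equation (3) for point sources:
    V = sum S * exp(-2*pi*i*(u*l + v*m)), with u, v in wavelengths."""
    return sum(S * np.exp(-2j * np.pi * (u * l + v * m)) for S, l, m in sources)

# Example: a 5000 km projected baseline observed at 18 cm wavelength.
u, v = 5.0e6 / 0.18, 0.0
V = visibility(u, v)
print(f"|V| = {abs(V):.3f} Jy, phase = {np.degrees(np.angle(V)):.1f} deg")
```

Varying u and v in this toy model shows how a source offset from the phase centre imprints a phase gradient on the visibilities, which is the information an interferometer uses to locate and resolve structure.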
2.1.2. The (u, v) plane
We introduced a coordinate system such that the line connecting the interferometer elements, the baseline, is perpendicular to the direction towards the source, and this plane is called the (u, v) plane for obvious reasons. However, the baseline in the (u, v) plane is only a projection of the vector connecting the physical elements. In general, the visibility function will not be the same at different locations in the (u, v) plane, an effect arising from structure in the astronomical source. It is therefore desirable to measure it at as many points in the (u, v) plane as possible. Fortunately, the rotation of the earth continuously changes the relative orientation of the interferometer elements with respect to the source, so that the point given by (u, v) slowly rotates through the plane, and so an interferometer which is fixed on the ground samples various aspects of the astronomical source as the observation progresses. Almost all contemporary radio interferometers work in this way, and the technique is then called aperture synthesis. Furthermore, one can change the observing frequency to move (radially) to a different point in the (u, v) plane. This is illustrated in Figure 2. Note that the visibility measured at (-u, -v) is the complex conjugate of that measured at (u, v), and therefore does not add information. Hence sometimes in plots of (u, v) coverage such as Figure 2, one also plots those points mirrored across the origin. A consequence of this relation is that after 12 h the aperture synthesis with a given array and frequency is complete.
Figure 2. Points in the (u, v) plane sampled during a typical VLBI observation at a frequency of 1.7 GHz (λ = 0.18 m). The axis units are kilolambda, and the maximum projected baseline length corresponds to 60000 kλ = 10800 km. The source was observed four times for 25 min each, over a time range of 7.5 h. For each visibility only one point has been plotted, i.e., the locations of the complex conjugate visibilities are not shown. Top left: The (u, v) track of a single baseline. It describes part of an ellipse the centre of which does not generally coincide with the origin (this is only the case for an east-west interferometer). Top right: The (u, v) track of all interferometer pairs of all participating antennas. Bottom left: The same as the top right panel, but the (u, v) points of all four frequency bands used in the observation have been plotted, which broadened the tracks. Bottom right: This plot displays a magnified portion of the previous diagram. It shows that at any one time the four frequency bands sample different (u, v) points which lie on concentric ellipses. As the Earth rotates, the (u, v) points progress tangentially.
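The elliptical tracks in Figure 2 follow directly from the geometry. The sketch below is illustrative only (the baseline vector, declination and wavelength are made-up values); it uses the standard transformation between an earth-fixed baseline and the projected (u, v) coordinates to trace one baseline over six hours of earth rotation:

```python
import numpy as np

wavelength = 0.18                          # metres, i.e. 1.7 GHz
Lx, Ly, Lz = 4.0e6, 2.5e6, 3.0e6           # invented baseline vector (metres)
dec = np.radians(40.0)                     # invented source declination
H = np.radians(np.linspace(-90.0, 90.0, 181))   # hour angle from -6 h to +6 h

# Standard relations between an earth-fixed baseline and the projected
# (u, v) coordinates for hour angle H and declination dec
# (see e.g. Thompson et al. 2001):
u = (np.sin(H) * Lx + np.cos(H) * Ly) / wavelength
v = (-np.sin(dec) * np.cos(H) * Lx
     + np.sin(dec) * np.sin(H) * Ly
     + np.cos(dec) * Lz) / wavelength

# The track is part of an ellipse; the conjugate points (-u, -v) mirror it.
print("maximum projected baseline: %.0f kilolambda" % (np.hypot(u, v).max() / 1e3))
```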
2.1.3. Image reconstruction
After a typical VLBI observation of one source, the (u, v) coverage will not look too different from the one shown in Figure 2. These are all the data needed to form an image by inverting Equation 3. However, the (u, v) plane has been sampled only at relatively few points, and the purely Fourier-transformed image (the "dirty image") will look poor. This is because the true brightness distribution has been convolved with the instrument's point-spread function (PSF). In the case of aperture synthesis, the PSF B(l, m) is the Fourier transform of the (u, v) coverage:
B(l, m) = ∫∫ S(u, v) exp(2πi (ul + vm)) du dv.     (4)
Here S(u, v) is unity where measurements have been made, and zero elsewhere. Because the (u, v) coverage is mostly unsampled, B(l, m) has very pronounced artefacts ("sidelobes").
Figure 3. The PSF B(l, m) of the (u, v) coverage shown in the bottom left panel of Figure 2. Contours are drawn at 5%, 15%, 25%, ... of the peak response in the image centre. Patches where the response is higher than 5% are scattered all over the image, sometimes reaching 15%. In the central region, the response outside the central peak reaches more than 25%. Without further processing, the achievable dynamic range with this sort of PSF is of the order of a few tens.
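Equation (4) is easy to experiment with numerically. The following toy example (the sampling function is synthetic and much simpler than real VLBI coverage) grids S(u, v) onto a matrix and obtains the dirty beam as its Fourier transform, reproducing the strong sidelobes discussed above:

```python
import numpy as np

N = 256
S = np.zeros((N, N))                       # sampling function S(u, v)

# Synthetic coverage: points on a few partial ellipses, mimicking the
# tracks produced by earth rotation, plus their conjugate points.
rng = np.random.default_rng(1)
for a, b in [(40, 80), (25, 60), (60, 100)]:
    phi = rng.uniform(0.0, np.pi, 300)
    uu = (a * np.cos(phi)).astype(int) + N // 2
    vv = (b * np.sin(phi)).astype(int) + N // 2
    S[vv, uu] = 1.0
    S[N - vv, N - uu] = 1.0                # Hermitian (conjugate) points

# Dirty beam: Fourier transform of S(u, v), normalised to a peak of one.
B = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(S))).real
B /= B.max()

# Sidelobe level outside a small box around the central peak.
c = N // 2
mask = np.ones((N, N), dtype=bool)
mask[c - 5:c + 6, c - 5:c + 6] = False
print("peak sidelobe: %.2f of the main peak" % np.abs(B[mask]).max())
```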
To remove the sidelobes requires interpolating the visibilities into the unsampled regions of the (u, v) plane, and the standard method in radio astronomy to do that is the "CLEAN" algorithm.
2.1.4. The CLEAN algorithm
The CLEAN algorithm (Högbom 1974) is a non-linear, iterative mechanism to rid interferometry images of artefacts caused by insufficient (u, v) coverage. Although a few varieties exist, the basic structure of all CLEAN implementations is the same:
1. Find the location and strength of the brightest point in the residual image (which initially is the dirty image itself).
2. Record a fraction (the "loop gain", typically of the order of 10 per cent) of that flux density and its position as a delta component of the clean model.
3. Subtract the dirty beam, shifted to that position and scaled to the same fraction of the peak flux density, from the residual image.
4. Go back to step 1, unless a stopping criterion (a maximum number of components or a flux density threshold in the residual image) has been reached.
CLEAN can be stopped when the sidelobes of the sources in the residual image are much lower than the image noise. A corollary of this is that in the case of very low signal-to-noise ratios CLEAN will essentially have no effect on the image quality, because the sidelobes are below the noise limit introduced by the receivers in the interferometer elements. The "clean image" is iteratively built up out of delta components, and the final image is formed by convolving it with the "clean beam". The clean beam is a two-dimensional Gaussian which is commonly obtained by fitting a Gaussian to the centre of the dirty beam image. After the convolution the residual image is added.
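The iteration described above can be written down in a few lines. The following is a minimal sketch of Högbom CLEAN (loop gain, threshold and iteration limit are arbitrary example values; production implementations add clean windows, major/minor cycles and many refinements):

```python
import numpy as np

def hogbom_clean(dirty_image, dirty_beam, gain=0.1, threshold=1e-3, niter=1000):
    """Minimal Hogbom CLEAN; returns the delta-component model and the residual.

    dirty_beam must have the same shape as dirty_image and a peak of 1, so
    that it can be shifted underneath any pixel of the image."""
    residual = dirty_image.copy()
    model = np.zeros_like(dirty_image)
    cy, cx = np.unravel_index(np.argmax(dirty_beam), dirty_beam.shape)

    for _ in range(niter):
        # step 1: locate the strongest point in the residual image
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[py, px]
        if np.abs(peak) < threshold:       # step 4: stopping criterion
            break
        # step 2: record a fraction (the loop gain) of the peak as a component
        model[py, px] += gain * peak
        # step 3: subtract the shifted, scaled dirty beam from the residual
        # (np.roll wraps around the edges; real implementations pad instead)
        shifted = np.roll(np.roll(dirty_beam, py - cy, axis=0), px - cx, axis=1)
        residual -= gain * peak * shifted
    return model, residual
```

The clean image would then be obtained by convolving the returned model with a Gaussian fitted to the central peak of the dirty beam and adding the residual image, as described above.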
Figure 4. Illustration of the effects of the CLEAN algorithm. Left panel: The Fourier transform of the visibilities already used for illustration in Figures 2 and 3. The image is dominated by artefacts arising from the PSF of the interferometer array. The dynamic range of the image (the image peak divided by the rms in an empty region) is 25. Right panel: The "clean image", made by convolving the model components with the clean beam, which in this case has a size of 3.0 × 4.3 mas. The dynamic range is 144. The contours in both panels start at 180 µJy and increase by factors of two.
The way in which images are formed in radio interferometry may seem difficult and laborious (and it is), but it also adds great flexibility. Because the image is constructed from typically thousands of interferometer measurements one can choose to ignore measurements from, e.g., the longest baselines to emphasize sensitivity to extended structure. Alternatively, one can choose to weight down or ignore short spacings to increase resolution. Or one can convolve the clean model with a Gaussian which is much smaller than the clean beam, to make a clean image with emphasis on fine detail ("superresolution").
2.1.5. Generating a visibility measurement
The previous sections have dealt with the fundamentals of interferometry and image reconstruction. In this section we give a brief overview of the more technical aspects of VLBI observations and of the signal processing involved in generating visibility measurements.
It may have become clear by now that an interferometer array really is only a collection of two-element interferometers, and only at the imaging stage is the information gathered by the telescopes combined. In an array of N antennas, the number of pairs which can be formed is N(N - 1) / 2, and so an array of 10 antennas can measure the visibility function at 45 locations in the (u, v) plane simultaneously. Hence, technically, obtaining visibility measurements with a global VLBI array consisting of 16 antennas is no more complex than doing it with a two-element interferometer - it is just logistically more challenging.
Because VLBI observations involve telescopes at widely separated locations (which can belong to different institutions), VLBI observations are fully automated. The entire "observing run" (a typical VLBI observation lasts around 12 h), including setting up the electronics, driving the antenna to the desired coordinates and recording the raw antenna data on tape or disk, is under computer control and requires no interaction by the observer. VLBI observations generally are supervised by telescope operators, not astronomers.
It should be obvious that each antenna needs to point towards the direction of the source to be observed, but as VLBI arrays are typically spread over thousands of kilometres, a source which just rises at one station can be in transit at another. Then the electronics need to be set up, which involves a very critical step: tuning the local oscillator. In radio astronomy, the signal received by the antenna is amplified many times (in total the signal is amplified by factors of the order of 10^8 to 10^10), and to avoid receiver instabilities the signal is "mixed down" to much lower frequencies after the first amplification. The mixing involves injecting a locally generated signal (the local oscillator, or LO, signal) into the signal path with a frequency close to the observing frequency. This yields the signal at a frequency which is the difference between the observing frequency and the LO frequency (see, e.g., Rohlfs 1986 for more details). The LO frequency must be extremely stable (1 part in 10^15 per day or better) and accurately known (to the sub-Hz level) to ensure that all antennas observe at the same frequency. Interferometers with connected elements such as the VLA or ATCA only need to generate a single LO, the output of which can be sent to the individual stations, and any variation in its frequency will affect all stations equally. This is not possible in VLBI, and so each antenna is equipped with a maser (mostly hydrogen masers) which provides a frequency standard to which the LO is phase-locked. After downconversion, the signal is digitized at the receiver output and either stored on tape or disk, or, more recently, directly sent to the correlator via fast network connections ("eVLBI").
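The effect of mixing with a local oscillator can be illustrated with a small simulation (the frequencies below are arbitrary, far lower than in a real receiver, and the filter is deliberately crude): multiplying the received signal by the LO tone produces sum and difference frequencies, and low-pass filtering keeps only the downconverted band.

```python
import numpy as np

fs = 1.0e6                            # sampling rate of the simulation (Hz)
t = np.arange(0, 0.01, 1.0 / fs)      # 10 ms of data
f_sky = 300e3                         # stand-in for the observed frequency
f_lo = 290e3                          # local oscillator frequency
sky = np.cos(2 * np.pi * f_sky * t)
lo = np.cos(2 * np.pi * f_lo * t)

# Mixing produces the sum and difference frequencies (590 kHz and 10 kHz).
mixed = sky * lo

# Crude low-pass filter in the Fourier domain: keep everything below 50 kHz,
# i.e. only the downconverted (difference-frequency) component survives.
spec = np.fft.rfft(mixed)
freqs = np.fft.rfftfreq(len(mixed), 1.0 / fs)
spec[freqs > 50e3] = 0.0
baseband = np.fft.irfft(spec, n=len(mixed))

strongest = freqs[np.argmax(np.abs(np.fft.rfft(baseband)))]
print(f"strongest component after mixing and filtering: {strongest / 1e3:.1f} kHz")
```

If the LO frequencies at two stations differed or drifted, the downconverted signals would acquire a time-variable phase offset, which is why each VLBI station needs a maser-locked frequency standard.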
The correlator is sometimes referred to as the "lens" of VLBI observations, because it produces the visibility measurements from the electric fields sampled at the antennas. The data streams are aligned, appropriate delays and phases introduced and then two operations need to be performed on segments of the data: the cross-multiplication of each pair of stations and a Fourier transform, to go from the temporal domain into the spectral domain. Note that Eq.3 is strictly valid only at a particular frequency. Averaging in frequency is a source of error, and so the observing band is divided into frequency channels to reduce averaging, and the VLBI measurand is a cross-power spectrum.
The cross-correlation and Fourier transform can be interchanged, and correlator designs exist which carry out the cross-correlation first and then the Fourier transform (the "lag-based", or "XF", design such as the MPIfR's Mark IV correlator and the ATCA correlator), and also vice versa (the "FX" correlator such as the VLBA correlator). The advantages and disadvantages of the two designs are mostly in technical details and computing cost, and of little interest to the observer once the instrument is built. However, the response to spectral lines is different. In a lag-based correlator the signal is Fourier transformed once after cross-correlation, and the resulting cross-power spectrum is the intrinsic spectrum convolved with the sinc function. In an FX correlator, the input streams of both interferometer elements are Fourier transformed, which includes a convolution with the sinc function, and the subsequent cross-correlation produces a spectrum which is convolved with the square of the sinc function. Hence the FX correlator has a finer spectral resolution and lower sidelobes (see Romney 1999).
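A toy version of the FX order of operations, applied to synthetic noise data, is sketched below (segment length, delay and noise levels are arbitrary choices): each data stream is Fourier transformed in short segments, one spectrum is multiplied by the complex conjugate of the other, and the products are averaged to form a cross-power spectrum.

```python
import numpy as np

rng = np.random.default_rng(42)
nsamp, nchan = 2 ** 16, 256
delay = 8                                        # residual delay in samples

# Common "sky" noise plus independent receiver noise at each station; the
# second stream lags the first by 'delay' samples.
sky = rng.normal(size=nsamp + delay)
x = sky[delay:] + 0.5 * rng.normal(size=nsamp)   # station 1
y = sky[:-delay] + 0.5 * rng.normal(size=nsamp)  # station 2 (delayed)

# FX order of operations: Fourier transform short segments first ("F"),
# then cross-multiply and accumulate ("X").
X = np.fft.rfft(x.reshape(-1, 2 * nchan), axis=1)
Y = np.fft.rfft(y.reshape(-1, 2 * nchan), axis=1)
cross = (X * np.conj(Y)).mean(axis=0)            # averaged cross-power spectrum

# A residual delay appears as a linear phase slope across the band.
slope = np.polyfit(np.arange(cross.size), np.unwrap(np.angle(cross)), 1)[0]
print(f"measured phase slope: {slope:.3f} rad/channel "
      f"(expected about {2 * np.pi * delay / (2 * nchan):.3f})")
```

The residual delay left in the data shows up as a linear phase slope across the cross-power spectrum, which is how correlators and fringe-fitting software measure delays.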
The vast amount of data which needs to be processed in interferometry observations has always been processed on purpose-built computers (except for the very first observations, where bandwidths of the order of several hundred kHz were processed on general-purpose computers). Only recently has the power of off-the-shelf PCs reached a level which makes it feasible to carry out the correlation in software. Deller et al. (2007) describe a software correlator which can efficiently run on a cluster of PC-architecture computers. Correlation is an "embarrassingly parallel" problem, which can be split up in time, frequency, and by baseline, and hence is ideally suited to run in a cluster environment.
The result of the correlation stage is a set of visibility measurements. These are stored in various formats along with auxiliary information about the array such as receiver temperatures, weather information, pointing accuracy, and so forth. These data are sent to the observer who has to carry out a number of calibration steps before the desired information can be extracted.
2.2. Sources of error in VLBI observations
VLBI observations generally are subject to the same problems as observations with connected-element interferometers, but the fact that the individual elements are separated by hundreds and thousands of kilometres adds a few complications.
The largest source of error in typical VLBI observations are phase errors introduced by the earth's atmosphere. Variations in the atmosphere's electric properties cause varying delays of the radio waves as they travel through it. The consequence of phase errors is that the measured flux of individual visibilities will be scattered away from the correct locations in the image, reducing the SNR of the observations or, in fact, prohibiting a detection at all. Phase errors arise from tiny variations in the electric path lengths from the source to the antennas. The bulk of the ionospheric and tropospheric delays is compensated in correlation using atmospheric models, but the atmosphere varies on very short timescales so that there are continuous fluctuations in the measured visibilities.
At frequencies below 5 GHz changes in the total electron content (TEC) of the ionosphere along the line of sight are the dominant source of phase errors. The ionosphere is the uppermost part of the earth's atmosphere which is ionised by the sun, and hence undergoes diurnal and seasonal changes. At low GHz frequencies the ionosphere's plasma frequency is sufficiently close to the observing frequency to have a noticeable effect. Unlike tropospheric and most other errors, which have a linear dependence on frequency, the impact of the ionosphere is proportional to the inverse of the frequency squared, and so fades away rather quickly as one goes to higher frequencies. Whilst the TEC is regularly monitored and the measurements can be incorporated into the VLBI data calibration, the residual errors are still considerable.
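To get a feeling for the magnitude of the effect, one can use the commonly quoted approximation for the ionospheric excess path length, ΔL ≈ 40.3 TEC / ν² (ΔL in metres, TEC in electrons per square metre, ν in Hz). The sketch below (the assumed TEC fluctuation of 1 TEC unit is only an example value) converts this into phase errors at several observing frequencies:

```python
import numpy as np

c = 299792458.0          # speed of light (m/s)
d_tec = 1.0e16           # assumed TEC fluctuation: 1 TEC unit = 1e16 el/m^2

for freq in (1.7e9, 5.0e9, 8.4e9, 22.0e9):
    excess_path = 40.3 * d_tec / freq ** 2          # extra path length (m)
    phase_error = 2.0 * np.pi * excess_path * freq / c
    print(f"{freq / 1e9:5.1f} GHz: path error {excess_path * 100:6.2f} cm, "
          f"phase error {np.degrees(phase_error):6.1f} deg")
```

The phase error scales as 1/ν: the same TEC fluctuation that corresponds to a large fraction of a phase turn at 1.7 GHz produces only a few tens of degrees at 22 GHz.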
At higher frequencies changes in the tropospheric water vapour content have the largest impact on radio interferometry observations. Water vapour does not mix well with air and thus the integrated amount of water vapour along the line of sight varies considerably as the wind blows over the antenna. Measuring the amount of water vapour along the line of sight is possible and has been implemented at a few observatories (Effelsberg, CARMA, Plateau de Bure), however it is difficult and not yet regularly used in the majority of VLBI observations.
Compact arrays generally suffer less from atmospheric effects because most of the weather is common to all antennas. The closer two antennas are together, the more similar the atmosphere is along the lines of sight, and the delay difference between the antennas decreases.
Other sources of error in VLBI observations are mainly uncertainties in the geometry of the array and instrumental errors. The properties of the array must be accurately known in correlation to introduce the correct delays. As one tries to measure the phase of an electromagnetic wave with a wavelength of a few centimetres, the array geometry must be known to a fraction of that. And because the earth is by no means a solid body, many effects have to be taken into account, from large effects like precession and nutation to smaller effects such as tectonic plate motion, post-glacial rebound and gravitational delay. For an interesting and still controversial astrophysical application of the latter, see Fomalont and Kopeikin (2003). For a long list of these effects including their magnitudes and timescales of variability, see Walker (1999).
2.3. The problem of phase calibration: self-calibration
Due to the aforementioned errors, the visibilities delivered by the correlator can never be used directly to make an image of astronomical sources. The visibility phases need to be calibrated in order to recover the information about the source's location and structure. However, how does one separate the unknown source properties from the unknown errors introduced by the instrument and atmosphere? The method commonly used to do this is called self-calibration and works as follows.
In simple words, in self-calibration one uses a model of the source (if no model is available, a point source is used) and tries to find phase corrections for the antennas which make the visibilities comply with that model. This will not work perfectly unless the model is a perfect representation of the source, and there will be residual, albeit smaller, phase errors. However, the corrected visibilities allow one to make an improved source model, and one can then find phase corrections to make the visibilities comply with that improved model, which in turn is used to make an even better source model. This process is continued until convergence is reached.
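The core of a single such step can be illustrated with a minimal numerical sketch; here the model is a point source, so the model phases are zero, and the antenna phases are solved by least squares (the antenna count, noise level and phase errors are arbitrary assumptions, and real packages of course work on full complex visibilities):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant = 8                                        # assumed number of antennas
true_phases = rng.uniform(-1.0, 1.0, n_ant)      # unknown antenna phase errors (rad)
true_phases[0] = 0.0                             # antenna 0 is the reference

# For a point-source model the model phase is zero on every baseline, so the
# observed visibility phase is just the difference of the antenna phases.
i_idx, j_idx = np.triu_indices(n_ant, k=1)       # N(N-1)/2 baselines
vis_phase = (true_phases[i_idx] - true_phases[j_idx]
             + 0.05 * rng.normal(size=i_idx.size))   # plus thermal noise

# Solve the overdetermined linear system for the N-1 free antenna phases.
A = np.zeros((i_idx.size, n_ant))
A[np.arange(i_idx.size), i_idx] = 1.0
A[np.arange(i_idx.size), j_idx] = -1.0
solved, *_ = np.linalg.lstsq(A[:, 1:], vis_phase, rcond=None)
corrections = np.concatenate(([0.0], solved))

print(np.allclose(corrections, true_phases, atol=0.1))   # True: phases recovered
```

Applying these corrections to the visibilities and re-imaging would then yield the improved model used in the next iteration described above.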
The assumption behind self-calibration is that the errors in the visibility phases, which are baseline-based quantities, can be described as the result of antenna-based errors. Most of the errors described in Sec. 2.2 are antenna-based: e.g., delays introduced by the atmosphere, uncertainties in the location of the antennas, and drifts in the electronics are all antenna-based. The phase error of a visibility is the combination of the antenna-based phase errors 5. Since the number of unknown station phase errors is less than the number of visibilities, there is additional phase information which can be used to determine the source structure.
However, self-calibration contains some traps. The most important one is the making of the source model, which is usually accomplished by making a deconvolved image with the CLEAN algorithm. If a source has been observed during an earlier epoch and the structural changes are expected to be small, one can use an existing model as a starting point. If in making that model one includes components which are not real (e.g., by cleaning regions of the image which in fact do not contain emission), then they will be included in the next iteration of self-calibration and will re-appear in the next image. Although it is not easy to generate strong fake sources or source parts, weaker source structures are easily affected. The authors have witnessed a radio astronomy PhD student producing a map of a colleague's name from a data set of pure noise, although the SNR was of the order of only 3.
It is also important to note that for self-calibration the SNR of the visibilities needs to be of the order of 5 or higher within the time it takes the fastest error component to change by a few tens of degrees (the "atmospheric coherence time") (Cotton 1995). Thus the integration time for self-calibration usually is limited by fluctuations in the tropospheric water vapour content. At 5 GHz, one may be able to integrate for 2 min without the atmosphere changing too much, but this can drop to as little as 30 s at 43 GHz. Because radio antennas used for VLBI are less sensitive at higher frequencies, observations at tens of GHz require brighter and brighter sources to calibrate the visibility phases. Weak sources, well below the detection threshold within the atmospheric coherence time, can only be observed using phase referencing (see Sec. 2.3.1).
Another boundary condition for successful self-calibration is that, for a given number of array elements, the source must not be too complex. The more antennas, the better, because the ratio of the number of constraints to the number of antenna gains to be determined grows as N/2 (the number of constraints is the number of visibilities, N(N - 1)/2; the number of gains is the number of stations, N, minus one, because the phase of one station is a free parameter and is set to zero). Thus self-calibration works very well at the VLA, even with complex sources, whereas for an east-west interferometer with few elements such as the ATCA, self-calibration is rather limited. In VLBI observations, however, the sources are typically simple enough to make self-calibration work even with a modest number (N > 5) of antennas.
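This counting argument is easy to make explicit; the antenna numbers below are arbitrary examples:

```python
# Visibility phases (constraints) versus free antenna phases (unknowns)
# in a single integration, for a few example array sizes.
for n in (5, 10, 27):
    constraints = n * (n - 1) // 2       # number of baselines
    unknowns = n - 1                     # one antenna phase serves as reference
    print(f"N = {n:2d}: {constraints:3d} visibilities, "
          f"{unknowns:2d} free phases, ratio = {constraints / unknowns:.1f}")
```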
2.3.1. Phase referencing
It is possible to obtain phase-calibrated visibilities without self-calibration by measuring the phase of a nearby, known calibrator. The assumption is that all errors for the calibrator and the target are sufficiently similar to allow calibration of the target with phase corrections derived from the calibrator. While this assumption is justified for the more slowly varying errors such as clock errors and array geometry errors (provided target and calibrator are close), it is only valid under certain circumstances when atmospheric errors are considered. The critical ingredients in phase referencing observations are the target-calibrator separation and the atmospheric coherence time. The separation within which the phase errors for the target and calibrator are considered to be the same is called the isoplanatic patch, and is of the order of a few degrees at 5 GHz. The switching time must be shorter than the atmospheric coherence time to prevent undersampling of the atmospheric phase variations. At high GHz frequencies this can result in observing strategies where one spends half the observing time on the calibrator.
Phase-referencing not only allows one to observe sources too weak for self-calibration, but it also yields precise astrometry for the target relative to the calibrator. A treatment of the attainable accuracy can be found in Pradel et al. (2006).
2.4. Polarization VLBI
The polarization of radio emission can yield insights into the strength and orientation of magnetic fields in astrophysical objects and the associated foregrounds. As a consequence, and because the calibration has become easier and more streamlined, measuring polarization has become increasingly popular in the past 10 years.
Most radio antennas can record two orthogonal polarizations, conventionally in the form of dual circular polarization. In correlation, one can correlate the right-hand circular polarization signal (RCP) of one antenna with the left-hand circular polarization (LCP) of another and vice versa, to obtain the cross-polarization products RL and LR. The parallel-hand circular polarization cross-products are abbreviated as RR and LL. The four correlation products are converted into the four Stokes parameters in the following way:
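Assuming the standard sign conventions for circularly polarized feeds, the usual relations are:

$$
I = \tfrac{1}{2}(RR + LL), \qquad
Q = \tfrac{1}{2}(RL + LR), \qquad
U = \tfrac{1}{2i}(RL - LR), \qquad
V = \tfrac{1}{2}(RR - LL)
$$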
From the Stokes images one can compute images of polarized intensity and polarization angle.
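With the same conventions, the linearly polarized intensity and the electric vector position angle follow from the Stokes parameters as:

$$
P = \sqrt{Q^2 + U^2}, \qquad
\chi = \tfrac{1}{2}\arctan\frac{U}{Q}
$$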
Most of the calibration in polarization VLBI observations is identical to that of conventional observations, where one either records only data in one circular polarization or does not form the cross-polarization data at the correlation stage. However, two effects need to be taken care of: the relative phase relation between RCP and LCP, and the leakage of emission from RCP and LCP into the cross-products.
The relative phase orientation of RCP and LCP needs to be calibrated to obtain the absolute value for the electric vector position angle (EVPA) of the polarized emission in the source. This is usually accomplished by observing a calibrator which is known to have a stable EVPA with a low-resolution instrument such as a single dish telescope or a compact array.
Calibration of the leakage is more challenging. Each radio telescope has polarization impurities arising from structural asymmetries and errors in manufacturing, resulting in "leakage" of emission from one polarization to the other. The amount of leakage typically is of the order of a few percent and thus is of the same order as the typical degree of polarization in the observed sources; it therefore needs to be carefully calibrated. The leakage is a function of frequency but can be regarded as stable over the course of a VLBI observation.
Unfortunately, sources which are detectable with VLBI are extremely small and hence mostly variable. It is therefore not possible to calibrate the leakage by simply observing a polarization calibrator, and the leakage needs to be calibrated by every observer. At present the calibration scheme exploits the fact that the polarized emission arising from leakage does not change its position angle in the course of the observations. The EVPA of the polarized emission coming from the source, however, will change with respect to the antenna and its feed horns, because most antennas have alt-azimuth mounts and so the source seems to rotate on the sky as the observation progresses 6. One can think of this situation as the sum of two vectors, where the instrumental polarization is a fixed vector and the astronomical polarization is added to this vector and rotates during the observation. Leakage calibration is about separating these two contributions, by observing a strong source at a wide range of position angles. The method is described in Leppänen et al. (1995), and a more detailed treatment of polarization VLBI is given by Kemball (1999).
2.5. Spectral line VLBI
In general a set of visibility measurements consists of cross-power spectra. If a continuum source has been targeted, the number of spectral points is commonly of the order of a few tens. If a spectral line has been observed, the number of channels can be as high as a few thousand, and is limited by the capabilities of the correlator. The high brightness temperatures (Section 3.1.3) needed to yield a VLBI detection restrict observations to masers, or relatively large absorbers in front of non-thermal continuum sources. The setup of frequencies requires the same care as for short baseline interferometry, but an additional complication is that the antennas have significant differences in their Doppler shifts towards the source. See Westpfahl (1999), Rupen (1999), and Reid et al. (1999) for a detailed treatment of spectral-line interferometry and VLBI.
2.6. Pulsar gating
If pulsars are to be observed with a radio interferometer it is desirable to correlate only those times where a pulse is arriving (or where it is absent, Stappers et al. 1999). This is called pulsar gating and is an observing mode available at most interferometers.
2.7. Wide-field limitations
The equations in Sec. 2.1 are strictly correct only for a single frequency and a single point in time, but radio telescopes must observe a finite bandwidth, and in correlation a finite integration time must be used, to be able to detect objects. Hence a point in the (u, v) plane always represents the result of averaging across a bandwidth, Δν, and over a time interval, Δt (the points in Fig. 2 actually represent a continuous frequency band in the radial direction and a continuous observation in the time direction).
The errors arising from averaging across the bandwidth are referred to as bandwidth smearing, because the effect is similar to chromatic aberration in optical systems, where the light from one single point of an object is distributed radially in the image plane. In radio interferometry, the images of sources away from the observing centre are smeared out in the radial direction, reducing the signal-to-noise ratio. The effect of bandwidth smearing increases with the fractional bandwidth, Δν / ν, with the distance to the observing centre, (l² + m²)^(1/2), and with 1 / θ_b, where θ_b is the FWHM of the synthesized beam. Interestingly, however, the dependencies on the fractional bandwidth and on θ_b cancel one another, and so for any given array and bandwidth, bandwidth smearing is independent of the observing frequency (see Thompson et al. 2001). The effect can be avoided if the observing band is subdivided into a sufficiently large number of frequency channels, for each of which one calculates the locations in the (u, v) plane separately. This technique is sometimes deliberately chosen to increase the (u, v) coverage, predominantly at low frequencies where the fractional bandwidth is large. It is then called multi-frequency synthesis.
By analogy, the errors arising from time averaging are called time smearing, and they smear out the images approximately tangentially to the (u, v) ellipse. Time smearing occurs because each point in the (u, v) plane represents a measurement during which the baseline vector rotated through ω_E Δt, where ω_E is the angular velocity of the earth. It also increases as a function of (l² + m²)^(1/2) and can be avoided if Δt is chosen small enough for the desired field of view.
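A common rule of thumb (an order-of-magnitude estimate, not an exact derivation) is to keep the product of the fractional channel bandwidth, or of ω_E Δt, and the number of synthesized beams out to the edge of the field well below unity; the sketch below returns the channel width and integration time at which that product reaches one for assumed example values, so in practice one would stay a factor of a few below them:

```python
OMEGA_E = 7.29e-5   # angular velocity of the earth in rad/s

def averaging_limits(freq_hz, beam_mas, field_arcsec):
    """Channel width (Hz) and integration time (s) at which smearing sets in."""
    n_beams = (field_arcsec * 1e3) / beam_mas    # field of view in beam widths
    max_channel = freq_hz / n_beams              # bandwidth smearing limit
    max_dt = 1.0 / (OMEGA_E * n_beams)           # time smearing limit
    return max_channel, max_dt

# Assumed example: 5 GHz observation, 2 mas beam, 30 arcsec field of view
channel, dt = averaging_limits(5e9, 2.0, 30.0)
print(f"channel width < {channel / 1e6:.2f} MHz, integration time < {dt:.2f} s")
```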
VLBI observers generally are concerned with fields of view (FOV) of no more than about one arcsecond, and consequently most VLBI observers are not particularly bothered by wide-field effects. However, wide-field VLBI has gained momentum in the last few years as the computing power to process finely time- and bandwidth-sampled data sets has become widely available. Recent examples of observations with fields of view of 1' or more are reported in McDonald et al. (2001), Garrett et al. (2001), Garrett et al. (2005), Lenc and Tingay (2006) and Lenc et al. (2006). The effects of primary beam attenuation, bandwidth smearing and time smearing on the SNR of the observations can be estimated using the calculator at http://astronomy.swin.edu.au/~elenc/Calculators/wfcalc.php.
2.8. VLBI at mm wavelengths
In the quest for angular resolution, VLBI helps to optimize one part of the equation which approximates the finest detail an instrument is capable of resolving, θ ≈ λ / D. In VLBI, D approaches the diameter of the earth, and larger instruments are barely possible, although "space-VLBI" can extend an array beyond the earth. However, it is straightforward to attempt to decrease λ to push the resolution further.
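For orientation, the sketch below simply evaluates θ ≈ λ / D for a baseline equal to the earth's diameter at a few example frequencies (the frequency list is arbitrary):

```python
import numpy as np

C = 299792458.0        # speed of light in m/s
D_EARTH = 1.2742e7     # diameter of the earth in m

for freq in (1.6e9, 5e9, 22e9, 86e9):
    wavelength = C / freq
    theta_rad = wavelength / D_EARTH
    theta_mas = np.degrees(theta_rad) * 3.6e6     # radians -> milliarcseconds
    print(f"{freq / 1e9:5.1f} GHz: {theta_mas:6.3f} mas")
```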
However, VLBI observations at frequencies above 20 GHz (λ = 15 mm) become progressively harder towards higher frequencies. Many effects contribute to the difficulties at mm wavelengths: the atmospheric coherence time is shorter than one minute, telescopes are less efficient, receivers are less sensitive, sources are weaker than at cm wavelengths, and tropospheric water vapour absorbs the radio emission. All of these effects limit mm VLBI observations to comparatively few bright continuum sources or sources hosting strong masers. Hence the number of possible phase calibrators for phase referencing also drops. Nevertheless, VLBI observations at 22 GHz (λ = 13 mm), 43 GHz (λ = 7 mm) and 86 GHz (λ = 3 mm) are routinely carried out with the world's VLBI arrays. For example, of all projects observed in 2005 and 2006 with the VLBA, 16% were made at 22 GHz, 23% at 43 GHz, and 4% at 86 GHz 7.
Although observations at higher frequencies are experimental, a convincing demonstration of the feasibility of VLBI at wavelengths shorter than 3 mm was made at 2 mm (147 GHz) in 2001 and 2002 (Krichbaum et al. 2002a, Greve et al. 2002). These first 2 mm VLBI experiments resulted in detections of about one dozen quasars on the short continental and long transatlantic baselines (Krichbaum et al. 2002a). In an experiment in April 2003 at 1.3 mm (230 GHz), a number of sources were detected on the 1150 km long baseline between Pico Veleta and Plateau de Bure (Krichbaum et al. 2004). On the 6.4 Gλ long transatlantic baseline between Europe and Arizona, USA, fringes for the quasar 3C 454.3 were clearly seen. This detection marks a new record in angular resolution in astronomy (size < 30 µas). It indicates the existence of ultra-compact emission regions in AGN even at the highest frequencies (for 3C 454.3 at z = 0.859, the rest-frame frequency is 428 GHz). So far, no evidence for a reduced brightness temperature of the VLBI cores at mm wavelengths has been found (Krichbaum et al. 2004). These are the astronomical observations with the highest angular resolution possible today at any wavelength.
2.9. The future of VLBI: eVLBI, VLBI in space, and the SKA
One of the key drawbacks of VLBI observations has always been that the raw antenna signals are recorded and the visibilities formed only later, when the signals are combined in the correlator. Thus there was no immediate feedback for the observer, who had to wait several weeks or months until the data were received for investigation. With the advent of fast computer networks this has changed in the last few years. First, small pieces of raw data were sent from the antennas to the correlator to check the integrity of the VLBI array; then data were streamed directly onto disks at the correlator; then visibilities were produced in real time from the data sent from the antennas to the correlator. A brief description of the transition from tape-based recording to real-time correlation of the European VLBI Network (EVN) is given in Szomoru et al. (2004). The EVN now regularly performs so-called "eVLBI" observing runs, and this is likely to become the standard mode of operation in the near future. The Australian Long Baseline Array (LBA) completed a first eVLBI-only observing run in March 2007 8.
It has been indicated in Sec. 2.8 that resolution can be increased not only by observing at higher frequencies with ground-based arrays but also by using a radio antenna in earth orbit. This has indeed been accomplished with the Japanese satellite "HALCA" (Highly Advanced Laboratory for Communications and Astronomy, Hirabayashi et al. 2000), which was used for VLBI observations at 1.6 GHz, 5 GHz and 22 GHz (albeit with very low sensitivity) between 1997 and 2003. The satellite provided the collecting area of an 8 m radio telescope and the sampled signals were directly transmitted to ground-based tracking stations. The satellite's elliptical orbit provided baselines between a few hundred km and more than 20000 km, yielding a resolution of up to 0.3 mas (Dodson et al. 2006, Edwards and Piner 2002). Amazingly, although HALCA provided only left-hand circular polarization, it was used successfully to observe polarized emission (e.g., Kemball et al. 2000, Bach et al. 2006a). But this was only possible because the ground array observed dual circular polarization. Many of the scientific results from VSOP are summarized in two special issues of Publications of the Astronomical Society of Japan (PASJ, Vol. 52, No. 6, 2000 and Vol. 58, No. 2, 2006). The successor to HALCA, ASTRO-G, is under development and due for launch in 2012. It will have a reflector with a diameter of 9 m and receivers for observations at 8 GHz, 22 GHz, and 43 GHz.
The Russian mission RadioAstron is a similar project to launch a 10 m radio telescope into a high apogee orbit. It will carry receivers for frequencies between 327 MHz and 25 GHz, and is due for launch in October 2008.
The design goals for the Square Kilometre Array (SKA), a large, next-generation radio telescope built by an international consortium, include interferometer baselines of at least 3000 km. At the same time, the design envisions the highest observing frequency to be 25 GHz, and so one would expect a maximum resolution of around 1 mas. However, most of the baselines will be shorter than 3000 km, and so a weighted average of all visibilities will yield a resolution of a few mas, and of tens of mas at low GHz frequencies. The SKA's resolution will therefore be comparable to current VLBI arrays. Its sensitivity, however, will be orders of magnitude higher (sub-µJy in 1 h). The most compelling consequence of this is that the SKA will allow one to observe thermal sources with brightness temperatures of the order of a few hundred Kelvin with a resolution of a few mas. Current VLBI observations are limited to sources with brightness temperatures of the order of 10^6 K and so to non-thermal radio sources and coherent emission from masers. With the SKA one can observe star and black hole formation throughout the universe, stars, water masers at significant redshifts, and much more. Whether or not the SKA can be called a VLBI array in the original sense (an array of isolated antennas the visibilities of which are produced later on) is a matter of taste: correlation will be done in real time and the local oscillator signals will be distributed from the same source. Still, the baselines will be "very long" when compared to 20th century connected-element interferometers. A short treatment of "VLBI with the SKA" can be found in Carilli (2005). Comprehensive information about the current state of the SKA is available on the project's web page (see Sec. 2.10); prospects of the scientific outcomes of the SKA are summarized in Carilli and Rawlings (2004); and engineering aspects are treated in Hall (2005).
2.10. VLBI arrays around the world and their capabilities
This section gives an overview of presently active VLBI arrays which are available to all astronomers and which are predominantly used for astronomical observations. Antennas of these arrays are frequently used in other arrays' observations, either to add more long baselines or to increase the sensitivity of the observations. Joint observations including the VLBA and EVN antennas are quite common; also, observations with the VLBA plus two or more of the phased VLA, the Green Bank, Effelsberg and Arecibo telescopes (then known as the High Sensitivity Array) have recently been made easier through a common application process. Note that each of these four telescopes has more collecting area than the VLBA alone, and hence the sensitivity improvement is considerable.
Square Kilometre Array: http://www.skatelescope.org
High Sensitivity Array: http://www.nrao.edu/HSA
European VLBI Network: http://www.evlbi.org
Very Long Baseline Array: http://www.vlba.nrao.edu
Long Baseline Array: http://www.atnf.csiro.au/vlbi
2.10.1 The European VLBI Network (EVN)
The EVN is a collaboration of 14 institutes in Europe, Asia, and South Africa and was founded in 1980. The participating telescopes are used in many independent radio astronomical observations, but are scheduled three times per year for several weeks together as a VLBI array. The EVN provides frequencies in the range of 300 MHz to 43 GHz, though due to its inhomogeneity not all frequencies can be observed at all antennas. The advantage of the EVN is that it includes several relatively large telescopes, such as the 76 m Lovell telescope, the Westerbork array, and the Effelsberg 100 m telescope, which provide high sensitivity. Its main disadvantage is relatively poor frequency agility during the observations, because not all telescopes can change their receivers at the flick of a switch. EVN observations are mostly correlated on the correlator at the Joint Institute for VLBI in Europe (JIVE) in Dwingeloo, the Netherlands, but sometimes are processed at the Max-Planck-Institute for Radio Astronomy in Bonn, Germany, or the National Radio Astronomy Observatory in Socorro, USA.
2.10.2. The U.S. Very Long Baseline Array (VLBA)
The VLBA is a purpose-built VLBI array across the continental USA and islands in the Caribbean and Hawaii. It consists of 10 identical antennas with a diameter of 25 m, which are remotely operated from Socorro, New Mexico. The VLBA was constructed in the early 1990s and began full operations in 1993. It provides frequencies between 300 MHz and 86 GHz at all stations (except two which are not worth equipping with 86 GHz receivers due to their humid locations). Its advantages are excellent frequency agility and its homogeneity, which makes it very easy to use. Its disadvantages are its comparatively small antennas, although the VLBA is frequently used in conjunction with the phased VLA and the Effelsberg and Green Bank telescopes.
2.10.3. The Australian Long Baseline Array (LBA)
The LBA consists of six antennas in Ceduna, Hobart, Parkes, Mopra, Narrabri, and Tidbinbilla. Like the EVN, it has been formed from existing antennas and so the array is inhomogeneous. Its frequency range is 1.4 GHz to 22 GHz, but not all antennas can observe at all available frequencies. Stretched out along Australia's east coast, the LBA extends in a north-south direction, which limits the (u, v) coverage. Nevertheless, the LBA is the only VLBI array which can observe the entire southern sky, and the recent technical developments are remarkable: the LBA is at the forefront of eVLBI development and at present is the only VLBI array correlating all of its observations using the software correlator developed by Deller et al. (2007) on a computer cluster of the Swinburne Centre for Astrophysics and Supercomputing.
2.10.4. The Korean VLBI Network (KVN)
The KVN is a dedicated VLBI network consisting of three antennas which is currently under construction in Korea. It will be able to observe at up to four widely separated frequencies (22 GHz, 43 GHz, 86 GHz, and 129 GHz), but will also be able to observe at 2.3 GHz and 8.4 GHz. The KVN has been designed to observe H2O and SiO masers in particular and can observe these transitions simultaneously. Furthermore, the antennas can slew quickly for improved performance in phase-referencing observations.
2.10.5. The Japanese VERA network
VERA (VLBI Exploration of Radio Astrometry) is a purpose-built VLBI network of four antennas in Japan. The scientific goal is to measure the annual parallax towards galactic masers (H2O masers at 22 GHz and SiO masers at 43 GHz), to construct a map of the Milky Way. Nevertheless, VERA is open for access to carry out any other observations. VERA can observe two sources separated by up to 2.2° simultaneously, the one being an extragalactic reference source and the other the galactic target. This observing mode is a significant improvement over the technique of phase referencing, where the reference source and target are observed in turn. The positional accuracy is expected to reach 10 µas, and recent results seem to reach this goal (Hirota et al. 2007b). VERA's frequency range is 2.3 GHz to 43 GHz.
2.10.6. The Global mm-VLBI Array (GMVA)
The GMVA is an inhomogeneous array of 13 radio telescopes capable of observing at a frequency of 86 GHz. Observations with this network are scheduled twice a year for about a week. The array's objective is to provide the highest angular resolution on a regular schedule.
1 The fields of view in VLBI observations are typically so small that the dependence of A on (l, m) can be safely ignored. A can then be set to unity and disappears. Back.
2 This can be neatly observed with the VLBA's webcam images available at http://www.vlba.nrao.edu/sites/SITECAM/allsites.shtml. The images are updated every 5 min. Back.
3 http://iono.jpl.nasa.gov Back.
4 Note that clouds do not primarily consist of water vapour, but of water condensed into droplets. Back.
5 Baseline-based errors exist, too, but are far less important, see Cornwell and Fomalont (1999) for a list. Back.
6 The 26 m antenna at the Mount Pleasant Observatory near Hobart, Australia, has a parallactic mount and thus there is no field rotation. Back.
7 e.g., ftp://ftp.aoc.nrao.edu/pub/cumvlbaobs.txt Back.
8 http://www.atnf.csiro.au/vlbi/evlbi/ Back. | http://ned.ipac.caltech.edu/level5/March12/Middelberg/Middelberg2.html | 13 |
15 | The Renaissance, that is, the period that extends roughly from the middle of the fourteenth century to the beginning of the seventeenth century, was a time of intense, all-encompassing, and, in many ways, distinctive philosophical activity. A fundamental assumption of the Renaissance movement was that the remains of classical antiquity constituted an invaluable source of excellence to which debased and decadent modern times could turn in order to repair the damage brought about since the fall of the Roman Empire. It was often assumed that God had given a single unified truth to humanity and that the works of ancient philosophers had preserved part of this original deposit of divine wisdom. This idea not only laid the foundation for a scholarly culture that was centered on ancient texts and their interpretation, but also fostered an approach to textual interpretation that strove to harmonize and reconcile divergent philosophical accounts. Stimulated by newly available texts, one of the most important hallmarks of Renaissance philosophy is the increased interest in primary sources of Greek and Roman thought, which were previously unknown or little read. The renewed study of Neoplatonism, Stoicism, Epicureanism, and Skepticism eroded faith in the universal truth of Aristotelian philosophy and widened the philosophical horizon, providing a rich seedbed from which modern science and modern philosophy gradually emerged.
Table of Contents
- Hellenistic Philosophies
- New Philosophies of Nature
- References and Further Reading
Improved access to a great deal of previously unknown literature from ancient Greece and Rome was an important aspect of Renaissance philosophy. The renewed study of Aristotle, however, was not so much because of the rediscovery of unknown texts, but because of a renewed interest in texts long translated into Latin but little studied, such as the Poetics, and especially because of novel approaches to well-known texts. From the early fifteenth century onwards, humanists devoted considerable time and energy to making Aristotelian texts clearer and more precise. In order to rediscover the meaning of Aristotle’s thought, they updated the Scholastic translations of his works, read them in the original Greek, and analyzed them with philological techniques. The availability of these new interpretative tools had a great impact on the philosophical debate. Moreover, in the four decades after 1490, the Aristotelian interpretations of Alexander of Aphrodisias, Themistius, Ammonius, Philoponus, Simplicius, and other Greek commentators were added to the views of Arabic and medieval commentators, stimulating new solutions to Aristotelian problems and leading to a wide variety of interpretations of Aristotle in the Renaissance period.
The most powerful tradition, at least in Italy, was that which took Averroes’s works as the best key for determining the true mind of Aristotle. Averroes’s name was primarily associated with the doctrine of the unity of the intellect. Among the defenders of his theory that there is only one intellect for all human beings, we find Paul of Venice (d. 1429), who is regarded as the founding figure of Renaissance Averroism, and Alessandro Achillini (1463–1512), as well as the Jewish philosopher Elijah del Medigo (1458–1493). Two other Renaissance Aristotelians who expended much of their philosophical energies on explicating the texts of Averroes are Nicoletto Vernia (d. 1499) and Agostino Nifo (c. 1469–1538). They are noteworthy characters in the Renaissance controversy about the immortality of the soul mainly because of the remarkable shift that can be discerned in their thought. Initially they were defenders of Averroes’s theory of the unity of the intellect, but from loyal followers of Averroes as a guide to Aristotle, they became careful students of the Greek commentators, and in their late thought both Vernia and Nifo attacked Averroes as a misleading interpreter of Aristotle, believing that personal immortality could be philosophically demonstrated.
Many Renaissance Aristotelians read Aristotle for scientific or secular reasons, with no direct interest in religious or theological questions. Pietro Pomponazzi (1462–1525), one of the most important and influential Aristotelian philosophers of the Renaissance, developed his views entirely within the framework of natural philosophy. In De immortalitate animae (Treatise on the Immortality of the Soul, 1516), arguing from the Aristotelian text, Pomponazzi maintained that proof of the intellect’s ability to survive the death of the body must be found in an activity of the intellect that functions without any dependence on the body. In his view, no such activity can be found because the highest activity of the intellect, the attainment of universals in cognition, is always mediated by sense impression. Therefore, based solely on philosophical premises and Aristotelian principles, the conclusion is that the entire soul dies with the body. Pomponazzi’s treatise aroused violent opposition and led to a spate of books being written against him. In 1520, he completed De naturalium effectuum causis sive de incantationibus (On the Causes of Natural Effects or On Incantations), whose main target was the popular belief that miracles are produced by angels and demons. He excluded supernatural explanations from the domain of nature by establishing that it is possible to explain those extraordinary events commonly regarded as miracles in terms of a concatenation of natural causes. Another substantial work is De fato, de libero arbitrio et de praedestinatione (Five Books on Fate, Free Will and Predestination), which is regarded as one of the most important works on the problems of freedom and determinism in the Renaissance. Pomponazzi considers whether the human will can be free, and he considers the conflicting points of view of philosophical determinism and Christian theology.
Another philosopher who tried to keep Aristotle’s authority independent of theology and subject to rational criticism, is Jacopo Zabarella (1533–1589), who produced an extensive body of work on the nature of logic and scientific method. His goal was the retrieval of the genuine Aristotelian concepts of science and scientific method, which he understood as the indisputable demonstration of the nature and constitutive principles of natural beings. He developed the method of regressus, a combination of the deductive procedures of composition and the inductive procedures of resolution that came to be regarded as the proper method for obtaining knowledge in the theoretical sciences. Among his main works are the collected logical works Opera logica (1578), which are mainly devoted to the theory of demonstration, and his major work on natural philosophy, De rebus naturalibus (1590). Zabarella’s work was instrumental in a renewal of natural philosophy, methodology, and theory of knowledge.
There were also forms of Aristotelian philosophy with strong confessional ties, such as the branch of Scholasticism that developed on the Iberian Peninsula during the sixteenth century. This current of Hispanic Scholastic philosophy began with the Dominican School founded in Salamanca by Francisco de Vitoria (1492–1546) and continued with the philosophy of the newly founded Society of Jesus, among whose defining authorities were Pedro da Fonseca (1528–1599), Francisco de Toledo (1533–1596), and Francisco Suárez (1548–1617). Their most important writings were in the areas of metaphysics and philosophy of law. They played a key role in the elaboration of the law of nations (jus gentium) and the theory of just war, a debate that began with Vitoria’s Relectio de iure belli (A Re-lecture of the Right of War, 1539) and continued with the writings of Domingo de Soto (1494–1560), Suárez, and many others. In the field of metaphysics, the most important work is Suárez’ Disputationes metaphysicae (Metaphysical Disputations, 1597), a systematic presentation of philosophy—against the background of Christian principles—that set the standard for philosophical and theological teaching for almost two centuries.
The humanist movement did not eliminate older approaches to philosophy, but contributed to change them in important ways, providing new information and new methods to the field. Humanists called for a radical change of philosophy and uncovered older texts that multiplied and hardened current philosophical discord. Some of the most salient features of humanist reform are the accurate study of texts in the original languages, the preference for ancient authors and commentators over medieval ones, and the avoidance of technical language in the interest of moral suasion and accessibility. Humanists stressed moral philosophy as the branch of philosophical studies that best met their needs. They addressed a general audience in an accessible manner and aimed to bring about an increase in public and private virtue. Regarding philosophy as a discipline allied to history, rhetoric, and philology, they expressed little interest in metaphysical or epistemological questions. Logic was subordinated to rhetoric and reshaped to serve the purposes of persuasion.
One of the seminal figures of the humanist movement was Francesco Petrarca (1304–1374). In De sui ipsius et multorum aliorum ignorantia (On His Own Ignorance and That of Many Others), he elaborated what was to become the standard critique of Scholastic philosophy. One of his main objections to Scholastic Aristotelianism is that it is useless and ineffective in achieving the good life. Moreover, to cling to a single authority when all authorities are unreliable is simply foolish. He especially attacked, as opponents of Christianity, Aristotle’s commentator Averroes and contemporary Aristotelians that agreed with him. Petrarca returned to a conception of philosophy rooted in the classical tradition, and from his time onward, when professional humanists took interest in philosophy, they nearly always concerned themselves with ethical questions. Among those he influenced were Coluccio Salutati (1331–1406), Leonardo Bruni (c.1370–1444) and Poggio Bracciolini (1380–1459), all of whom promoted humanistic learning in distinctive ways.
One of the most original and important humanists of the Quattrocento was Lorenzo Valla (1406–1457). His most influential writing was Elegantiae linguae Latinae (Elegances of the Latin Language), a handbook of Latin language and style. He is also famous for having demonstrated, on the basis of linguistic and historical evidence, that the so-called Donation of Constantine, on which the secular rule of the papacy was based, was an early medieval forgery. His main philosophical work is Repastinatio dialecticae et philosophiae (Reploughing of Dialectic and Philosophy), an attack on major tenets of Aristotelian philosophy. The first book deals with the criticism of fundamental notions of metaphysics, ethics, and natural philosophy, while the remaining two books are devoted to dialectics.
Throughout the fifteenth and early sixteenth century, humanists were unanimous in their condemnation of university education and their contempt for Scholastic logic. Humanists such as Valla and Rudolph Agricola (1443–1485), whose main work is De inventione dialectica (On Dialectical Invention, 1479), set about to replace the Scholastic curriculum, based on syllogism and disputation, with a treatment of logic oriented toward the use of persuasion and topics, a technique of verbal association aiming at the invention and organization of material for arguments. According to Valla and Agricola, language is primarily a vehicle for communication and debate, and consequently arguments should be evaluated in terms of how effective and useful they are rather than in terms of formal validity. Accordingly, they subsumed the study of the Aristotelian theory of inference under a broader range of forms of argumentation. This approach was taken up and developed in various directions by later humanists, such as Mario Nizolio (1488–1567), Juan Luis Vives (1493–1540), and Petrus Ramus (1515–1572).
Vives was a Spanish-born humanist who spent the greater part of his life in the Low Countries. He aspired to replace the Scholastic tradition in all fields of learning with a humanist curriculum inspired by education in the classics. In 1519, he published In Pseudodialecticos (Against the Pseudodialecticians), a satirical diatribe against Scholastic logic in which he voices his opposition on several counts. A detailed criticism can be found in De disciplinis (On the Disciplines, 1531), an encyclopedic work divided into three parts: De causis corruptarum artium (On the Causes of the Corruption of the Arts), a collection of seven books devoted to a thorough critique of the foundations of contemporary education; De tradendis disciplinis (On Handing Down the Disciplines), five books where Vives’s educational reform is outlined; and De artibus (On the Arts), five shorter treatises that deal mainly with logic and metaphysics. Another area in which Vives enjoyed considerable success was psychology. His reflections on the human soul are mainly concentrated in De anima et vita (On the Soul and Life, 1538), a study of the soul and its interaction with the body, which also contains a penetrating analysis of the emotions.
Ramus was another humanist who criticized the shortcomings of contemporary teaching and advocated a humanist reform of the arts curriculum. His textbooks were the best sellers of their day and were very influential in Protestant universities in the later sixteenth century. In 1543, he published Dialecticae partitiones (The Structure of Dialectic), which in its second edition was called Dialecticae institutiones (Training in Dialectic), and Aristotelicae animadversions (Remarks on Aristotle). These works gained him a reputation as a virulent opponent of Aristotelian philosophy. He considered his own dialectics, consisting of invention and judgment, to be applicable to all areas of knowledge, and he emphasised the need for learning to be comprehensible and useful, with a particular stress on the practical aspects of mathematics. His own reformed system of logic reached its definitive form with the publication of the third edition of Dialectique (1555).
Humanism also supported Christian reform. The most important Christian humanist was the Dutch scholar Desiderius Erasmus (c.1466–1536). He was hostile to Scholasticism, which he did not consider a proper basis for Christian life, and put his erudition at the service of religion by promoting learned piety (docta pietas). In 1503, he published Enchiridion militis christiani (Handbook of the Christian Soldier), a guide to the Christian life addressed to laymen in need of spiritual guidance, in which he developed the concept of a philosophia Christi. His most famous work is Moriae encomium (The Praise of Folly), a satirical monologue first published in 1511 that touches upon a variety of social, political, intellectual, and religious issues. In 1524, he published De libero arbitrio (On Free Will), an open attack on one central doctrine of Martin Luther's theology: that the human will is enslaved by sin. Erasmus's analysis hinges on the interpretation of relevant biblical and patristic passages and reaches the conclusion that the human will is extremely weak, but able, with the help of divine grace, to choose the path of salvation.
Humanism also had an impact of overwhelming importance on the development of political thought. With Institutio principis christiani (The Education of a Christian Prince, 1516), Erasmus contributed to the popular genre of humanist advice books for princes. These manuals dealt with the proper ends of government and how best to attain them. Among humanists of the fourteenth century, the most usual proposal was that a strong monarchy should be the best form of government. Petrarca, in his account of princely government that was written in 1373 and took the form of a letter to Francesco da Carrara, argued that cities ought to be governed by princes who accept their office reluctantly and who pursue glory through virtuous actions. His views were repeated in quite a few of the numerous “mirror for princes” (speculum principis) composed during the course of the fifteenth century, such as Giovanni Pontano’s De principe (On the Prince, 1468) and Bartolomeo Sacchi’s De principe (On the Prince, 1471).
Several authors exploited the tensions within the genre of “mirror for princes” in order to defend popular regimes. In Laudatio florentinae urbis (Panegyric of the City of Florence), Bruni maintained that justice can only be assured by a republican constitution. In his view, cities must be governed according to justice if they are to become glorious, and justice is impossible without liberty.
The most important text to challenge the assumptions of princely humanism, however, was Il principe (The Prince), written by the Florentine Niccolò Machiavelli (1469–1527) in 1513, but not published until 1532. A fundamental belief among the humanists was that a ruler needs to cultivate a number of qualities, such as justice and other moral values, in order to acquire honour, glory, and fame. Machiavelli deviated from this view claiming that justice has no decisive place in politics. It is the ruler’s prerogative to decide when to dispense violence and practice deception, no matter how wicked or immoral, as long as the peace of the city is maintained and his share of glory maximized. Machiavelli did not hold that princely regimes were superior to all others. In his less famous, but equally influential, Discorsi sopra la prima deca di Tito Livio (Discourses on the First Ten Books of Titus Livy, 1531), he offers a defense of popular liberty and republican government that takes the ancient republic of Rome as its model.
During the Renaissance, it gradually became possible to take a broader view of philosophy than the traditional Peripatetic framework permitted. No ancient revival had more impact on the history of philosophy than the recovery of Platonism. The rich doctrinal content and formal elegance of Platonism made it a plausible competitor of the Peripatetic tradition. Renaissance Platonism was a product of humanism and marked a sharper break with medieval philosophy. Many Christians found Platonic philosophy safer and more attractive than Aristotelianism. The Neoplatonic conception of philosophy as a way toward union with God supplied many Renaissance Platonists with some of their richest inspiration. The Platonic dialogues were not seen as profane texts to be understood literally, but as sacred mysteries to be deciphered.
Platonism was brought to Italy by the Byzantine scholar George Gemistos Plethon (c.1360–1454), who, during the Council of Florence in 1439, gave a series of lectures that he later reshaped as De differentiis Aristotelis et Platonis (The Differences between Aristotle and Plato). This work, which compared the doctrines of the two philosophers (to Aristotle’s great disadvantage), initiated a controversy regarding the relative superiority of Plato and Aristotle. In the treatise In calumniatorem Platonis (Against the Calumniator of Plato), Cardinal Bessarion (1403–1472) defended Plethon against the charge levelled against his philosophy by the Aristotelian George of Trebizond (1396–1472), who in Comparatio philosophorum Aristotelis et Platonis (A Comparison of the Philosophers Aristotle and Plato) had maintained that Platonism was unchristian and actually a new religion.
The most important Renaissance Platonist was Marsilio Ficino (1433–1499), who translated Plato’s works into Latin and wrote commentaries on several of them. He also translated and commented on Plotinus’s Enneads and translated treatises and commentaries by Porphyry, Iamblichus, Proclus, Synesius, and other Neoplatonists. He considered Plato as part of a long tradition of ancient theology (prisca theologia) that was inaugurated by Hermes and Zoroaster, culminated with Plato, and continued with Plotinus and the other Neoplatonists. Like the ancient Neoplatonists, Ficino assimilated Aristotelian physics and metaphysics and adapted them to Platonic purposes. In his main philosophical treatise, Theologia Platonica de immortalitate animorum (Platonic Theology on the Immortality of Souls, 1482), he put forward his synthesis of Platonism and Christianity as a new theology and metaphysics, which, unlike that of many Scholastics, was explicitly opposed to Averroist secularism. Another work that became very popular was De vita libri tres (Three Books on Life, 1489) by Ficino; it deals with the health of professional scholars and presents a philosophical theory of natural magic.
One of Ficino’s most distinguished associates was Giovanni Pico della Mirandola (1463–1494). He is best known as the author of the celebrated Oratio de hominis dignitate (Oration on the Dignity of Man), which is often regarded as the manifesto of the new Renaissance thinking, but he also wrote several other prominent works. They include Disputationes adversus astrologiam divinatricem (Disputations against Divinatory Astrology), an influential diatribe against astrology; De ente et uno (On Being and the One), a short treatise attempting to reconcile Platonic and Aristotelian metaphysical views; as well as Heptaplus (Seven Days of Creation), a mystical interpretation of the Genesis creation myth. He was not a devout Neoplatonist like Ficino, but rather an Aristotelian by training and in many ways an eclectic by conviction. He wanted to combine Greek, Hebrew, Muslim, and Christian thought into a great synthesis, which he spelled out in nine hundred theses published as Conclusiones in 1486. He planned to defend them publicly in Rome, but three were found heretical and ten others suspect. He defended them in Apologia, which provoked the condemnation of the whole work by Pope Innocent VIII. Pico’s consistent aim in his writings was to exalt the powers of human nature. To this end he defended the use of magic, which he described as the noblest part of natural science, and Kabbalah, a Jewish form of mysticism that was probably of Neoplatonic origin.
Platonic themes were also central to the thought of Nicholas of Cusa (1401–1464), who linked his philosophical activity to the Neoplatonic tradition and authors such as Proclus and Pseudo-Dionysius. The main problem that runs through his works is how humans, as finite created beings, can think about the infinite and transcendent God. His best-known work is De docta ignorantia (On Learned Ignorance, 1440), which gives expression to his view that the human mind needs to realize its own necessary ignorance of what God is like, an ignorance that results from the ontological and cognitive disproportion between God and the finite human knower. Correlated to the doctrine of learned ignorance is that of the coincidence of opposites in God. All things coincide in God in the sense that God, as undifferentiated being, is beyond all opposition. Two other works that are closely connected to De docta ignorantia are De coniecturis (On Conjectures), in which he denies the possibility of exact knowledge, maintaining that all human knowledge is conjectural, and Apologia docta ignorantiae (A Defense of Learned Ignorance, 1449). In the latter, he makes clear that the doctrine of learned ignorance is not intended to deny knowledge of the existence of God, but only to deny all knowledge of God’s nature.
One of the most serious obstacles to the reception and adoption of Platonism in the early fifteenth century was the theory of Platonic love. Many scholars were simply unable to accept Plato’s explicit treatment of homosexuality. Yet by the middle of the sixteenth century this doctrine had become one of the most popular elements of Platonic philosophy. The transformation of Platonic love from an immoral and offensive liability into a valuable asset represents an important episode in the history of Plato’s re-emergence during the Renaissance as a major influence on Western thought.
Bessarion and Ficino did not deny that Platonic love was essentially homosexual in outlook, but they insisted that it was entirely honourable and chaste. To reinforce this point, they associated Platonic discussions of love with those found in the Bible. Another way in which Ficino made Platonic love more palatable to his contemporaries was to emphasise its place within an elaborate system of Neoplatonic metaphysics. But Ficino’s efforts to accommodate the theory to the values of a fifteenth-century audience did not include concealing or denying that Platonic love was homoerotic. Ficino completely accepted the idea that Platonic love involved a chaste relationship between men and endorsed the belief that the soul’s spiritual ascent to ultimate beauty was fuelled by love between men.
In Gli Asolani (1505), the humanist Pietro Bembo (1470–1547) appropriated the language of Platonic love to describe some aspects of the romance between a man and a woman. In this work, love was presented as unequivocally heterosexual. Most of the ideas set out by Ficino are echoed by Bembo. However, Ficino had separated physical love, which had women as its object, from spiritual love, which was shared between men. Bembo’s version of Platonic love, on the other hand, dealt with the relationship between a man and a woman which gradually progresses from a sexual to a spiritual level. The view of Platonic love formulated by Bembo reached its largest audience with the humanist Baldesar Castiglione’s (1478–1529) Il libro del cortegiano (The Courtier, 1528). Castiglione carried on the trend, initiated by Bessarion, of giving Platonic love a strongly religious coloring, and most of the philosophical content is taken from Ficino.
One of the most popular Renaissance treatises on love, Dialoghi d’amore (Dialogues of Love, 1535), was written by the Jewish philosopher Judah ben Isaac Abravanel, also known as Leone Ebreo (c.1460/5–c.1520/5). The work consists of three conversations on love, which he conceives of as the animating principle of the universe and the cause of all existence, divine as well as material. The first dialogue discusses the relation between love and desire; the second the universality of love; and the third, which provides the longest and most sustained philosophical discussion, the origin of love. He draws upon Platonic and Neoplatonic sources, as well as on the cosmology and metaphysics of Jewish and Arabic thinkers, which are combined with Aristotelian sources in order to produce a synthesis of Aristotelian and Platonic views.
Stoicism, Epicureanism, and Skepticism underwent a revival over the course of the fifteenth and sixteenth centuries as part of the ongoing recovery of ancient literature and thought. The revival of Stoicism began with Petrarca, whose renewal of Stoicism moved along two paths. The first one was inspired by Seneca and consisted in the presentation, in works such as De vita solitaria (The Life of Solitude) and De otio religioso (On Religious Leisure), of a way of life in which the cultivation of the scholarly work and ethical perfection are one. The second was his elaboration of Stoic therapy against emotional distress in De secreto conflictu curarum mearum (On the Secret Conflict of My Worries), an inner dialogue of the sort prescribed by Cicero and Seneca, and in De remediis utriusque fortunae (Remedies for Good and Bad Fortune, 1366), a huge compendium based on a short apocryphal tract attributed at the time to Seneca.
While many humanists shared Petrarca’s esteem for Stoic moral philosophy, others called its stern prescriptions into question. They accused the Stoics of suppressing all emotions and criticized their view for its inhuman rigidity. In contrast to the extreme ethical stance of the Stoics, they preferred the more moderate Peripatetic position, arguing that it provides a more realistic basis for morality, since it places the acquisition of virtue within the reach of normal human capacities. Another Stoic doctrine that was often criticized on religious grounds was the conviction that the wise man is entirely responsible for his own happiness and has no need of divine assistance.
The most important exponent of Stoicism during the Renaissance was the Flemish humanist Justus Lipsius (1547–1606), who worked hard to brighten the appeal of Stoicism to Christians. His first Neostoic work was De constantia (On Constancy, 1584), in which he promoted Stoic moral philosophy as a refuge from the horrors of the civil and religious wars that ravaged the continent at the time. His main accounts of Stoicism were Physiologia Stoicorum (Physical Theory of the Stoics) and Manuductio ad stoicam philosophiam (Guide to Stoic Philosophy), both published in 1604. Together they constituted the most learned account of Stoic philosophy produced since antiquity.
During the Middle Ages, Epicureanism was associated with contemptible atheism and hedonist dissipation. In 1417, Bracciolini found Lucretius’s poem De rerum natura, the most informative source on Epicurean teaching, which, together with Ambrogio Traversari’s translation of Diogenes Laertius’s Life of Epicurus into Latin, contributed to a more discriminating appraisal of Epicurean doctrine and a repudiation of the traditional prejudice against the person of Epicurus himself. In a letter written in 1428, Francesco Filelfo (1398–1481) insisted that, contrary to popular opinion, Epicurus was not “addicted to pleasure, lewd and lascivious,” but rather “sober, learned and venerable.” In the epistolary treatise Defensio Epicuri contra Stoicos, Academicos et Peripateticos (Defense of Epicurus against Stoics, Academics and Peripatetics), Cosma Raimondi (d. 1436) vigorously defended Epicurus and the view that the supreme good consists in pleasure both of the mind and the body. He argued that pleasure, according to Epicurus, is not opposed to virtue, but both guided and produced by it. Some humanists tried to harmonize Epicurean with Christian doctrine. In his dialogue De voluptate (On Pleasure, 1431), which was two years later reworked as De vero falsoque bono (On the True and False Good), Valla examined Stoic, Epicurean, and Christian conceptions of the true good. To the ultimate good of the Stoics, that is, virtue practiced for its own sake, Valla opposed that of the Epicureans, represented by pleasure, on the grounds that pleasure comes closer to Christian happiness, which is superior to either pagan ideal.
The revival of ancient philosophy was particularly dramatic in the case of Skepticism, whose revitalisation grew out of many of the currents of Renaissance thought and helped to make the problem of knowledge crucial for early modern philosophy. The major ancient texts stating the Skeptical arguments were little known in the Middle Ages. It was in the fifteenth and sixteenth centuries that Sextus Empiricus’s Outlines of Pyrrhonism and Against the Mathematicians, Cicero’s Academica, and Diogenes Laertius’s Life of Pyrrho started to receive serious philosophical consideration.
The most significant and influential figure in the development of Renaissance Skepticism is Michel de Montaigne (1533–1592). The most thorough presentation of his Skeptical views occurs in Apologie de Raimond Sebond (Apology for Raymond Sebond), the longest and most philosophical of his essays. In it, he developed in a gradual manner the many kinds of problems that make people doubt the reliability of human reason. He considered in detail the ancient Skeptical arguments about the unreliability of information gained by the senses or by reason, about the inability of human beings to find a satisfactory criterion of knowledge, and about the relativity of moral opinions. He concluded that people should suspend judgment on all matters and follow customs and traditions. He combined these conclusions with fideism.
Many Renaissance appropriators of Academic and Pyrrhonian Skeptical arguments did not see any intrinsic value in Skepticism, but rather used it to attack Aristotelianism and disparage the claims of human science. They challenged the intellectual foundations of medieval Scholastic learning by raising serious questions about the nature of truth and about the ability of humans to discover it. In Examen vanitatis doctrinae gentium et veritatis Christianae disciplinae (Examination of the Vanity of Pagan Doctrine and of the Truth of Christian Teaching, 1520), Gianfrancesco Pico della Mirandola (1469–1533) set out to prove the futility of pagan doctrine and the truth of Christianity. He regarded Skepticism as ideally suited to his campaign, since it challenged the possibility of attaining certain knowledge by means of the senses or by reason, but left the scriptures, grounded in divine revelation, untouched. In the first part of the work, he used the Skeptical arguments contained in the works of Sextus Empiricus against the various schools of ancient philosophy; and in the second part he turned Skepticism against Aristotle and the Peripatetic tradition. His aim was not to call everything into doubt, but rather to discredit every source of knowledge except scripture and condemn all attempts to find truth elsewhere as vain.
In a similar way, Agrippa von Nettesheim (1486–1535), whose real name was Heinrich Cornelius, demonstrated in De incertitudine et vanitate scientiarum atque artium (On the Uncertainty and Vanity of the Arts and Sciences, 1530) the contradictions of scientific doctrines. With stylistic brilliance, he described the controversies of the established academic community and dismissed all academic endeavors in view of the finitude of human experience, which in his view comes to rest only in faith.
The fame of the Portuguese philosopher and medical writer Francisco Sanches (1551–1623) rests mainly on Quod nihil scitur (That Nothing Is Known, 1581), one of the best systematic expositions of philosophical Skepticism produced during the sixteenth century. The treatise contains a radical criticism of the Aristotelian notion of science, but besides its critical aim, it had a constructive objective, which posterity has tended to neglect, consisting in Sanches’s quest for a new method of philosophical and scientific inquiry that could be universally applied. This method was supposed to be expounded in another book that was either lost, remained unpublished, or was not written at all.
In 1543, Nicolaus Copernicus (1473–1543) published De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres), which proposed a new calculus of planetary motion based on several new hypotheses, such as heliocentrism and the motion of the earth. The first generation of readers underestimated the revolutionary character of the work and regarded the hypotheses of the work only as useful mathematical fictions. The result was that astronomers appreciated and adopted some of Copernicus’s mathematical models but rejected his cosmology. Yet, the Aristotelian representation of the universe did not remain unchallenged and new visions of nature, its principles, and its mode of operation started to emerge.
During the sixteenth century, there were many philosophers of nature who felt that Aristotle’s system could no longer regulate honest inquiry into nature. Therefore, they stopped trying to adjust the Aristotelian system and turned their backs on it altogether. It is hard to imagine how early modern philosophers, such as Francis Bacon (1561–1626), Pierre Gassendi (1592–1655), and René Descartes (1596–1650), could have cleared the ground for the scientific revolution without the work of novatores such as Bernardino Telesio (1509–1588), Francesco Patrizi (1529–1597), Giordano Bruno (1548–1600), and Tommaso Campanella (1568–1639).
Telesio grounded his system on a form of empiricism, which maintained that nature can only be understood through sense perception and empirical research. In 1586, two years before his death, he published the definitive version of his work De rerum natura iuxta propria principia (On the Nature of Things according to their Own Principles). The book is a frontal assault on the foundations of Peripatetic philosophy, accompanied by a proposal for replacing Aristotelianism with a system more faithful to nature and experience. According to Telesio, the only things that must be presupposed are passive matter and the two principles of heat and cold, which are in perpetual struggle to occupy matter and exclude their opposite. These principles were meant to replace the Aristotelian metaphysical principles of matter and form. Some of Telesio’s innovations were seen as theologically dangerous and his philosophy became the object of vigorous attacks. De rerum natura iuxta propria principia was included on the Index of Prohibited Books published in Rome in 1596.
Through the reading of Telesio’s work, Campanella developed a profound distaste for Aristotelian philosophy and embraced the idea that nature should be explained through its own principles. He rejected the fundamental Aristotelian principle of hylomorphism and adopted instead Telesio’s understanding of reality in terms of the principles of matter, heat, and cold, which he combined with Neoplatonic ideas derived from Ficino. His first published work was Philosophia sensibus demonstrata (Philosophy as Demonstrated by the Senses, 1591), an anti-Peripatetic polemic in defense of Telesio’s system of natural philosophy. Thereafter, he was censured, tortured, and repeatedly imprisoned for his heresies. During the years of his incarceration, he composed many of his most famous works, such as De sensu rerum et magia (On the Sense of Things and On Magic, 1620), which sets out his vision of the natural world as a living organism and displays his keen interest in natural magic; Atheismus triumphatus (Atheism Conquered), a polemic against both reason of state and Machiavelli’s conception of religion as a political invention; and Apologia pro Galileo (Defense of Galileo), a defense of the freedom of thought (libertas philosophandi) of Galileo and of Christian scientists in general. Campanella’s most ambitious work is Metaphysica (1638), which constitutes the most comprehensive presentation of his philosophy and whose aim is to produce a new foundation for the entire encyclopedia of knowledge. His most celebrated work is the utopian treatise La città del sole (The City of the Sun), which describes an ideal model of society that, in contrast to the violence and disorder of the real world, is in harmony with nature.
In contrast to Telesio, who was a fervent critic of metaphysics and insisted on a purely empiricist approach in natural philosophy, Patrizi developed a program in which natural philosophy and cosmology were connected with their metaphysical and theological foundations. His Discussiones peripateticae (Peripatetic Discussions) provides a close comparison of the views of Aristotle and Plato on a wide range of philosophical issues, arguing that Plato’s views are preferable on all counts. Inspired by such Platonic predecessors as Proclus and Ficino, Patrizi elaborated his own philosophical system in Nova de universis philosophia (The New Universal Philosophy, 1591), which is divided into four parts: Panaugia, Panarchia, Pampsychia, and Pancosmia. He saw light as the basic metaphysical principle and interpreted the universe in terms of the diffusion of light. The fourth and last part of the work, in which he expounded his cosmology showing how the physical world derives its existence from God, is by far the most original and important. In it, he replaced the four Aristotelian elements with his own alternatives: space, light, heat, and humidity. Gassendi and Henry More (1614–1687) adopted his concept of space, which indirectly came to influence Newton.
A more radical cosmology was proposed by Bruno, who was an extremely prolific writer. His most significant works include those on the art of memory and the combinatory method of Ramon Llull, as well as the moral dialogues Spaccio de la bestia trionfante (The Expulsion of the Triumphant Beast, 1584), Cabala del cavallo pegaseo (The Kabbalah of the Pegasean Horse, 1585) and De gl’heroici furori (The Heroic Frenzies, 1585). Much of his fame rests on three cosmological dialogues published in 1584: La cena de le ceneri (The Ash Wednesday Supper), De la causa, principio et uno (On the Cause, the Principle and the One) and De l’infinito, universo et mondi (On the Infinite, the Universe and the Worlds). In these, with inspiration from Lucretius, the Neoplatonists, and, above all, Nicholas of Cusa, he elaborates a coherent and strongly articulated ontological monism. Individual beings are conceived as accidents or modes of a unique substance, that is, the universe, which he describes as an animate and infinitely extended unity containing innumerable worlds. Bruno adhered to Copernicus’s cosmology but transformed it, postulating an infinite universe. Although an infinite universe was by no means his invention, he was the first to locate a heliocentric system in infinite space. In 1600, he was burned at the stake by the Inquisition for his heretical teachings.
Even though these new philosophies of nature anticipated some of the defining features of early modern thought, many of their methodological characteristics appeared to be inadequate in the face of new scientific developments. The methodology of Galileo Galilei (1564–1642) and of the other pioneers of the new science was essentially mathematical. Moreover, the development of the new science took place by means of methodical observations and experiments, such as Galileo’s telescopic discoveries and his experiments on inclined planes. The critique of Aristotle’s teaching formulated by natural philosophers such as Telesio, Campanella, Patrizi, and Bruno undoubtedly helped to weaken it, but it was the new philosophy of the early seventeenth century that sealed the fate of the Aristotelian worldview and set the tone for a new age.
- Allen, M. J. B., & Rees, V., eds., Marsilio Ficino: His Theology, his Philosophy, his Legacy (Leiden: Brill, 2002).
- Bellitto, C., & al., eds., Introducing Nicholas of Cusa: A Guide to a Renaissance Man (New York: Paulist Press, 2004).
- Bianchi, L., Studi sull’aristotelismo del Rinascimento (Padua: Il Poligrafo, 2003).
- Blum, P. R., ed., Philosophers of the Renaissance (Washington, D.C.: The Catholic University of America Press, 2010).
- Copenhaver, B. P., & Schmitt, C. B., Renaissance Philosophy (Oxford: Oxford University Press, 1992).
- Damiens, S., Amour et intellect chez Léon l’Hébreu (Toulouse: Edouard Privat Editeur, 1971).
- Dougherty, M. V., ed., Pico della Mirandola: New Essays (Cambridge: Cambridge University Press, 2008).
- Ernst, G., Tommaso Campanella: The Book and the Body of Nature, transl. D. Marshall (Dordrecht: Springer, 2010).
- Fantazzi, C., ed., A Companion to Juan Luis Vives (Leiden: Brill, 2008).
- Gatti, H., ed., Giordano Bruno: Philosopher of the Renaissance (Aldershot: Ashgate, 2002).
- Granada, M. A., La reivindicación de la filosofía en Giordano Bruno (Barcelona: Herder, 2005).
- Guerlac, R., Juan Luis Vives against the Pseudodialecticians: A Humanist Attack on Medieval Logic (Dordrecht: Reidel, 1979).
- Hankins, J., Plato in the Italian Renaissance, 2 vols. (Leiden: Brill, 1990).
- Hankins, J., Humanism and Platonism in the Italian Renaissance, 2 vols. (Rome: Edizioni di storia e letteratura, 2003–4).
- Hankins, J., ed., The Cambridge Companion to Renaissance Philosophy (Cambridge: Cambridge University Press, 2007).
- Headley, J. M., Tommaso Campanella and the Transformation of the World (Princeton, N.J.: Princeton University Press, 1997).
- Kraye, J., Classical Traditions in Renaissance Philosophy (Aldershot: Ashgate, 2002).
- Mack, P., Renaissance Argument: Valla and Agricola in the Traditions of Rhetoric and Dialectic (Leiden: Brill, 1993).
- Mahoney, E. P., Two Aristotelians of the Italian Renaissance: Nicoletto Vernia and Agostino Nifo (Aldershot: Ashgate, 2000).
- Mikkeli, H., An Aristotelian Response to Renaissance Humanism: Jacopo Zabarella on the Nature of Arts and Sciences (Helsinki: Societas Historica Finlandiae, 1992).
- Nauert, C. A., Agrippa and the Crisis of Renaissance Thought (Urbana: University of Illinois Press, 1965).
- Nauta, L., In Defense of Common Sense: Lorenzo Valla’s Humanist Critique of Scholastic Philosophy (Cambridge, MA.: Harvard University Press, 2009).
- Noreña, C. G., Juan Luis Vives (The Hague: Nijhoff, 1970).
- Ong, W. J., Ramus: Method and the Decay of Dialogue (Cambridge, MA.: Harvard University Press, 1958).
- Paganini, G., & Maia Neto, J. R., eds., Renaissance Scepticisms (Dordrecht: Springer, 2009).
- Pine, M. L., Pietro Pomponazzi: Radical Philosopher of the Renaissance (Padova: Antenore, 1986).
- Popkin, R. H., The History of Scepticism from Savonarola to Bayle (Oxford: Oxford University Press, 2003).
- Rummel, E., The Humanist-Scholastic Debate in the Renaissance and Reformation (Cambridge, MA.: Harvard University Press, 1995).
- Schmitt, C. B., Gianfrancesco Pico della Mirandola (1469–1533) and His Critique of Aristotle (The Hague: Nijhoff, 1967).
- Schmitt, C. B., Cicero Scepticus: A Study of the Influence of the Academica in the Renaissance (The Hague: Nijhoff, 1972).
- Schmitt, C. B., Aristotle and the Renaissance (Cambridge, MA.: Harvard University Press, 1983).
- Schmitt, C. B., & al., eds., The Cambridge History of Renaissance Philosophy (Cambridge: Cambridge University Press, 1988).
- Skinner, Q., The Foundations of Modern Political Thought, vol. 1, The Renaissance (Cambridge: Cambridge University Press, 1978).
- Yates, F. A., Giordano Bruno and the Hermetic Tradition (London: Routledge & Kegan Paul, 1964).
Categories: Renaissance Philosophy | http://www.iep.utm.edu/renaissa/ | 13 |
15 | I visit this website daily to discover which e-books are available for free, and a few days ago I noticed Mind Mapping for Kids: How Elementary School Students Can Use Mind Maps to Improve Reading Comprehension and Critical Thinking by Toni Krasnic so I downloaded it. You can find very good books on the website, but when they are first published, the authors offer them for free for a few days to get the word out.
I know that a mind map is a great tool, but I have never really invested the time to learn how to use it properly. Since this book is meant for kids, I thought that the author would do a very good job of explaining how to use mind maps. All the mind maps and worksheets are available for download for free, but you first have to sign up at the author’s website. Although a large percentage of Part III of the book is not relevant to us, the book is worth reading.
Five Big Ideas
- We need to stop pushing information and provide opportunities for our [readers] to pull information that’s of interest to them.
- Under the right circumstances, any topic can be made interesting.
- Take control of your learning to become a self-directed learner.
- Reading for the sake of reading is inadequate. Readers must make meaning from reading for comprehension to take place.
- Before reading, if you’re reading to gather information, or to further your knowledge, write down the questions that you’d like to answer.
Mind Mapping for Kids by Toni Krasnic is meant to help students improve reading comprehension and critical thinking. The intent is to help students become better readers and learners. Since learning is a lifelong process, the approach is suited to adults as well.
“When the learner makes decisions about what concepts to highlight and how to connect them, he or she is demonstrating comprehension. In order to compose a visual map, comprehension must take place, something that cannot be said of traditional note taking. For students struggling with information and abstract ideas, the ability to transform information into visual maps is especially useful in helping them get a fuller picture of what they’re learning.”
When constructing a mind map, two things are critical:
- Identify/add key concepts.
- Organize/connect key concepts correctly and meaningfully.
In addition to discussing what mind maps are, and how to use them, Krasnic does the same for concept maps and explains the differences between the two visual maps. But much of his focus is on mind maps since that is the primary focus of Mind Mapping for Kids.
I was very happy to see that the author included a mind map of how professionals can use mind maps. He emphasized, and I agree, that learning is a lifelong process.
9 Ways to Use Mind Maps
Krasnic surveyed students on how they used mind maps, then revised Mind Mapping for Kids and added information to reflect his findings. What struck me is that the information is relevant to others as well.
- One-Place Repository of Information and Resources: The Pro version of mind mapping software allows you to attach documents, add images, and link to sites, in addition to taking notes. So this is a way for you to keep the resources you need for a project in one place. If you are looking for a digital notebook, and not the core functionalities of mind mapping, use Evernote (SummaReview of Work Smarter with Evernote by Alexandra Samuel and The Evernote Bible by Brandon Collins, a Book Review).
- Holistic Integration of Knowledge: Mind maps allow you to blend new information with what you already know, and you can also expand or collapse them.
- Personal Dashboard to Manage Tasks and Goals: You can use mind maps to plan and manage your day-to-day life.
- Note Taking, Research, and Writing: Keep all the information for a project in one place. Although Evernote is great for this, mind mapping allows you to visually see the info before you write your project report.
- Transparent Thinking: Allows you to get your thoughts on digital paper, so you can clearly see them. And this is a great way to not only clarify your thoughts, but share them with others.
- Improved Memory and Recall: Putting your thoughts on a mind map, as well as combining old and new information, helps to cement information in your brain.
- Increased Creativity Through Free-Form, Non-Linear Thinking: Since mind maps are dynamic, they promote creativity because they’re free form and you are not forced to approach things in a linear way.
- Problem Solving, Decision Making, and Taking Action: Mind maps allow you to see how ideas and concepts relate to each other, which fosters problem solving and decision making and makes it easier to take concrete action.
- Transform Rote Studying Into Self-Directed Learning: Mind maps force you to think about what you have learned so that you can capture it visually. This automatically eliminates rote learning, transforming it into self-directed learning. It’s an active way of learning.
8 Research-Based Reading Strategies
- Summarize: Identify the important ideas.
- Existing Knowledge: Rely on your existing knowledge and previous experiences.
- Connect: Connect the new information to what you already know.
- Visualize: Picture what you are reading.
- Evaluate: Judge what you’re reading: does it make sense? Is it believable?
- Infer: What does the information mean? Dig deeper than the obvious.
- Synthesize: When you look at the complete picture, what insights can you glean?
- Question: Ask questions to understand your reading. And sometimes you read because you have questions. Keep those questions front and centre while you’re reading.
4 Concise Learning Methods
- Acquire key concepts.
- Meaningfully organize and connect key concepts in a visual map (mind map).
- Think critically.
- Ask key questions.
Krasnic combines the eight reading strategies, the four concise learning methods and mind maps to create the concise reading method. As you work your way through the reading strategies to increase your reading comprehension, be mindful that it’s not a standalone step. Create a mind map while you are reading or after you have completed your reading.
While you are creating your mind map, recognize that the eight reading strategies and the four learning methods feed into each other: the better you organize your thoughts and your understanding of each, and the better your mind map, the more you’ll comprehend what you have read. Use the following to remember the eight reading strategies and four learning methods and how they relate to each other, and use it as a basis for creating your mind map.
- Acquiring key concepts goes hand-in-hand with the summarize strategy.
- Organizing and connecting the information in a meaningful way is connected to the existing knowledge, connect, and visualize strategies.
- Critical thinking is related to the evaluate, infer, and synthesize strategies.
- Asking questions feeds into the question strategy.
To relate the mind map to life, Krasnic does an excellent job of attaching questions to each reading strategy to improve comprehension. For instance, when connecting what you’re reading to what you already know, two simple questions are "What do I already know about this topic?" and "What does this information remind me of?" The mere action of asking yourself questions forces you to think about and process the information. In addition, the questions are intelligent ones that are worth answering.
The way to view mind mapping is as an additional tool in your toolkit that will assist you in becoming more productive. I recommend Mind Mapping for Kids: How Elementary School Students Can Use Mind Maps to Improve Reading Comprehension and Critical Thinking by Toni Krasnic.
Book link is affiliate link. | http://theinvisiblementor.com/2013/02/20/summareview-of-mind-mapping-for-kids-by-toni-krasnic/ | 13 |
20 | Abduction, or inference to the best explanation, is a method of reasoning in which one chooses the hypothesis that would, if true, best explain the relevant evidence. Abductive reasoning starts from a set of accepted facts and infers most likely, or best, explanations. The term "abduction" also sometimes only refers to the generation of hypotheses that explain observations or conclusions, but the former definition is more common both in philosophy and computing.
Aristotle discussed abductive reasoning (apagoge, Greek) in his Prior Analytics. Charles Peirce formulated abduction as a method of scientific research and introduced it into modern logic. The concept of abduction is applied beyond logic to the social sciences and the development of artificial intelligence.
Logical reasoning: Deduction, induction, and abduction
There are three kinds of logical reasoning: deduction, induction, and abduction. Given a precondition, a conclusion, and a rule that the precondition implies the conclusion, they can be explained in the following way:
- Deduction means determining the conclusion. It is using the rule and its precondition to make a conclusion. Example: "When it rains, the grass gets wet. It rains. Thus, the grass is wet." Mathematicians are commonly associated with this style of reasoning.
- It allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. A deductive statement is based on accepted truths. For example, all bachelors are unmarried men. It is true by definition and is independent of sense experience.
- Induction means determining the rule. It is learning the rule after numerous examples of the conclusion following the precondition. Example: "The grass has been wet every time it has rained. Thus, when it rains, the grass gets wet." Scientists are commonly associated with this style of reasoning.
- It allows inferring some a from multiple instantiations of b when a entails b. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires perception for it to be true. For example, the statement, "it is snowing outside" is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience.
- Abduction means determining the precondition. It is using the conclusion and the rule to assume that the precondition could explain the conclusion. Example: "When it rains, the grass gets wet. The grass is wet; therefore, it must have rained." Diagnosticians and detectives are commonly associated with this style of reasoning.
- It allows inferring a as an explanation of b. Because of this, abduction allows the precondition a of "a entails b" to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like "a entails b" is used for inference. As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or Post hoc ergo propter hoc), because there are multiple possible explanations for b.
Unlike deduction and induction, abduction can produce results that are incorrect within its formal system. However, it can still be useful as a heuristic, especially when something is known about the likelihood of different causes for b.
In logic, abduction is carried out relative to a logical theory T representing a domain and a set of observations O. Abduction is the process of deriving a set of explanations of O according to T and picking out one of those explanations. For E to be an explanation of O according to T, it should satisfy two conditions:
- O follows from E and T;
- E is consistent with T.
In formal logic, O and E are assumed to be sets of literals. The two conditions for E being an explanation of O according to theory T are formalized as:
- T ∪ E ⊨ O;
- T ∪ E is consistent.
Among the possible explanations E satisfying these two conditions, some other condition of minimality is usually imposed to avoid irrelevant facts (not contributing to the entailment of O) being included in the explanations. Abduction is then the process that picks out some member of E. Criteria for picking out a member representing "the best" explanation include the simplicity, the prior probability, or the explanatory power of the explanation.
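To make these two conditions concrete, here is a small, hedged sketch (not from the original article) that brute-forces candidate explanations over a toy propositional theory; the rain/sprinkler scenario, the variable names, and the helper functions are all invented for illustration.

```python
from itertools import product

VARS = ["rain", "sprinkler", "wet"]

def theory(m):
    # T: if it rains the grass is wet; if the sprinkler ran the grass is wet.
    return (not m["rain"] or m["wet"]) and (not m["sprinkler"] or m["wet"])

def observation(m):
    # O: the grass is wet.
    return m["wet"]

def models(constraint):
    # Every truth assignment over VARS that satisfies the constraint.
    for values in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, values))
        if constraint(m):
            yield m

def is_explanation(e):
    """E explains O under T iff E and T together are consistent and entail O."""
    e_and_t = lambda m: theory(m) and e(m)
    consistent = any(True for _ in models(e_and_t))
    entails_o = all(observation(m) for m in models(e_and_t))
    return consistent and entails_o

candidates = {
    "it rained": lambda m: m["rain"],
    "the sprinkler ran": lambda m: m["sprinkler"],
    "it did not rain": lambda m: not m["rain"],
}

for name, e in candidates.items():
    print(name, "->", is_explanation(e))
# The first two candidates qualify as explanations; the third does not.
```

A minimality condition of the kind mentioned above could then be imposed by preferring, among the qualifying candidates, those that assume the least.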
Proof-theoretical abduction methods for first-order classical logic have been proposed, one based on the sequent calculus and a dual one based on semantic tableaux (analytic tableaux). The methods are sound and complete, and they work for full first-order logic without requiring any preliminary reduction of formulae into normal forms. These methods have also been extended to modal logic.
Abductive logic programming is a computational framework that extends normal logic programming with abduction. It separates the theory T into two components, one of which is a normal logic program, used to generate E by means of backward reasoning, the other of which is a set of integrity constraints, used to filter the set of candidate explanations.
A different formalization of abduction is based on inverting the function that calculates the visible effects of the hypotheses. Formally, one is given a set of hypotheses H and a set of manifestations M; they are related by the domain knowledge, represented by a function e that takes as an argument a set of hypotheses and gives as a result the corresponding set of manifestations. In other words, for every subset of the hypotheses H' ⊆ H, their effects are known to be e(H').
Abduction is performed by finding a set H' ⊆ H such that M ⊆ e(H'). In other words, abduction is performed by finding a set of hypotheses H' such that their effects e(H') include all observations M.
A common assumption is that the effects of the hypotheses are independent, that is, for every H' ⊆ H, it holds that e(H') is the union of e({h}) over all h ∈ H'. If this condition is met, abduction can be seen as a form of set covering.
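The set-covering reading can be sketched in the same hedged spirit; the fault and symptom names below are made up, and the effects table plays the role of the function e under the independence assumption just described.

```python
from itertools import chain, combinations

# e({h}) for each individual hypothesis (independence assumed).
effects = {
    "flat_battery": {"no_lights", "no_start"},
    "empty_tank": {"no_start"},
    "blown_fuse": {"no_lights"},
}

def e(hyps):
    # Effects of a set of hypotheses = union of the individual effect sets.
    return set().union(*(effects[h] for h in hyps)) if hyps else set()

def abduce(observed):
    """Return the smallest hypothesis sets whose effects cover all observations."""
    names = list(effects)
    all_subsets = chain.from_iterable(
        combinations(names, r) for r in range(len(names) + 1))
    covering = [set(s) for s in all_subsets if observed <= e(set(s))]
    if not covering:
        return []
    best = min(len(s) for s in covering)
    return [s for s in covering if len(s) == best]

print(abduce({"no_lights", "no_start"}))
# [{'flat_battery'}] -- a single fault already accounts for both symptoms.
```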
History of the concept
The philosopher Charles Peirce introduced abduction into modern logic. In his works before 1900, he mostly uses the term to mean the use of a known rule to explain an observation; for example, "if it rains, the grass is wet" is a known rule used to explain that the grass is wet. In other words, it would be more technically correct to say, "If the grass is wet, the most probable explanation is that it recently rained."
He later used the term to mean creating new rules to explain new observations, emphasizing that abduction is the only logical process that actually creates anything new. Namely, he described the process of science as a combination of abduction, deduction, and induction, stressing that new knowledge is only created by abduction.
This is contrary to the common use of abduction in the social sciences and in artificial intelligence, where the old meaning is used. Contrary to this use, Peirce stated that the actual process of generating a new rule is not “hampered” by logic rules. Rather, he pointed out that humans have an innate ability to infer correctly; possessing this ability is explained by the evolutionary advantage it gives. Peirce's second use of "abduction" is most similar to induction.
Norwood Russell Hanson, a philosopher of science, wanted to grasp a logic that explained how scientific discoveries take place. He used Peirce's notion of abduction for this.
Further development of the concept can be found in Peter Lipton's Inference to the Best Explanation.
Applications in artificial intelligence include fault diagnosis, belief revision, and automated planning. The most direct application of abduction is that of automatically detecting faults in systems: Given a theory relating faults with their effects and a set of observed effects, abduction can be used to derive sets of faults that are likely to be the cause of the problem.
Abduction can also be used to model automated planning. Given a logical theory relating action occurrences with their effects (for example, a formula of the event calculus), the problem of finding a plan for reaching a state can be modeled as the problem of abducting a set of literals implying that the final state is the goal state.
Belief revision, the process of adapting beliefs in view of new information, is another field in which abduction has been applied. The main problem of belief revision is that the new information may be inconsistent with the corpus of beliefs, while the result of the incorporation cannot be inconsistent. This process can be done by the use of abduction: Once an explanation for the observation has been found, integrating it does not generate inconsistency. This use of abduction is not straightforward, as adding propositional formulae to other propositional formulae can only make inconsistencies worse. Instead, abduction is done at the level of the ordering of preference of the possible worlds.
In the philosophy of science, abduction has been the key inference method to support scientific realism, and much of the debate about scientific realism is focused on whether abduction is an acceptable method of inference.
Abductive validation is the process of validating a given hypothesis through abductive reasoning. Under this principle, an explanation is valid if it is the best possible explanation of a set of known data. The best possible explanation is often defined in terms of simplicity and elegance (such as Occam's razor). Abductive validation is common practice in hypothesis formation in science.
After obtaining results from an inference procedure, we may be left with multiple assumptions, some of which may be contradictory. Abductive validation is a method for identifying the assumptions that will lead to a goal.
- ↑ Paul Edwards, (ed.), The Encyclopedia of Philosophy (New York: Macmillan Publishing Co, Inc., 1967).
- ↑ Tibor Schwendtner, Laszlo Ropolyi, and Olga Kiss (eds.), Hermeneutika és a természettudományok. Áron Kiadó (Budapest, 2001).
- ↑ Peter Lipton, Inference to the Best Explanation (London: Routledge, 1991, ISBN 0415242029).
- ↑ Kave Eshghi, "Abductive planning with the event calculus," in Robert A. Kowalski, Kenneth A. Bowen (eds.), Logic Programming, Proceedings of the Fifth International Conference and Symposium (Seattle, Washington: MIT Press 1988, ISBN 0-262-61056-6).
- ↑ April M. S. McMahon, Understanding Language Change (Cambridge: Cambridge University Press, 1994, ISBN 0-521-44665-1).
- Awbrey, Jon, and Susan Awbrey. 1995. "Interpretation as Action: The Risk of Inquiry." Inquiry: Critical Thinking Across the Disciplines. 15, 40-52.
- Cialdea Mayer, Marta, and Fiora Pirri. 1993. "First order abduction via tableau and sequent calculi." Logic Journal of the IGPL 1: 99-117.
- Edwards, Paul (ed.). 1967. The Encyclopedia of Philosophy. New York: Macmillan Publishing Co, Inc.
- Eiter, T., and G. Gottlob. 1995. "The Complexity of Logic-Based Abduction." Journal of the ACM, 42.1, 3-42.
- Harman, Gilbert. 1965. "The Inference to the Best Explanation." The Philosophical Review. 74:1, 88-95.
- Josephson, John R., and Susan G. Josephson (eds.). 1995. Abductive Inference: Computation, Philosophy, Technology. Cambridge: Cambridge University Press.
- Kowalski, R., and K.A. Bowen. 1988. Logic Programming: Proceedings of the Fifth International Conference and Symposium. Cambridge, MA: MIT Press.
- Lipton, Peter. 2001. Inference to the Best Explanation. London: Routledge. ISBN 0415242029.
- McMahon, A.M.S. 1994. Understanding Language Change. Cambridge: Cambridge University Press. ISBN 0521441196.
- Menzies, T. 1996. "Applications of Abduction: Knowledge-Level Modeling." International Journal of Human-Computer Studies. 45.3, 305-335.
- Yu, Chong Ho. 1994. "Is There a Logic of Exploratory Data Analysis?" Annual Meeting of American Educational Research Association. New Orleans, 1994.
All links retrieved August 11, 2012.
- Chapter 3. Deduction, Induction, and Abduction in article Charles Sanders Peirce of the Stanford Encyclopedia of Philosophy
- What is Abductive Inference? - Uwe Wirth, Frankfurt University
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed. | http://www.newworldencyclopedia.org/entry/Abductive_reasoning | 13 |
20 | Correlation addresses the relationship between two different factors (variables). The statistic is called a correlation coefficient. A correlation coefficient can be calculated when there are two (or more) sets of scores for the same individuals or matched groups.
A correlation coefficient describes direction (positive or negative) and degree (strength) of relationship between two variables. The higher the correlation coefficient, the stronger the relationship. The coefficient is also used to obtain a p value indicating whether the degree of relationship is greater than expected by chance. For correlation, the null hypothesis is that the correlation coefficient = 0.
Examples: Is there a relationship between family income and scores on the SAT? Does amount of time spent studying predict exam grade? How does alcohol intake affect reaction time?
Raw data sheet
The notation X is used for scores on the independent (predictor) variable. Y is used for the scores on the outcome (dependent) variable.
X = score on the 1st variable (predictor)
Y = score on the 2nd variable (outcome)

Subject   Variable 1 (X)   Variable 2 (Y)
1         X1               Y1
2         X2               Y2
3         X3               Y3
4         X4               Y4
5         X5               Y5
6         X6               Y6
7         X7               Y7
8         X8               Y8
Contrast/comparison versus Correlation
The modules on descriptive and inferential statistics describe contrasting groups -- do samples differ on some outcome? ANOVA analyzes central tendency and variability along an outcome variable. Chi-square compares observed with expected outcomes. ANOVA and Chi-square compare different subjects (or the same subjects over time) on the same outcome variable.

Correlation looks at the relative position of the same subjects on different variables.
Correlation can be positive or negative, depending upon the direction of the relationship. If both factors increase and decrease together, the relationship is positive. If one factor increases as the other decreases, then the relationship is negative. It is still a predictable relationship, but an inverse one, changing in the opposite rather than the same direction. Plotting a relationship on a graph (called a scatterplot) provides a picture of the relationship between two factors (variables).
A correlation coefficient can vary from -1.00 to +1.00. The closer the coefficient is to zero (from either + or -), the less strong the relationship. The sign indicates the direction of the relationship: plus (+) = positive, minus (-) = negative. Take a look at the correlation coefficients (on the graph itself) for the 3 examples from the scatterplot tutorial.
Correlations as low as .14 are statistically significant in large samples (e.g., 200 cases or more).
The important point to remember in correlation is that we cannot draw any conclusion about cause. The fact that 2 variables co-vary in either a positive or negative direction does not mean that one is causing the other. Remember the 3 criteria for cause-and-effect and the third variable problem.
There are two different formulas to use in calculating correlation. For normal distributions, use the Pearson Product-moment Coefficient (r). When the data are ranks (1st, 2nd, 3rd, etc.), use the Spearman Rank-order Coefficient (rs). More details are provided in the next two sections.
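As a rough illustration (not part of the original module), the sketch below computes Pearson's r directly from its definition and Spearman's rs by applying the same formula to ranks; the study-time data are invented, and the simple ranking helper does not handle ties.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = math.sqrt(sum((xi - mx) ** 2 for xi in x))
    sy = math.sqrt(sum((yi - my) ** 2 for yi in y))
    return cov / (sx * sy)

def spearman_rs(x, y):
    """Spearman rank-order coefficient: Pearson r computed on the ranks."""
    def ranks(v):  # naive ranking; ties are not handled
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))

hours_studied = [1, 2, 4, 5, 7, 8]        # X: predictor
exam_scores = [52, 60, 63, 70, 80, 85]    # Y: outcome

print(round(pearson_r(hours_studied, exam_scores), 3))
print(round(spearman_rs(hours_studied, exam_scores), 3))
```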
Bivariate and multiple regression
The correlation procedure discussed thus far is called bivariate correlation. That is because there are two factors (variables) involved -- bi = 2. The term regression refers to a diagonal line drawn on the data scatterplot. You saw that in the tutorial. The formula for correlation calculates how close the data points are to the regression line.
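For a concrete picture of the regression line, here is a minimal least-squares sketch of my own, with invented numbers: the slope uses the same deviation sums that appear in Pearson's r, and the intercept places the line through the point of means.

```python
def least_squares_line(x, y):
    """Return (slope, intercept) of the best-fitting line y = intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    return slope, intercept

slope, intercept = least_squares_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(slope, intercept)  # roughly 1.94 and 0.15, i.e. close to y = 2x
```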
Multiple regression is correlation for more than 2 factors. The concept is fairly simple, but the calculation is not, and requires use of a computer program (or many hours of hand calculation).
Next section: Correlation for normally-distributed variables, Pearson r | http://psychology.ucdavis.edu/sommerb/sommerdemo/correlation/intro.htm | 13 |
66 | Induction or inductive reasoning, sometimes called inductive logic, is reasoning which takes us "beyond the confines of our current evidence or knowledge to conclusions about the unknown." The premises of an inductive argument support the conclusion but do not entail it; i.e. they do not ensure its truth. Induction is used to ascribe properties or relations to types based on an observation instance (i.e., on a number of observations or experiences); or to formulate laws based on limited observations of recurring phenomenal patterns. Induction is employed, for example, in using specific propositions such as:
This ice is cold. (or: All ice I have ever touched was cold.)
This billiard ball moves when struck with a cue. (or: Of one hundred billiard balls struck with a cue, all of them moved.)
...to infer general propositions such as:
All ice is cold.
All billiard balls move when struck with a cue.
Another example would be:
3+5=8 and eight is an even number. Therefore, an odd number added to another odd number will result in an even number.
Inductive reasoning has been attacked several times. Historically, David Hume denied its logical admissibility. Sextus Empiricus questioned how the truth of universals can be established by examining some of the particulars. Examining all the particulars is difficult as they are infinite in number. During the twentieth century, thinkers such as Karl Popper and David Miller have disputed the existence, necessity and validity of any inductive reasoning, including probabilistic (Bayesian) reasoning. Some say scientists still rely on induction, but Popper and Miller dispute this: scientists cannot rely on induction simply because it does not exist.
Note that mathematical induction is not a form of inductive reasoning. While mathematical induction may be inspired by the non-base cases, the formulation of a base case firmly establishes it as a form of deductive reasoning.
All observed crows are black.
All crows are black.
This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain. Unless we can systematically falsify the possibility of crows of another colour, the statement (conclusion) may actually be false.
For example, one could examine the bird's genome and learn whether it's capable of producing a differently coloured bird. In doing so, we could discover that albinism is possible, resulting in light-coloured crows. Even if you change the definition of "crow" to require blackness, the original question of the colour possibilities for a bird of that species would stand, only semantically hidden.
A strong induction is thus an argument in which the truth of the premises would make the truth of the conclusion probable, but not necessary.
I always hang pictures on nails.
All pictures hang from nails.
Assuming the first statement to be true, this example is built on the certainty that "I always hang pictures on nails" leading to the generalisation that "All pictures hang from nails". However, the link between the premise and the inductive conclusion is weak. No reason exists to believe that just because one person hangs pictures on nails that there are no other ways for pictures to be hung, or that other people cannot do other things with pictures. Indeed, not all pictures are hung from nails; moreover, not all pictures are hung. The conclusion cannot be strongly inductively made from the premise. Using other knowledge we can easily see that this example of induction would lead us to a clearly false conclusion. Conclusions drawn in this manner are usually overgeneralisations.
Many speeding tickets are given to teenagers.
All teenagers drive fast.
In this example, the premise is built upon a certainty; however, it is not one that leads to the conclusion. Not every teenager observed has been given a speeding ticket. In other words, unlike "The sun rises every morning", there are already plenty of examples of teenagers not being given speeding tickets. Therefore the conclusion drawn can easily be true or false, and the inductive logic does not give us a strong conclusion. In both of these examples of weak induction, the logical means of connecting the premise and conclusion (with the word "therefore") are faulty, and do not give us a strong inductively reasoned statement.
See main article: Problem of induction.
Formal logic, as most people learn it, is deductive rather than inductive. Some philosophers claim to have created systems of inductive logic, but it is controversial whether a logic of induction is even possible. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not have the same degree of certainty as the initial premises. For example, the conclusion that all swans are white is false, but it may have been thought true in Europe until the settlement of Australia and New Zealand, when black swans were discovered.

Inductive arguments are never binding, but they may be cogent. Inductive reasoning is deductively invalid. (An argument in formal logic is valid if and only if it is not possible for the premises of the argument to be true whilst the conclusion is false.) In induction there are always many conclusions that can reasonably be related to certain premises; inductions are open, whereas deductions are closed.

It is, however, possible to arrive at a true statement by inductive reasoning if you already know the conclusion. The only way to have an efficient argument by induction is for the known conclusion to be true only if an unstated underlying condition is true, a condition from which the stated conclusion was built and which has its own criteria that must be met for it to be true. By substituting one for the other, you can work out inductively what evidence you need in order for your induction to be true. For example, suppose you have a window that opens one way but not the other. Assuming you know that the only way this can happen is that the hinges are faulty, you can inductively postulate that the only way to fix the window is to apply oil (whatever will fix the unstated condition). From there on you can successfully build your case. However, if your unstated condition is false, which can only be established by deductive reasoning, then your whole argument by induction collapses. Thus, ultimately, inductive reasoning is not reliable.
The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, but this is not a guarantee that it will always do so. As Hume said, someone who insisted on sound deductive justifications for everything would starve to death.
Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed. Inferences about the past from present evidence, as in archaeology, for instance, count as induction. Induction can also run across space rather than time, for instance in physical cosmology, where conclusions about the whole universe are drawn from the limited perspective we are able to observe (see cosmic variance), or in economics, where national economic policy is derived from local economic performance.
Twentieth-century philosophy has approached induction very differently. Rather than a choice about what predictions to make about the future, induction can be seen as a choice of what concepts to fit to observations or of how to graph or represent a set of observed data. Nelson Goodman posed a "new riddle of induction" by inventing the property "grue" to which induction as a prediction about the future does not apply.
A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population:
The proportion Q of the sample has attribute A.
The proportion Q of the population has attribute A.
How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population, and (b) the randomness of the sample. The hasty generalisation and the biased sample are fallacies related to generalisation.
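The following hedged sketch illustrates point (a): the same observed proportion is a much weaker basis for generalization when the sample is small. The counts and the normal-approximation margin of error are illustrative assumptions, not part of the original text.

```python
import math

def generalize(successes, n, z=1.96):
    """Estimate the population proportion Q from a sample, with a rough
    95% normal-approximation margin of error."""
    q = successes / n
    margin = z * math.sqrt(q * (1 - q) / n)
    return q, margin

for successes, n in [(8, 10), (800, 1000)]:  # invented observation counts
    q, moe = generalize(successes, n)
    print(f"sample of {n}: estimated Q = {q:.2f} +/- {moe:.2f}")
# The estimate is 0.80 in both cases, but the larger sample supports it far more strongly.
```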
A statistical syllogism proceeds from a generalization to a conclusion about an individual.
A proportion Q of population P has attribute A.
An individual I is a member of P.
There is a probability which corresponds to Q that I has A.
Simple induction proceeds from a premise about a sample group to a conclusion about another individual.
Proportion Q of the known instances of population P has attribute A.
Individual I is another member of P.
There is a probability corresponding to Q that I has A.
This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
An argument from analogy proceeds from known similarities between two things to a conclusion about a further similarity:
I has attributes A, B, and C
J has attributes A and B
So, J has attribute C
An analogy relies on the inference that the attributes known to be shared (the similarities) imply that C is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between I and J. The fallacy related to this process is false analogy. As with other forms of inductive argument, even the best reasoning in an argument from analogy can only make the conclusion probable given the truth of the premises, not certain.
Analogical reasoning is very frequent in common sense, science, philosophy and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning. For more information on inferences by analogy, see Juthe, 2005.
A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
A prediction draws a conclusion about a future individual from a past sample.
Proportion Q of observed members of group G have had attribute A.
There is a probability corresponding to Q that other members of group G will have attribute A when next observed.
Of the candidate systems for an inductive logic, the most influential is Bayesianism. This uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change.
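Here is a minimal sketch of repeated Bayesian updating; the prior and the two likelihoods are invented numbers, chosen only to show how the posterior degree of belief moves as new evidence arrives.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) by Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: "all ravens in this region are black."  E: the next observed raven is black.
belief = 0.5
for _ in range(5):  # five black ravens observed in a row
    belief = bayes_update(belief, p_e_given_h=1.0, p_e_given_not_h=0.9)
print(round(belief, 3))  # about 0.629: each observation strengthens the belief a little
```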
There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.
Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference, for instance in choosing axioms for deductive inference; in choosing initial degrees of belief or prior probabilities; or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy (a generalization of the principle of indifference) and transformation groups are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.
Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic.
Based on an analysis of measurement theory (specifically the axiomatic work of Krantz-Luce-Suppes-Tversky), Henry E. Kyburg, Jr. produced a novel account of how error and predictiveness could be mediated by epistemological probability. It explains how one can adopt a rule, such as PV=nRT, even though the new universal generalization produces higher error rates on the measurement of P, V, and T. It remains the most detailed procedural account of induction, in the sense of scientific theory-formation. | http://everything.explained.at/Inductive_reasoning/ | 13 |
50 | Logic is a domain of philosophy concerned with rational criteria that apply to argumentation. Logic includes a study of argumentation within natural language, consistent reasoning, valid argumentation, and errors in reasoning. It is divided into two main domains: formal and informal logic.
Formal logic is the traditional domain of logic in western philosophy. It is a domain that covers logical form, consistency, valid argumentation, and logical systems.
Logical form allows us to symbolize statements by stripping statements of their content. For example, consider the statement “if it will rain today, then the roads will become slippery.” The logical form of this statement would be presented in propositional logic as “if A, then B.” In that case “A” stands for “it will rain today” and “B” stands for “the roads will be slippery.” Logical connectives are kept, such as “if,” “and,” “or,” and “not.”
Logicians don’t usually write statements as “if A, then B.” Instead, they usually use a symbol for logical connectives, such as “→.” We can state “if A, then B” as “A → B.”
Two statements are consistent if it’s possible for them both to be true at the same time. For example, the statement “if it will rain today, then the roads will be slippery” is consistent with the statement “it will not rain today.” Logic provides us with a way to determine when statements are consistent, which is important to us because all true statements about the world are consistent. (Two true statements can never form a contradiction. For example, “Aliens live on another planet” and “aliens don’t live on another planet” form a contradiction, so one of the statements is false.)
We know that two statements are consistent as long as they can both be true at the same time, and contradictory when they can't. Whenever two propositions contradict, one proposition can be symbolized as "A" and the other can be symbolized as "not-A." For example, "it will rain today" contradicts "it will not rain today."
Some statements are also self-contradictory, such as “one person exists and no people exist.” Many self-contradictions can be symbolized as “A and not-A.” These statements are always false.
Tautological statements are always true, such as “either the Moon revolves around the Earth or the Moon doesn’t revolve around the Earth.” Many tautologies can be symbolized as “A or not-A.”
A valid argument has an argument form that could never have true premises and a false conclusion at the same time. For example, “If it will rain today, then the roads will be slippery. It will rain today. Therefore, the roads will be slippery” is valid because it has the argument form “If A, then B. A. Therefore, B.” All arguments with this form are valid.
Logic gives us the tools to determine when an argument is logically valid. If a deductive argument is not logically valid, then it does not provide us with a good reason to agree with the conclusion: even if the premises are true, the conclusion could still be false.
An example of an invalid argument is “At least one person exists. If at least one person exists, then at least one mammal exists. Therefore, no mammals exist.” Although the premises are true, the conclusion is false. This argument does not do what arguments are supposed to do—provide us with a good reason to think the conclusion is true.
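The same enumeration idea can test validity: an argument form is valid when no assignment makes every premise true and the conclusion false. The sketch below (an added illustration, not the author's method) checks modus ponens and the invalid form from the example above:

from itertools import product

def valid(premises, conclusion):
    """Valid iff no truth assignment makes all premises true and the conclusion false."""
    for a, b in product([True, False], repeat=2):
        if all(p(a, b) for p in premises) and not conclusion(a, b):
            return False          # found a counterexample assignment
    return True

implies = lambda a, b: (not a) or b

# "If A, then B.  A.  Therefore, B."  (valid)
print(valid([lambda a, b: implies(a, b), lambda a, b: a], lambda a, b: b))       # True

# "A.  If A, then B.  Therefore, not-B."  (the invalid form above)
print(valid([lambda a, b: a, lambda a, b: implies(a, b)], lambda a, b: not b))   # False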
Logical systems have (1) a formal language that allows us to symbolize statements of natural language, (2) axioms, and (3) rules of inference.
- A formal language is a way we can present the form of our statements involving logical connectives.
- Axioms are statements accepted without proof, such as the law of non-contradiction, which states that contradictions can’t exist.
- Rules of inference are rules that state what premises can be used to validly infer various conclusions. For example, a rule known as “modus ponens” states that we can use “A” and “if A, then B” as premises to validly infer that “B.”
Logical systems are needed in order for us to best determine when statements are consistent or when arguments are valid.
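As a toy illustration of a rule of inference doing work inside a logical system, the following Python sketch (the statements are invented for the example) repeatedly applies modus ponens to a set of known statements and conditionals until nothing new can be derived:

# Forward chaining with a single rule of inference, modus ponens:
# from A and "if A, then B", infer B.

known = {"it will rain today"}
conditionals = [
    ("it will rain today", "the roads will be slippery"),
    ("the roads will be slippery", "driving will be dangerous"),
]

changed = True
while changed:
    changed = False
    for antecedent, consequent in conditionals:
        if antecedent in known and consequent not in known:
            known.add(consequent)     # one application of modus ponens
            changed = True

print(known)   # all three statements end up derivable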
Informal logic is a domain that covers the application of rational argumentation within natural language—how people actually talk. What we call “critical thinking” is often said to involve informal logic, and critical thinking classes generally focus on informal logic. Informal logic mainly focuses on rational argumentation, the distinction between inductive and deductive reasoning, argument identification, premise and conclusion identification, hidden assumption identification, and error identification.
An argument is a series of two or more statements including premises (supporting statements) and conclusions (statements that are supposed to be justified by the premises). For example, “All human beings that had lived in the distant past had died. Therefore, all human beings are probably mortal.”
The idea of rational argumentation is that it is supposed to give us a good reason to believe the conclusion is true. If an argument is good enough, then we should believe the conclusion is true. If an argument is rationally persuasive enough, then it would be irrational to think the conclusion is false. For example, consider the argument “All objects that were dropped near the surface of the Earth fell. Therefore, all objects that are dropped near the surface of the Earth will probably fall.” This argument gives us a good reason to believe the conclusion to be true, and it would seem to be irrational to think it’s false.
The distinction between deductive and inductive reasoning
Deductive arguments are meant to be valid. If the premises are true, then the conclusion is supposed to be inevitable. Inductive arguments are not meant to be valid. If the premises of an inductive argument are true, then the conclusion is supposed to be likely true. If an inductive argument is strong and the premises are true, then it is unlikely for the conclusion to be false.
An example of a valid deductive argument was given above when valid arguments were discussed. Let’s assume that “if it will rain today, then the roads will be wet” and that “it will rain today.” In that case we have no choice but to agree that “the roads will be wet.”
An example of a strong inductive argument was given in the argument involving dropping objects. It is unlikely that dropped objects will not fall in the future assuming that they always fell in the past.
Argument identification

Knowing what arguments are and why people use them helps us know when people give arguments in everyday conversation. It can also be helpful to know the difference between arguments and other similar things. For example, arguments are not mere assertions. A person who gives a mere assertion is telling you what she believes to be true, but a person who gives an argument tells you why she believes we should agree that a conclusion is true.
Premise and conclusion identification
Knowing what premises and conclusions are helps us know how to know which are which in everyday conversation. For example, a person can say “the death penalty is wrong because it kills people.” In this case the premise is “the death penalty kills people” and the conclusion is “the death penalty is wrong.”
Hidden assumption identification
Knowing that an argument is meant to be rationally persuasive can help us realize when hidden assumptions are required by an argument. For example, the argument that “the death penalty is wrong because it kills people” requires the hidden assumption that “it’s always wrong to kill people.” Without that assumption the argument will not be rationally persuasive. If it’s not always wrong to kill people, then perhaps the death penalty is not wrong after all.
Error identification

Knowing about several errors of reasoning (i.e., fallacies) can help us know when people have errors of reasoning in arguments they present in everyday conversation. For example, the argument “my friend Joe never died, so no person will die in the future” contains an error. The problem with this argument is the unjustified assumption that we can know what will happen to everyone in the future based on what happened to a single person given a limited amount of time. This type of error is known as the “hasty generalization” fallacy.
What’s the difference between logic and epistemology?
Epistemology is the philosophical study of knowledge, justification, and rationality. It asks questions, such as the following:
- What is knowledge?
- Is knowledge possible?
- What are the ways we can rationally justify our beliefs?
- When is it irrational for a person to have a belief?
- When should a person agree that a belief is true?
These issues are highly related to logic, and many philosophers have equated “logic” with “epistemology.” For example, the Stoic philosophers included epistemology in their domain of “logic.”
I believe that logic should now be considered to be part of the domain of epistemology. However, for educational purposes it is considered to be a separate subject and it’s not taught in epistemology classes.
Logic classes deal with argument form and certain rational criteria that apply to argumentation, but epistemology classes generally deal with somewhat abstract questions, as were listed above. Perhaps one of the most important issues that logic deals with much less than epistemology is justification—logic tends not to tell us when premises are justified and how well justified they are, but epistemology attempts to tell us when premises are justified, and when a premise is justified enough to rationally require us to believe it’s true.
Why do logic and epistemology classes teach different things? Perhaps because philosophers who have an interest in epistemology have historically not cared as much about logic and vice versa.
But why would philosophers who care about epistemology not care as much about logic? Perhaps because logic tends to be concerned with issues that can be answered with a much higher degree of certainty. We know what arguments are. We know that good arguments must apply certain rational criteria. We can determine when arguments are valid or invalid. We can determine that many arguments have hidden premises or various errors. However, we can’t determine the nature of knowledge, justification, and rationality with that degree of certainty. It is more controversial when a belief is justified and at what point a belief is justified enough to rationally require us to believe it’s true.
What is the essence of logic?
I don’t think that logic has an essence. It’s a domain concerned with certain rational criteria involved with argumentation, but not all criteria. Epistemology also covers related issues. What we consider to be logic or epistemology mainly has to do with a history of philosophers (and mathematicians) who label themselves as “logicians” or “epistemologists” and teach classes in the corresponding domains. These terms are used merely because they are convenient to us.
However, I think we can say that logic is a domain of epistemology that has a restricted focus, and that focus is mainly restricted to issues that we think we can answer with a much higher degree of certainty than usual. Logic and mathematics are now often taken to be part of the same domain, and both generally offer us a degree of certainty higher than the natural sciences. Whenever scientific findings conflict with logic, we are much more likely to think that our scientific findings are false than that our understanding of logic is false.
The same cannot be said of epistemology once logic is removed from it. There are some epistemological issues that do seem to involve a great deal of certainty. I think we should be confident that we should believe that “1+1=2” and that it’s irrational to believe that “1+1=3.” Epistemology tells us what we should believe in that sense. However, there is also a great deal of uncertainty that is usually involved with epistemology. The big questions in epistemology are still very controversial.
In number theory, the Euclidean algorithm (also called Euclid's algorithm) is an algorithm to determine the greatest common divisor (GCD) of two elements of any Euclidean domain (for example, the integers). Its major significance is that it does not require factoring the two integers; it is also one of the oldest algorithms known, dating back to the ancient Greeks.
These algorithms can be used in any context where division with remainder is possible. This includes rings of polynomials over a field as well as the ring of Gaussian integers, and in general all Euclidean domains. Applying the algorithm to the more general case other than natural numbers will be discussed in more detail later in the article.
Expressed recursively, the algorithm is:

function gcd(a, b)
    if b = 0 return a
    else return gcd(b, a mod b)
Expressed iteratively:
function gcd(a, b)
    while b ≠ 0
        t := b
        b := a mod b
        a := t
    return a
A version that replaces the modulo operation with repeated subtraction:

function gcd(a, b)
    if a = 0 return b
    while b ≠ 0
        if a > b
            a := a − b
        else
            b := b − a
    return a
With the recursive algorithm:
gcd(1071, 1029)     The initial arguments
  = gcd(1029, 42)   The second argument is 1071 mod 1029
  = gcd(42, 21)     The second argument is 1029 mod 42
  = gcd(21, 0)      The second argument is 42 mod 21
  = 21              Since b = 0, we return a
With the iterative algorithm:
a = 1071, b = 1029   Step 1: The initial inputs
a = 1029, b = 42     Step 2: The remainder of 1071 divided by 1029 is 42, which becomes the new b; the divisor 1029 becomes the new a
a = 42,   b = 21     Step 3: Repeating the loop, the remainder of 1029 divided by 42 is 21
a = 21,   b = 0      Step 4: 42 is divisible by 21, so the remainder is 0 and the algorithm terminates; the value of a, 21, is the gcd as required
Observe that a ≥ b in each call. If initially b > a, there is no problem; the first iteration effectively swaps the two values.
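For readers who want something directly runnable, here are Python equivalents of the three pseudocode variants above (a sketch added here, not part of the original article), checked against the 1071 and 1029 example just traced:

def gcd_recursive(a, b):
    # gcd(a, b) = a when b is 0, otherwise gcd(b, a mod b)
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    while b != 0:
        a, b = b, a % b
    return a

def gcd_subtraction(a, b):
    # Replaces the modulo operation with repeated subtraction
    if a == 0:
        return b
    while b != 0:
        if a > b:
            a -= b
        else:
            b -= a
    return a

print(gcd_recursive(1071, 1029), gcd_iterative(1071, 1029), gcd_subtraction(1071, 1029))
# all three print 21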
Any common divisor of a and b is also a divisor of r. To see why this is true, consider that r can be written as r = a − qb. Now, if there is a common divisor d of a and b such that a = sd and b = td, then r = (s−qt)d. Since all these numbers, including s−qt, are whole numbers, it can be seen that r is divisible by d.
The above analysis is true for any divisor d; thus, the greatest common divisor of a and b is also the greatest common divisor of b and r. Therefore it is enough if we continue searching for the greatest common divisor with the numbers b and r. Since r is smaller in absolute value than b, we will reach r = 0 after finitely many steps.
When analyzing the running time of Euclid's algorithm, the inputs requiring the most divisions are two successive Fibonacci numbers (because their ratios are the convergents in the slowest continued fraction expansion to converge, that of the golden ratio), as proved by Gabriel Lamé, and the worst case requires O(n) divisions, where n is the number of digits in the input. However, the divisions themselves are not constant time operations; the actual time complexity of the algorithm is O(n²). The reason is that division of two n-bit numbers takes time O(n(m+1)), where m is the length of the quotient. Consider the computation of gcd(a, b) where a and b have at most n bits, let h_0, h_1, ..., h_k be the sequence of numbers produced by the algorithm, and let n_0, n_1, ..., n_k be their lengths. Then k = O(n), and the running time is bounded by O(Σ_i n_i(n_(i−1) − n_i + 2)) ≤ O(n · Σ_i (n_(i−1) − n_i + 2)) ≤ O(n(n_0 + 2k)) = O(n²).
This is considerably better than Euclid's original algorithm, in which the modulus operation is effectively performed using repeated subtraction, taking a number of steps equal to the quotient. Consequently, that version of the algorithm requires time exponential in the number of digits n, or O(m) time for the number m.
Euclid's algorithm is widely used in practice, especially for small numbers, due to its simplicity. An alternative algorithm, the binary GCD algorithm, exploits the binary representation used by computers to avoid divisions and thereby increase efficiency, although it too is O(n²); it merely shrinks the constant hidden by the big-O notation on many real machines.
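A sketch of the binary GCD idea in Python (an illustration of the algorithm referred to above, added here rather than taken from the article): it replaces division with shifts, parity tests, and subtraction, which map well onto binary hardware.

def binary_gcd(a, b):
    if a == 0:
        return b
    if b == 0:
        return a
    shift = 0
    while (a | b) & 1 == 0:      # strip common factors of 2, remembering how many
        a >>= 1
        b >>= 1
        shift += 1
    while a & 1 == 0:            # a can be made odd without affecting the gcd
        a >>= 1
    while b != 0:
        while b & 1 == 0:        # likewise strip factors of 2 from b
            b >>= 1
        if a > b:
            a, b = b, a          # keep a <= b so the subtraction stays non-negative
        b -= a
    return a << shift            # restore the common factors of 2

print(binary_gcd(1071, 1029))    # 21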
There are more complex algorithms that can reduce the running time to O(n (log n)² log log n). See Computational complexity of mathematical operations for more details.
The quotients 1,24,2 count certain squares nested within a rectangle R having length 1071 and width 1029, in the following manner:
(1) there is 1 square of size 1029×1029 in R whose removal leaves a 42×1029 rectangle, R1;
(2) there are 24 squares of size 42×42 in R1 whose removal leaves a 21×42 rectangle, R2;
(3) there are 2 squares of size 21×21 in R2 whose removal leaves nothing.
The "visual Euclidean algorithm" of nested squares applies to an arbitrary rectangle R. If the (length)/(width) of R is an irrational number, then the visual Euclidean algorithm extends to a visual continued fraction.
As an example, consider the ring of polynomials with rational coefficients. In this ring, division with remainder is carried out using long division. The resulting polynomials are then made monic by factoring out the leading coefficient.
We calculate the greatest common divisor of

a(x) = x^4 − 4x^3 + 4x^2 − 3x + 14 = (x^2 − 5x + 7)(x^2 + x + 2)

and

b(x) = x^4 + 8x^3 + 12x^2 + 17x + 6 = (x^2 + 7x + 3)(x^2 + x + 2).

Following the algorithm gives these values:

gcd(a(x), b(x)) = gcd(b(x), x^3 + (2/3)x^2 + (5/3)x − 2/3)
                = gcd(x^3 + (2/3)x^2 + (5/3)x − 2/3, x^2 + x + 2)
                = gcd(x^2 + x + 2, 0)
                = x^2 + x + 2
This agrees with the explicit factorization. For general Euclidean domains, the proof of correctness is by induction on some size function. For the integers, this size function is just the identity. For rings of polynomials over a field, it is the degree of the polynomial (note that each step in the above table reduces the degree by at least one).
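As a quick sanity check of the polynomial computation above, the following sketch uses the SymPy library (assumed to be installed; this is an addition, not part of the original article) to compute the same greatest common divisor over the rationals:

from sympy import symbols, gcd, expand

x = symbols('x')
a = expand((x**2 - 5*x + 7) * (x**2 + x + 2))   # the factored forms given above
b = expand((x**2 + 7*x + 3) * (x**2 + x + 2))
print(gcd(a, b))                                 # x**2 + x + 2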
Locke's Political Philosophy
John Locke (1632–1704) is among the most influential political philosophers of the modern period. In the Two Treatises of Government, he defended the claim that men are by nature free and equal against claims that God had made all people naturally subject to a monarch. He argued that people have rights, such as the right to life, liberty, and property, that have a foundation independent of the laws of any particular society. Locke used the claim that men are naturally free and equal as part of the justification for understanding legitimate political government as the result of a social contract where people in the state of nature conditionally transfer some of their rights to the government in order to better ensure the stable, comfortable enjoyment of their lives, liberty, and property. Since governments exist by the consent of the people in order to protect the rights of the people and promote the public good, governments that fail to do so can be resisted and replaced with new governments. Locke is thus also important for his defense of the right of revolution. Locke also defends the principle of majority rule and the separation of legislative and executive powers. In the Letter Concerning Toleration, Locke denied that coercion should be used to bring people to (what the ruler believes is) the true religion and also denied that churches should have any coercive power over their members. Locke elaborated on these themes in his later political writings, such as the Second Letter on Toleration and Third Letter on Toleration.
For a more general introduction to Locke's history and background, the argument of the Two Treatises, and the Letter Concerning Toleration, see Section 1, Section 3, and Section 4, respectively, of the main entry on John Locke in this encyclopedia. The present entry focuses on seven central concepts in Locke's political philosophy.
- 1. The Law of Nature
- 2. State of Nature
- 3. Property
- 4. Consent, Political Obligation, and the Ends of Government
- 5. Locke and Punishment
- 6. Separation of Powers and the Dissolution of Government
- 7. Toleration
Perhaps the most central concept in Locke's political philosophy is his theory of natural law and natural rights. The natural law concept existed long before Locke as a way of expressing the idea that there were certain moral truths that applied to all people, regardless of the particular place where they lived or the agreements they had made. The most important early contrast was between laws that were by nature, and thus generally applicable, and those that were conventional and operated only in those places where the particular convention had been established. This distinction is sometimes formulated as the difference between natural law and positive law.
Natural law is also distinct from divine law in that the latter, in the Christian tradition, normally referred to those laws that God had directly revealed through prophets and other inspired writers. Natural law can be discovered by reason alone and applies to all people, while divine law can be discovered only through God's special revelation and applies only to those to whom it is revealed and who God specifically indicates are to be bound. Thus some seventeenth-century commentators, Locke included, held that not all of the 10 commandments, much less the rest of the Old Testament law, were binding on all people. The 10 commandments begin “Hear O Israel” and thus are only binding on the people to whom they were addressed (Works 6:37). As we will see below, even though Locke thought natural law could be known apart from special revelation, he saw no contradiction in God playing a part in the argument, so long as the relevant aspects of God's character could be discovered by reason alone. In Locke's theory, divine law and natural law are consistent and can overlap in content, but they are not coextensive. Thus there is no problem for Locke if the Bible commands a moral code that is stricter than the one that can be derived from natural law, but there is a real problem if the Bible teaches what is contrary to natural law. In practice, Locke avoided this problem because consistency with natural law was one of the criteria he used when deciding the proper interpretation of Biblical passages.
In the century before Locke, the language of natural rights also gained prominence through the writings of such thinkers as Grotius, Hobbes, and Pufendorf. Whereas natural law emphasized duties, natural rights normally emphasized privileges or claims to which an individual was entitled. There is considerable disagreement as to how these factors are to be understood in relation to each other in Locke's theory. Leo Strauss, and many of his followers, take rights to be paramount, going so far as to portray Locke's position as essentially similar to that of Hobbes. They point out that Locke defended a hedonist theory of human motivation (Essay 2.20) and claim that he must agree with Hobbes about the essentially self-interested nature of human beings. Locke, they claim, only recognizes natural law obligations in those situations where our own preservation is not in conflict, further emphasizing that our right to preserve ourselves trumps any duties we may have.
On the other end of the spectrum, more scholars have adopted the view of Dunn, Tully, and Ashcraft that it is natural law, not natural rights, that is primary. They hold that when Locke emphasized the right to life, liberty, and property he was primarily making a point about the duties we have toward other people: duties not to kill, enslave, or steal. Most scholars also argue that Locke recognized a general duty to assist with the preservation of mankind, including a duty of charity to those who have no other way to procure their subsistence (Two Treatises 1.42). These scholars regard duties as primary in Locke because rights exist to ensure that we are able to fulfill our duties. Simmons takes a position similar to the latter group, but claims that rights are not just the flip side of duties in Locke, nor merely a means to performing our duties. Instead, rights and duties are equally fundamental because Locke believes in a “robust zone of indifference” in which rights protect our ability to make choices. While these choices cannot violate natural law, they are not a mere means to fulfilling natural law either.
Another point of contestation has to do with the extent to which Locke thought natural law could, in fact, be known by reason. Both Strauss and Peter Laslett, though very different in their interpretations of Locke generally, see Locke's theory of natural law as filled with contradictions. In the Essay Concerning Human Understanding, Locke defends a theory of moral knowledge that negates the possibility of innate ideas (Essay Book 1) and claims that morality is capable of demonstration in the same way that Mathematics is (Essay 3.11.16, 4.3.18–20). Yet nowhere in any of his works does Locke make a full deduction of natural law from first premises. More than that, Locke at times seems to appeal to innate ideas in the Second Treatise (2.11), and in The Reasonableness of Christianity (Works 7:139) he admits that no one has ever worked out all of natural law from reason alone. Strauss infers from this that the contradictions exist to show the attentive reader that Locke does not really believe in natural law at all. Laslett, more conservatively, simply says that Locke the philosopher and Locke the political writer should be kept very separate.
More recent scholarship has tended to reject this position. Yolton, Colman, Ashcraft, Grant, Simmons, Tuckness and others all argue that there is nothing strictly inconsistent in Locke's admission in The Reasonableness of Christianity. That no one has deduced all of natural law from first principles does not mean that none of it has been deduced. The supposedly contradictory passages in the Two Treatises are far from decisive. While it is true that Locke does not provide a deduction in the Essay, it is not clear that he was trying to. Section 4.10.1–19 of that work seems more concerned to show how reasoning with moral terms is possible, not to actually provide a full account of natural law. Nonetheless, it must be admitted that Locke did not treat the topic of natural law as systematically as one might like. Attempts to work out his theory in more detail with respect to its ground and its content must try to reconstruct it from scattered passages in many different texts.
To understand Locke's position on the ground of natural law it must be situated within a larger debate in natural law theory that predates Locke, the so-called “voluntarism-intellectualism,” or “voluntarist-rationalist” debate. At its simplest, the voluntarist declares that right and wrong are determined by God's will and that we are obliged to obey the will of God simply because it is the will of God. Unless these positions are maintained, the voluntarist argues, God becomes superfluous to morality since both the content and the binding force of morality can be explained without reference to God. The intellectualist replies that this understanding makes morality arbitrary and fails to explain why we have an obligation to obey God.
With respect to the grounds and content of natural law, Locke is not completely clear. On the one hand, there are many instances where he makes statements that sound voluntarist to the effect that law requires a law giver with authority (Essay 1.3.6, 4.10.7). Locke also repeatedly insists in the Essays on the Law of Nature that created beings have an obligation to obey their creator (ELN 6). On the other hand there are statements that seem to imply an external moral standard to which God must conform (Two Treatises 2.195; Works 7:6). Locke clearly wants to avoid the implication that the content of natural law is arbitrary. Several solutions have been proposed. One solution suggested by Herzog makes Locke an intellectualist by grounding our obligation to obey God on a prior duty of gratitude that exists independent of God. A second option, suggested by Simmons, is simply to take Locke as a voluntarist since that is where the preponderance of his statements point. A third option, suggested by Tuckness (and implied by Grant), is to treat the question of voluntarism as having two different parts, grounds and content. On this view, Locke was indeed a voluntarist with respect to the question “why should we obey the law of nature?” Locke thought that reason, apart from the will of a superior, could only be advisory. With respect to content, divine reason and human reason must be sufficiently analogous that human beings can reason about what God likely wills. Locke takes it for granted that since God created us with reason in order to follow God's will, human reason and divine reason are sufficiently similar that natural law will not seem arbitrary to us.
Those interested in the contemporary relevance of Locke's political theory must confront its theological aspects. Straussians make Locke's theory relevant by claiming that the theological dimensions of his thought are primarily rhetorical; they are “cover” to keep him from being persecuted by the religious authorities of his day. Others, such as Dunn, take Locke to be of only limited relevance to contemporary politics precisely because so many of his arguments depend on religious assumptions that are no longer widely shared. More recently a number of authors, such as Simmons and Vernon, have tried to separate the foundations of Locke's argument from other aspects of it. Simmons, for example, argues that Locke's thought is over-determined, containing both religious and secular arguments. He claims that for Locke the fundamental law of nature is that “as much as possible mankind is to be preserved” (Two Treatises 135). At times, he claims, Locke presents this principle in rule-consequentialist terms: it is the principle we use to determine the more specific rights and duties that all have. At other times, Locke hints at a more Kantian justification that emphasizes the impropriety of treating our equals as if they were mere means to our ends. Waldron, in his most recent work on Locke, explores the opposite claim: that Locke's theology actually provides a more solid basis for his premise of political equality than do contemporary secular approaches that tend to simply assert equality.
With respect to the specific content of natural law, Locke never provides a comprehensive statement of what it requires. In the Two Treatises, Locke frequently states that the fundamental law of nature is that as much as possible mankind is to be preserved. Simmons argues that in Two Treatises 2.6 Locke presents 1) a duty to preserve one's self, 2) a duty to preserve others when self-preservation does not conflict, 3) a duty not to take away the life of another, and 4) a duty not to act in a way that “tends to destroy” others. Libertarian interpreters of Locke tend to downplay duties of type 1 and 2. Locke presents a more extensive list in his earlier, and unpublished in his lifetime, Essays on the Law of Nature. Interestingly, Locke here includes praise and honor of the deity as required by natural law as well as what we might call good character qualities.
Locke's concept of the state of nature has been interpreted by commentators in a variety of ways. At first glance it seems quite simple. Locke writes “want [lack] of a common judge, with authority, puts all persons in a state of nature” and again, “Men living according to reason, without a common superior on earth, to judge between them, is properly the state of nature.” (Two Treatises 2.19) Many commentators have taken this as Locke's definition, concluding that the state of nature exists wherever there is no legitimate political authority able to judge disputes and where people live according to the law of reason. On this account the state of nature is distinct from political society, where a legitimate government exists, and from a state of war where men fail to abide by the law of reason.
Simmons presents an important challenge to this view. Simmons points out that the above statement is worded as a sufficient rather than necessary condition. Two individuals might be able, in the state of nature, to authorize a third to settle disputes between them without leaving the state of nature, since the third party would not have, for example, the power to legislate for the public good. Simmons also claims that other interpretations often fail to account for the fact that there are some people who live in states with legitimate governments who are nonetheless in the state of nature: visiting aliens (2.9), children below the age of majority (2.15, 118), and those with a “defect” of reason (2.60). He claims that the state of nature is a relational concept describing a particular set of moral relations that exist between particular people, rather than a description of a particular geographical territory. The state of nature is just the way of describing the moral rights and responsibilities that exist between people who have not consented to the adjudication of their disputes by the same legitimate government. The groups just mentioned either have not or cannot give consent, so they remain in the state of nature. Thus A may be in the state of nature with respect to B, but not with C.
Simmons' account stands in sharp contrast to that of Strauss. According to Strauss, Locke presents the state of nature as a factual description of what the earliest society is like, an account that when read closely reveals Locke's departure from Christian teachings. State of nature theories, he and his followers argue, are contrary to the Biblical account in Genesis and evidence that Locke's teaching is similar to that of Hobbes. As noted above, on the Straussian account Locke's apparently Christian statements are only a façade designed to conceal his essentially anti-Christian views. According to Simmons, since the state of nature is a moral account, it is compatible with a wide variety of social accounts without contradiction. If we know only that a group of people are in a state of nature, we know only the rights and responsibilities they have toward one another; we know nothing about whether they are rich or poor, peaceful or warlike.
A complementary interpretation is made by John Dunn with respect to the relationship between Locke's state of nature and his Christian beliefs. Dunn claimed that Locke's state of nature is less an exercise in historical anthropology than a theological reflection on the condition of man. On Dunn's interpretation, Locke's state of nature thinking is an expression of his theological position, that man exists in a world created by God for God's purposes but that governments are created by men in order to further those purposes.
Locke's theory of the state of nature will thus be tied closely to his theory of natural law, since the latter defines the rights of persons and their status as free and equal persons. The stronger the grounds for accepting Locke's characterization of people as free, equal, and independent, the more helpful the state of nature becomes as a device for representing people. Still, it is important to remember that none of these interpretations claims that Locke's state of nature is only a thought experiment, in the way Kant and Rawls are normally thought to use the concept. Locke did not respond to the argument “where have there ever been people in such a state” by saying it did not matter since it was only a thought experiment. Instead, he argued that there are and have been people in the state of nature. (Two Treatises 2.14) It seems important to him that at least some governments have actually been formed in the way he suggests. How much it matters whether they have been or not will be discussed below under the topic of consent, since the central question is whether a good government can be legitimate even if it does not have the actual consent of the people who live under it; hypothetical contract and actual contract theories will tend to answer this question differently.
Locke's treatment of property is generally thought to be among his most important contributions in political thought, but it is also one of the aspects of his thought that has been most heavily criticized. There are important debates over what exactly Locke was trying to accomplish with his theory. One interpretation, advanced by C.B. Macpherson, sees Locke as a defender of unrestricted capitalist accumulation. On Macpherson's interpretation, Locke is thought to have set three restrictions on the accumulation of property in the state of nature: 1) one may only appropriate as much as one can use before it spoils (Two Treatises 2.31), 2) one must leave “enough and as good” for others (the sufficiency restriction) (2.27), and 3) one may (supposedly) only appropriate property through one's own labor (2.27). Macpherson claims that as the argument progresses, each of these restrictions is transcended. The spoilage restriction ceases to be a meaningful restriction with the invention of money because value can be stored in a medium that does not decay (2.46–47). The sufficiency restriction is transcended because the creation of private property so increases productivity that even those who no longer have the opportunity to acquire land will have more opportunity to acquire what is necessary for life (2.37). According to Macpherson's view, the “enough and as good” requirement is itself merely a derivative of a prior principle guaranteeing the opportunity to acquire, through labor, the necessities of life. The third restriction, Macpherson argues, was not one Locke actually held at all. Though Locke appears to suggest that one can only have property in what one has personally labored on when he makes labor the source of property rights, Locke clearly recognized that even in the state of nature, “the Turfs my Servant has cut” (2.28) can become my property. Locke, according to Macpherson, thus clearly recognized that labor can be alienated. As one would guess, Macpherson is critical of the “possessive individualism” that Locke's theory of property represents. He argues that its coherence depends upon the assumption of differential rationality between capitalists and wage-laborers and on the division of society into distinct classes. Because Locke was bound by these constraints, we are to understand him as including only property owners as voting members of society.
Macpherson's understanding of Locke has been criticized from several different directions. Alan Ryan argued that since property for Locke includes life and liberty as well as estate (Two Treatises 2.87), even those without land could still be members of political society. The dispute between the two would then turn on whether Locke was using property in the more expansive sense in some of the crucial passages. James Tully attacked Macpherson's interpretation by pointing out that the First Treatise specifically includes a duty of charity toward those who have no other means of subsistence (1.42). While this duty is consistent with requiring the poor to work for low wages, it does undermine the claim that those who have wealth have no social duties to others.
Tully also argued for a fundamental reinterpretation of Locke's theory. Previous accounts had focused on the claim that since persons own their own labor, when they mix their labor with that which is unowned it becomes their property. Robert Nozick criticized this argument with his famous example of mixing tomato juice one rightfully owns with the sea. When we mix what we own with what we do not, why should we think we gain property instead of losing it? On Tully's account, focus on the mixing metaphor misses Locke's emphasis on what he calls the “workmanship model.” Locke believed that makers have property rights with respect to what they make just as God has property rights with respect to human beings because he is their maker. Human beings are created in the image of God and share with God, though to a much lesser extent, the ability to shape and mold the physical environment in accordance with a rational pattern or plan. Waldron has criticized this interpretation on the grounds that it would make the rights of human makers absolute in the same way that God's right over his creation is absolute. Sreenivasan has defended Tully's argument against Waldron's response by claiming a distinction between creating and making. Only creating generates an absolute property right, and only God can create, but making is analogous to creating and creates an analogous, though weaker, right.
Another controversial aspect of Tully's interpretation of Locke is his interpretation of the sufficiency condition and its implications. On his analysis, the sufficiency argument is crucial for Locke's argument to be plausible. Since Locke begins with the assumption that the world is owned by all, individual property is only justified if it can be shown that no one is made worse off by the appropriation. In conditions where the good taken is not scarce, where there is much water or land available, an individual's taking some portion of it does no harm to others. Where this condition is not met, those who are denied access to the good do have a legitimate objection to appropriation. According to Tully, Locke realized that as soon as land became scarce, previous rights acquired by labor no longer held since “enough and as good” was no longer available for others. Once land became scarce, property could only be legitimated by the creation of political society.
Waldron claims that, contrary to Macpherson, Tully, and others, Locke did not recognize a sufficiency condition at all. He notes that, strictly speaking, Locke makes sufficiency a sufficient rather than necessary condition when he says that labor generates a title to property “at least where there is enough, and as good left in common for others” (Two Treatises 2.27). Waldron takes Locke to be making a descriptive statement, not a normative one, about the condition that happens to have initially existed. Waldron also argues that in the text “enough and as good” is not presented as a restriction and is not grouped with other restrictions. Waldron thinks that the condition would lead Locke to the absurd conclusion that in circumstances of scarcity everyone must starve to death since no one would be able to obtain universal consent and any appropriation would make others worse off.
One of the strongest defenses of Tully's position is presented by Sreenivasan. He argues that Locke's repetitious use of “enough and as good” indicates that the phrase is doing some real work in the argument. In particular, it is the only way Locke can be thought to have provided some solution to the fact that the consent of all is needed to justify appropriation in the state of nature. If others are not harmed, they have no grounds to object and can be thought to consent, whereas if they are harmed, it is implausible to think of them as consenting. Sreenivasan does depart from Tully in some important respects. He takes “enough and as good” to mean “enough and as good opportunity for securing one's preservation,” not “enough and as good of the same commodity (such as land).” This has the advantage of making Locke's account of property less radical since it does not claim that Locke thought the point of his theory was to show that all original property rights were invalid at the point where political communities were created. The disadvantage of this interpretation, as Sreenivasan admits, is that it saddles Locke with a flawed argument. Those who merely have the opportunity to labor for others at subsistence wages no longer have the liberty that individuals had before scarcity to benefit from the full surplus of value they create. Moreover poor laborers no longer enjoy equality of access to the materials from which products can be made. Sreenivasan thinks that Locke's theory is thus unable to solve the problem of how individuals can obtain individual property rights in what is initially owned by all people without consent.
Simmons presents a still different synthesis. He sides with Waldron and against Tully and Sreenivasan in rejecting the workmanship model. He claims that the references to “making” in chapter five of the Two Treatises are not making in the right sense of the word for the workmanship model to be correct. Locke thinks we have property in our own persons even though we do not make or create ourselves. Simmons claims that while Locke did believe that God had rights as creator, human beings have a different limited right as trustees, not as makers. Simmons bases this in part on his reading of two distinct arguments he takes Locke to make: the first justifies property based on God's will and basic human needs, the second based on “mixing” labor. According to the former argument, at least some property rights can be justified by showing that a scheme allowing appropriation of property without consent has beneficial consequences for the preservation of mankind. This argument is overdetermined, according to Simmons, in that it can be interpreted either theologically or as a simple rule-consequentialist argument. With respect to the latter argument, Simmons takes labor not to be a substance that is literally “mixed” but rather as a purposive activity aimed at satisfying needs and conveniences of life. Like Sreenivasan, Simmons sees this as flowing from a prior right of people to secure their subsistence, but Simmons also adds a prior right to self-government. Labor can generate claims to private property because private property makes individuals more independent and able to direct their own actions. Simmons thinks Locke's argument is ultimately flawed because he underestimated the extent to which wage labor would make the poor dependent on the rich, undermining self-government. He also joins the chorus of those who find Locke's appeal to consent to the introduction of money inadequate to justify the very unequal property holdings that now exist.
A final question concerns the status of those property rights acquired in the state of nature after civil society has come into being. It seems clear that at the very least Locke allows taxation to take place by the consent of the majority rather than requiring unanimous consent (2.140). Nozick takes Locke to be a libertarian, with the government having no right to take property to use for the common good without the consent of the property owner. On his interpretation, the majority may only tax at the rate needed to allow the government to successfully protect property rights. At the other extreme, Tully thinks that, by the time government is formed, land is already scarce and so the initial holdings of the state of nature are no longer valid and thus are no constraint on governmental action. Waldron's view is in between these, acknowledging that property rights are among the rights from the state of nature that continue to constrain the government, but seeing the legislature as having the power to interpret what natural law requires in this matter in a fairly substantial way.
The most direct reading of Locke's political philosophy finds the concept of consent playing a central role. His analysis begins with individuals in a state of nature where they are not subject to a common legitimate authority with the power to legislate or adjudicate disputes. From this natural state of freedom and independence, Locke stresses individual consent as the mechanism by which political societies are created and individuals join those societies. While there are of course some general obligations and rights that all people have from the law of nature, special obligations come about only when we voluntarily undertake them. Locke clearly states that one can only become a full member of society by an act of express consent (Two Treatises 2.122). The literature on Locke's theory of consent tends to focus on how Locke does or does not successfully answer the following objection: few people have actually consented to their governments so no, or almost no, governments are actually legitimate. This conclusion is problematic since it is clearly contrary to Locke's intention.
Locke's most obvious solution to this problem is his doctrine of tacit consent. Simply by walking along the highways of a country a person gives tacit consent to the government and agrees to obey it while living in its territory. This, Locke thinks, explains why resident aliens have an obligation to obey the laws of the state where they reside, though only while they live there. Inheriting property creates an even stronger bond, since the original owner of the property permanently put the property under the jurisdiction of the commonwealth. Children, when they accept the property of their parents, consent to the jurisdiction of the commonwealth over that property (Two Treatises 2.120). There is debate over whether the inheritance of property should be regarded as tacit or express consent. On one interpretation, by accepting the property, Locke thinks a person becomes a full member of society, which implies that he must regard this as an act of express consent. Grant suggests that Locke's ideal would have been an explicit mechanism of society whereupon adults would give express consent and this would be a precondition of inheriting property. On the other interpretation, Locke recognized that people inheriting property did not in the process of doing so make any explicit declaration about their political obligation.
However this debate is resolved, there will be in any current or previously existing society many people who have never given express consent, and thus some version of tacit consent seems needed to explain how governments could still be legitimate. Simmons finds it difficult to see how merely walking on a street or inheriting land can be thought of as an example of a “deliberate, voluntary alienating of rights” (69). It is one thing, he argues, for a person to consent by actions rather than words; it is quite another to claim a person has consented without being aware that they have done so. To require a person to leave behind all of their property and emigrate in order to avoid giving tacit consent is to create a situation where continued residence is not a free and voluntary choice. Simmons' approach is to agree with Locke that real consent is necessary for political obligation but disagree about whether most people in fact have given that kind of consent. Simmons claims that Locke's arguments push toward “philosophical anarchism,” the position that most people do not have a moral obligation to obey the government, even though Locke himself would not have made this claim.
Hannah Pitkin takes a very different approach. She claims that the logic of Locke's argument makes consent far less important in practice than it might appear. Tacit consent is indeed a watering down of the concept of consent, but Locke can do this because the basic content of what governments are to be like is set by natural law and not by consent. If consent were truly foundational in Locke's scheme, we would discover the legitimate powers of any given government by finding out what contract the original founders signed. Pitkin, however, thinks that for Locke the form and powers of government are determined by natural law. What really matters, therefore, is not previous acts of consent but the quality of the present government, whether it corresponds to what natural law requires. Locke does not think, for example, that walking the streets or inheriting property in a tyrannical regime means we have consented to that regime. It is thus the quality of the government, not acts of actual consent, that determine whether a government is legitimate. Simmons objects to this interpretation, saying that it fails to account for the many places where Locke does indeed say a person acquires political obligations only by his own consent.
John Dunn takes a still different approach. He claims that it is anachronistic to read into Locke a modern conception of what counts as “consent.” While modern theories do insist that consent is truly consent only if it is deliberate and voluntary, Locke's concept of consent was far more broad. For Locke, it was enough that people be “not unwilling.” Voluntary acquiescence, on Dunn's interpretation, is all that is needed. As evidence Dunn can point to the fact that many of the instances of consent Locke uses, such as “consenting” to the use of money, make more sense on this broad interpretation. Simmons objects that this ignores the instances where Locke does talk about consent as a deliberate choice and that, in any case, it would only make Locke consistent at the price of making him unconvincing.
A related question has to do with the extent of our obligation once consent has been given. The interpretive school influenced by Strauss emphasizes the primacy of preservation. Since the duties of natural law apply only when our preservation is not threatened (2.6), then our obligations cease in cases where our preservation is directly threatened. This has important implications if we consider a soldier who is being sent on a mission where death is extremely likely. Grant points out that Locke believes a soldier who deserts from such a mission (Two Treatises 2.139) is justly sentenced to death. Grant takes Locke to be claiming not only that desertion laws are legitimate in the sense that they can be blamelessly enforced (something Hobbes would grant) but that they also imply a moral obligation on the part of the soldier to give up his life for the common good (something Hobbes would deny). According to Grant, Locke thinks that our acts of consent can in fact extend to cases where living up to our commitments will risk our lives. The decision to enter political society is a permanent one for precisely this reason: the society will have to be defended and if people can revoke their consent to help protect it when attacked, the act of consent made when entering political society would be pointless since the political community would fail at the very point where it is most needed. People make a calculated decision when they enter society, and the risk of dying in combat is part of that calculation. Grant also thinks Locke recognizes a duty based on reciprocity since others risk their lives as well.
Most of these approaches focus on Locke's doctrine of consent as a solution to the problem of political obligation. A different approach asks what role consent plays in determining, here and now, the legitimate ends that governments can pursue. One part of this debate is captured by the debate between Seliger and Kendall, the former viewing Locke as a constitutionalist and the latter viewing him as giving almost untrammeled power to majorities. On the former interpretation, a constitution is created by the consent of the people as part of the creation of the commonwealth. On the latter interpretation, the people create a legislature which rules by majority vote. A third view, advanced by Tuckness, holds that Locke was flexible at this point and gave people considerable flexibility in constitutional drafting.
A second part of the debate focuses on ends rather than institutions. Locke states in the Two Treatises that the power of the Government is limited to the public good. It is a power that hath “no other end but preservation” and therefore cannot justify killing, enslaving, or plundering the citizens. (2.135). Libertarians like Nozick read this as stating that governments exist only to protect people from infringements on their rights. An alternate interpretation, advanced in different ways by Tuckness, draws attention to the fact that in the following sentences the formulation of natural law that Locke focuses on is a positive one, that “as much as possible” mankind is to be preserved. On this second reading, government is limited to fulfilling the purposes of natural law, but these include positive goals as well as negative rights. On this view, the power to promote the common good extends to actions designed to increase population, improve the military, strengthen the economy and infrastructure, and so on, provided these steps are indirectly useful to the goal of preserving the society. This would explain why Locke, in the Letter, describes government promotion of “arms, riches, and multitude of citizens” as the proper remedy for the danger of foreign attack (Works 6: 42).
John Locke defined political power as “a Right of making Laws with Penalties of Death, and consequently all less Penalties” (Two Treatises 2.3). Locke’s theory of punishment is thus central to his view of politics and part of what he considered innovative about his political philosophy. But he also referred to his account of punishment as a “very strange doctrine” (2.9), presumably because it ran against the assumption that only political sovereigns could punish. Locke believed that punishment requires that there be a law, and since the state of nature has the law of nature to govern it, it is permissible to describe one individual as “punishing” another in that state. Locke’s rationale is that since the fundamental law of nature is that mankind be preserved and since that law would “be in vain” with no human power to enforce it, it must therefore be legitimate for individuals to punish each other even before government exists. In arguing this, Locke was disagreeing with Samuel Pufendorf, who had argued strongly that the concept of punishment made no sense apart from an established positive legal structure.
Locke realized that the crucial objection to allowing people to act as judges with power to punish in the state of nature was that such people would end up being judges in their own cases. Locke readily admitted that this was a serious inconvenience and a primary reason for leaving the state of nature (Two Treatises 2.13). Locke insisted on this point because it helped explain the transition into civil society. Locke thought that in the state of nature men had a liberty to engage in “innocent delights” (actions that are not a violation of any applicable laws), to seek their own preservation within the limits of natural law, and to punish violations of natural law. The power to seek one’s preservation is limited in civil society by the law and the power to punish is transferred to the government. (128–130). The power to punish in the state of nature is thus the foundation for the right of governments to use coercive force.
The situation becomes more complex, however, if we look at the principles which are to guide punishment. Rationales for punishment are often divided into those that are forward-looking and backward-looking. Forward-looking rationales include deterring crime, protecting society from dangerous persons, and rehabilitation of criminals. Backward-looking rationales normally focus on retribution, inflicting on the criminal harm comparable to the crime. Locke may seem to conflate these two rationales in passages like the following:
And thus in the State of Nature, one Man comes by a Power over another; but yet no Absolute or Arbitrary Power, to use a Criminal when he has got him in his hands, according to the passionate heats, or boundless extravagancy of his own Will, but only to retribute to him, so far as calm reason and conscience dictates, what is proportionate to his Transgression, which is so much as may serve for Reparation and Restraint. For these two are the only reasons, why one Man may lawfully do harm to another, which is that [which] we call punishment. (Two Treatises 2.8)
Locke talks both of retribution and of punishing only for reparation and restraint. Some have argued that this is evidence that Locke is combining both rationales for punishment in his theory (Simmons 1992). A survey of other seventeenth-century natural rights justifications for punishment, however, indicates that it was common to use words like “retribute” in theories that reject what we would today call retributive punishment. In the passage quoted above, Locke is saying that the proper amount of punishment is the amount that will provide restitution to injured parties, protect the public, and deter future crime. Locke’s attitude toward punishment in his other writings on toleration, education, and religion consistently follows this path toward justifying punishment on grounds other than retribution. His emphasis on restitution is interesting because restitution is backward looking in a sense (it seeks to restore an earlier state of affairs) but also forward looking in that it provides tangible benefits to those who receive the restitution (Tuckness 2010). There is a link here between Locke’s understanding of natural punishment and his understanding of legitimate state punishment. Even in the state of nature, a primary justification for punishment is that it helps further the positive goal of preserving human life and human property. The emphasis on deterrence, public safety, and restitution in punishments administered by the government mirrors this emphasis.
A second puzzle regarding punishment is the permissibility of punishing internationally. Locke describes international relations as a state of nature, and so in principle, states should have the same power to punish breaches of the natural law in the international community that individuals have in the state of nature. This would legitimize, for example, punishment of individuals for war crimes or crimes against humanity even in cases where neither the laws of the particular state nor international law authorize punishment. Thus in World War II, even if “crimes of aggression” were not at the time recognized as crimes for which individual punishment was justified, if the actions violated the natural law principle that one should not deprive another of life, liberty, or property, the guilty parties could still be liable to criminal punishment. The most common interpretation has thus been that the power to punish internationally is symmetrical with the power to punish in the state of nature.
Recent scholarship, however, has argued that there is an asymmetry between the two cases because Locke also talks about states being limited in the goals that they can pursue. Locke often says that the power of the government is to be used for the protection of the rights of its own citizens, not for the rights of all people everywhere (Two Treatises 1.92, 2.88, 2.95, 2.131, 2.147). Locke argues that in the state of nature a person is to use the power to punish to preserve his society, that is, mankind as a whole. After states are formed, however, the power to punish is to be used for the benefit of his own particular society (Tuckness 2008). In the state of nature, a person is not required to risk his life for another (Two Treatises 2.6) and this presumably would also mean a person is not required to punish in the state of nature when attempting to punish would risk the life of the punisher. Locke may therefore be objecting to the idea that soldiers can be compelled to risk their lives for altruistic reasons. In the state of nature, a person could refuse to attempt to punish others if doing so would risk his life and so Locke reasons that individuals may not have consented to allow the state to risk their lives for altruistic punishment of international crimes.
Locke claims that legitimate government is based on the idea of separation of powers. First and foremost of these is the legislative power. Locke describes the legislative power as supreme (Two Treatises 2.149) in having ultimate authority over “how the force for the commonwealth shall be employed” (2.143). The legislature is still bound by the law of nature and much of what it does is to set down laws that further the goals of natural law and specify appropriate punishments for them (2.135). The executive power is then charged with enforcing the law as it is applied in specific cases. Interestingly, Locke’s third power is called the “federative power” and it consists of the right to act internationally according to the law of nature. Since countries are still in the state of nature with respect to each other, they must follow the dictates of natural law and can punish one another for violations of that law in order to protect the rights of their citizens.
The fact that Locke does not mention the judicial power as a separate power becomes clearer if we distinguish powers from institutions. Powers relate to functions. To have a power means that there is a function (such as making the laws or enforcing the laws) that one may legitimately perform. When Locke says that the legislative is supreme over the executive, he is not saying that parliament is supreme over the king. Locke is simply affirming that “what can give laws to another, must needs be superior to him” (Two Treatises 2.150). Moreover, Locke thinks that it is possible for multiple institutions to share the same power; for example, the legislative power in his day was shared by the House of Commons, the House of Lords, and the King. Since all three needed to agree for something to become law, all three are part of the legislative power (2.151). He also thinks that the federative power and the executive power are normally placed in the hands of the executive, so it is possible for the same person to exercise more than one power (or function). There is, therefore, no one-to-one correspondence between powers and institutions (Tuckness 2002a).
Locke is not opposed to having distinct institutions called courts, but he does not see interpretation as a distinct function or power. For Locke, legislation is primarily about announcing a general rule stipulating what types of actions should receive what types of punishments. The executive power is the power to make the judgments necessary to apply those rules to specific cases and administer force as directed by the rule (Two Treatises 2.88–89). Both of these actions involve interpretation. Locke states that positive laws “are only so far right, as they are founded on the Law of Nature, by which they are to be regulated and interpreted” (2.12). In other words, the executive must interpret the laws in light of its understanding of natural law. Similarly, legislation involves making the laws of nature more specific and determining how to apply them to particular circumstances (2.135), which also calls for interpreting natural law. Locke did not think of interpreting law as a distinct function because he thought it was a part of both the legislative and executive functions (Tuckness 2002a).
If we compare Locke’s formulation of separation of powers to the later ideas of Montesquieu, we see that they are not so different as they may initially appear. Although Montesquieu gives the better-known division of legislative, executive, and judicial, as he explains what he means by these terms he reaffirms the superiority of the legislative power and describes the executive power as having to do with international affairs (Locke’s federative power) and the judicial power as concerned with the domestic execution of the laws (Locke’s executive power). It is the terminology, more than the concepts, that has changed. Locke considered arresting a person, trying a person, and punishing a person as all part of the function of executing the law rather than as a distinct function.
Locke believed that it was important that the legislative power contain an assembly of elected representatives, but as we have seen the legislative power could contain monarchical and aristocratic elements as well. Locke believed the people had the freedom to create “mixed” constitutions that utilize all of these. For that reason, Locke’s theory of separation of powers does not dictate one particular type of constitution and does not preclude unelected officials from having part of the legislative power. Locke was more concerned that the people have representatives with sufficient power to block attacks on their liberty and attempts to tax them without justification. This is important because Locke also affirms that the community remains the real supreme power throughout. The people retain the right to “remove or alter” the legislative power (Two Treatises 2.149). This can happen for a variety of reasons. The entire society can be dissolved by a successful foreign invasion (2.211), but Locke is more interested in describing the occasions when the people take power back from the government to which they have entrusted it. If the rule of law is ignored, if the representatives of the people are prevented from assembling, if the mechanisms of election are altered without popular consent, or if the people are handed over to a foreign power, then they can take back their original authority and overthrow the government (2.212–17). They can also rebel if the government attempts to take away their rights (2.222). Locke thinks this is justifiable since oppressed people will likely rebel anyway and those who are not oppressed will be unlikely to rebel. Moreover, the threat of possible rebellion makes tyranny less likely to start with (2.224–6). For all these reasons, while there are a variety of legitimate constitutional forms, the delegation of power under any constitution is understood to be conditional.
Locke’s understanding of separation of powers is complicated by the doctrine of prerogative. Prerogative is the right of the executive to act without explicit authorization from the law, or even contrary to the law, in order to better fulfill the laws that seek the preservation of human life. A king might, for example, order that a house be torn down in order to stop a fire from spreading throughout a city (Two Treatises 2.159). Locke defines it more broadly as “the power of doing public good without a rule” (2.166). This poses a challenge to Locke’s doctrine of legislative supremacy. Locke handles this by explaining that the rationale for this power is that general rules cannot cover all possible cases and that inflexible adherence to the rules would be detrimental to the public good and that the legislature is not always in session to render a judgment (2.160). The relationship between the executive and the legislature depends on the specific constitution. If the chief executive has no part in the supreme legislative power, then the legislature could overrule the executive’s decisions based on prerogative when it reconvenes. If, however, the chief executive has a veto, the result would be a stalemate between them. Locke describes a similar stalemate in the case where the chief executive has the power to call parliament and can thus prevent it from meeting by refusing to call it into session. In such a case, Locke says, there is no judge on earth between them as to whether the executive has misused prerogative and both sides have the right to “appeal to heaven” in the same way that the people can appeal to heaven against a tyrannical government (2.168).
The concept of an “appeal to heaven” is an important concept in Locke’s thought. Locke assumes that people, when they leave the state of nature, create a government with some sort of constitution that specifies which entities are entitled to exercise which powers. Locke also assumes that these powers will be used to protect the rights of the people and to promote the public good. In cases where there is a dispute between the people and the government about whether the government is fulfilling its obligations, there is no higher human authority to which one can appeal. The only appeal left, for Locke, is the appeal to God. The “appeal to heaven,” therefore, involves taking up arms against your opponent and letting God judge who is in the right.
In his Letter Concerning Toleration, Locke develops several lines of argument that are intended to establish the proper spheres for religion and politics. His central claims are that government should not use force to try to bring people to the true religion and that religious societies are voluntary organizations that have no right to use coercive power over their own members or those outside their group. One recurring line of argument that Locke uses is explicitly religious. Locke argues that neither the example of Jesus nor the teaching of the New Testament gives any indication that force is a proper way to bring people to salvation. He also frequently points out what he takes to be clear evidence of hypocrisy, namely that those who are so quick to persecute others for small differences in worship or doctrine are relatively unconcerned with much more obvious moral sins that pose an even greater threat to their eternal state.
In addition to these and similar religious arguments, Locke gives three reasons that are more philosophical in nature for barring governments from using force to encourage people to adopt religious beliefs (Works 6:10–12). First, he argues that the care of men's souls has not been committed to the magistrate by either God or the consent of men. This argument resonates with the structure of argument used so often in the Two Treatises to establish the natural freedom and equality of mankind. There is no command in the Bible telling magistrates to bring people to the true faith and people could not consent to such a goal for government because it is not possible for people, at will, to believe what the magistrate tells them to believe. Their beliefs are a function of what they think is true, not what they will. Locke's second argument is that since the power of the government is only force, while true religion consists of genuine inward persuasion of the mind, force is incapable of bringing people to the true religion. Locke's third argument is that even if the magistrate could change people's minds, a situation where everyone accepted the magistrate's religion would not bring more people to the true religion. Many of the magistrates of the world believe religions that are false.
Locke's contemporary, Jonas Proast, responded by saying that Locke's three arguments really amount to just two, that true faith cannot be forced and that we have no more reason to think that we are right than anyone else has. Proast argued that force can be helpful in bringing people to the truth “indirectly, and at a distance.” His idea was that although force cannot directly bring about a change of mind or heart, it can cause people to consider arguments that they would otherwise ignore or prevent them from hearing or reading things that would lead them astray. If force is indirectly useful in bringing people to the true faith, then Locke has not provided a persuasive argument. As for Locke's argument about the harm of a magistrate whose religion is false using force to promote it, Proast claimed that this was irrelevant since there is a morally relevant difference between affirming that the magistrate may promote the religion he thinks true and affirming that he may promote the religion that actually is true. Proast thought that unless one was a complete skeptic, one must believe that the reasons for one's own position are objectively better than those for other positions.
Jeremy Waldron (1993), in an influential article, restated the substance of Proast's objection for a contemporary audience. He argued that, leaving aside Locke's Christian arguments, his main position was that it was instrumentally irrational, from the perspective of the persecutor, to use force in matters of religion because force acts only on the will and belief is not something that we change at will. Waldron pointed out that this argument blocks only one particular reason for persecution, not all reasons. Thus it would not stop someone who used religious persecution for some end other than religious conversion, such as preserving the peace. Even in cases where persecution does have a religious goal, Waldron agrees with Proast that force may be indirectly effective in changing people's beliefs. Much of the current discussion about Locke's contribution to contemporary political philosophy in the area of toleration centers on whether Locke has a good reply to these objections from Proast and Waldron.
Some contemporary commentators try to rescue Locke's argument by redefining the religious goal that the magistrate is presumed to seek. Susan Mendus, for example, notes that successful brainwashing might cause a person to sincerely utter a set of beliefs, but that those beliefs might still not count as genuine. Beliefs induced by coercion might be similarly problematic. Paul Bou Habib argues that what Locke is really after is sincere inquiry and that Locke thinks inquiry undertaken only because of duress is necessarily insincere. These approaches thus try to save Locke's argument by showing that force really is incapable of bringing about the desired religious goal.
Other commentators focus on Locke's first argument about proper authority, and particularly on the idea that authorization must be by consent. David Wootton argues that even if force occasionally works at changing a person's belief, it does not work often enough to make it rational for persons to consent to the government exercising that power. A person who has good reason to think he will not change his beliefs even when persecuted has good reason to prevent the persecution scenario from ever happening. Richard Vernon argues that we want not only to hold right beliefs, but also to hold them for the right reasons. Since the balance of reasons rather than the balance of force should determine our beliefs, we would not consent to a system in which irrelevant reasons for belief might influence us.
Other commentators focus on the third argument, that the magistrate might be wrong. Here the question is whether Locke's argument is question begging or not. The two most promising lines of argument are the following. Wootton argues that there are very good reasons, from the standpoint of a given individual, for thinking that governments will be wrong about which religion is true. Governments are motivated by the quest for power, not truth, and are unlikely to be good guides in religious matters. Since there are so many different religions held by rulers, if only one is true then likely my own ruler's views are not true. Wootton thus takes Locke to be showing that it is irrational, from the perspective of the individual, to consent to government promotion of religion. A different interpretation of the third argument is presented by Tuckness. He argues that the likelihood that the magistrate may be wrong generates a principle of toleration based on what is rational from the perspective of a legislator, not the perspective of an individual citizen. Drawing on Locke's later writings on toleration, he argues that Locke's theory of natural law assumes that God, as author of natural law, takes into account the fallibility of those magistrates who will carry out the commands of natural law. If “use force to promote the true religion” were a command of natural law addressed to all magistrates, it would not promote the true religion in practice because so many magistrates wrongly believe that their religion is the true one. Tuckness claims that in Locke's later writings on toleration he moved away from arguments based on what it is instrumentally rational for an individual to consent to. Instead, he emphasized testing proposed principles based on whether they would still fulfill their goal if universally applied by fallible human beings.
- Filmer, Robert, Patriarcha and Other Writings, Johann P. Sommerville (ed.), Cambridge: Cambridge University Press, 1991.
- Hooker, Richard, 1594, Of the Laws of Ecclesiastical Polity, A. S. McGrade (ed.), Cambridge: Cambridge University Press, 1975.
- Locke, John, Works, 10 vols. London, 1823; reprinted, Aalen: Scientia Verlag, 1963.
- –––, 1690, An Essay Concerning Human Understanding, Peter H. Nidditch (ed.), Oxford: Clarendon Press, 1975.
- –––, 1689, Letter Concerning Toleration, James Tully (ed.), Indianapolis: Hackett Publishing Company, 1983.
- –––, 1689, Two Treatises of Government, P. Laslett (ed.), Cambridge: Cambridge University Press, 1988.
- –––, 1693, Some Thoughts Concerning Education; and On the Conduct of the Understanding, Ruth Grant and Nathan Tarcov (eds.), Indianapolis: Hackett, 1996.
- –––, Political Essays, Mark Goldie (ed.), Cambridge: Cambridge University Press, 1997.
- –––, An Essay Concerning Toleration and Other Writings on Law and Politics, 1667–1683, J.R. Milton and Phillip Milton (eds.), Oxford: Clarendon Press, 2006.
- Montesquieu, 1748, The Spirit of the Laws, Anne Cohler, Basia Miller, and Harold Stone (trans. and eds.), Cambridge: Cambridge University Press, 1989.
- Proast, Jonas, 1690, The Argument of the Letter concerning Toleration Briefly Consider'd and Answered, in The Reception of Locke's Politics, vol. 5, Mark Goldie (ed.), London: Pickering & Chatto, 1999.
- –––, 1691, A Third Letter to the Author of …, in The Reception of Locke's Politics, vol. 5, Mark Goldie (ed.), London: Pickering & Chatto, 1999.
- Pufendorf, Samuel, 1672, De Jure Naturae et Gentium (Volume 2), Oxford: Clarendon Press, 1934.
- Aaron, Richard, 1937, John Locke, Oxford: Oxford University Press.
- Armitage, David, 2004, “John Locke, Carolina, and the Two Treatises of Government”, Political Theory, 32: 602–627.
- Arneil, Barbara, 1996, John Locke and America, Oxford: Clarendon Press.
- Ashcraft, Richard, 1986, Revolutionary Politics and Locke's Two Treatises of Government, Princeton: Princeton University Press.
- Ashcraft, Richard, 1987, Locke's Two Treatises of Government, London: Unwin Hyman Ltd.
- Butler, M.A., 1978, “Early Liberal Roots of Feminism: John Locke and the Attack on Patriarchy”, American Political Science Review, 72: 135–150.
- Chappell, Vere, 1994, The Cambridge Companion to Locke, Cambridge: Cambridge University Press.
- Creppell, Ingrid, 1996, “Locke on Toleration: The Transformation of Constraint”, Political Theory, 24: 200–240.
- Colman, John, 1983, John Locke's Moral Philosophy, Edinburgh: Edinburgh University Press.
- Cranston, Maurice, 1957, John Locke, A Biography, London: Longmans, Green.
- Dunn, John, 1969, The Political Thought of John Locke, Cambridge: Cambridge University Press.
- –––, 1980, “Consent in the Political Theory of John Locke”, in Political Obligation in its Historical Context, Cambridge: Cambridge University Press.
- –––, 1990, “What Is Living and What Is Dead in the Political Theory of John Locke?”, in Interpreting Political Responsibility, Princeton: Princeton University Press.
- –––, 1991, “The Claim to Freedom of Conscience: Freedom of Speech, Freedom of Thought, Freedom of Worship?”, in From Persecution to Toleration: the Glorious Revolution and Religion in England, Ole Peter Grell, Jonathan Israel, and Nicholas Tyacke (eds.), Oxford: Clarendon Press.
- Farr, J., 2008, “Locke, Natural Law, and New World Slavery”, Political Theory, 36: 495–522.
- Franklin, Julian, 1978, John Locke and the Theory of Sovereignty, Cambridge: Cambridge University Press.
- Forde, Steven, 2001, “Natural Law, Theology, and Morality in Locke”, American Journal of Political Science, 45: 396–409.
- Forster, Greg, 2005, John Locke's Politics of Moral Consensus, Cambridge: Cambridge University Press.
- Goldie, Mark, 1983, “John Locke and Anglican Royalism”, Political Studies, 31: 61–85.
- Grant, Ruth, 1987, John Locke's Liberalism: A Study of Political Thought in its Intellectual Setting, Chicago: University of Chicago Press.
- Harris, Ian, 1994, The Mind of John Locke, Cambridge: Cambridge University Press.
- Hirschmann, Nancy J. and Kirstie Morna McClure (eds.), 2007, Feminist Interpretations of John Locke, University Park, PA: Penn State University Press.
- Macpherson, C.B., 1962, The Political Theory of Possessive Individualism: Hobbes to Locke, Oxford: Clarendon Press.
- Marshall, John, 1994, John Locke: Resistance, Religion, and Responsibility, Cambridge: Cambridge University Press.
- Marshall, John, 2006, John Locke, Toleration, and Early Enlightenment Culture, Cambridge: Cambridge University Press.
- Herzog, Don, 1985, Without Foundations, Ithaca: Cornell University Press.
- Horton, John and Susan Mendus (eds.), 1991, John Locke: A Letter Concerning Toleration in Focus, New York: Routledge.
- Kendall, Willmoore, 1959, John Locke and the Doctrine of Majority Rule, Urbana: University of Illinois Press.
- Nozick, Robert, 1974, Anarchy, State, and Utopia, New York: Basic Books.
- Pangle, Thomas, 1988, The Spirit of Modern Republicanism, Chicago: University of Chicago Press.
- Parker, Kim Ian, 2004, The Biblical Politics of John Locke, Waterloo, ON: Wilfrid Laurier University Press.
- Pasquino, Pasquale, 1998, “Locke on King's Prerogative”, Political Theory, 26: 198–208.
- Pitkin, Hanna, 1965, “Obligation and Consent I”, American Political Science Review, 59: 991–999.
- Roover, Jakob De and S. N. Balagangadhara, 2008, “John Locke, Christian Liberty, and the Predicament of Liberal Toleration”, Political Theory, 36: 523–549.
- Ryan, Alan, 1965, “John Locke and the Dictatorship of the Proletariat”, Political Studies, 13: 219–230.
- Seliger, Martin, 1968, The Liberal Politics of John Locke, London: Allen & Unwin.
- Simmons, A. John, 1992, The Lockean Theory of Rights, Princeton: Princeton University Press.
- –––, 1993, On The Edge of Anarchy: Locke, Consent, and the Limits of Society, Princeton: Princeton University Press.
- Sreenivasan, Gopal, 1995, The Limits of Lockean Rights in Property, Oxford: Oxford University Press.
- Strauss, Leo, 1953, Natural Right and History, Chicago: University of Chicago Press.
- Tarcov, Nathan, 1984, Locke's Education for Liberty, Chicago: University of Chicago Press.
- Tuckness, Alex, 1999, “The Coherence of a Mind: John Locke and the Law of Nature”, Journal of the History of Philosophy, 37: 73–90.
- –––, 2002a, Locke and the Legislative Point of View: Toleration, Contested Principles, and Law, Princeton: Princeton University Press.
- –––, 2002b, “Rethinking the Intolerant Locke”, American Journal of Political Science, 46: 288–298.
- –––, 2008, “Punishment, Property, and the Limits of Altruism: Locke's International Asymmetry”, American Political Science Review, 102: 467–480.
- –––, 2010, “Retribution and Restitution in Locke's Theory of Punishment”, Journal of Politics, 72: 720–732.
- Tully, James, 1980, A Discourse on Property, John Locke and his adversaries, Cambridge: Cambridge University Press.
- –––, 1993, An Approach to Political Philosophy: Locke in Contexts, Cambridge: Cambridge University Press.
- Vernon, Richard, 1997, The Career of Toleration: John Locke, Jonas Proast, and After, Montreal and Kingston: McGill-Queens University Press.
- Waldron, Jeremy, 1988, The Right to Private Property, Oxford: Clarendon Press.
- –––, 1993, “Locke, Toleration, and the Rationality of Persecution” in Liberal Rights: Collected Papers 1981–1991, Cambridge: Cambridge University Press, pp. 88–114.
- –––, 2002, God, Locke, and Equality: Christian Foundations of Locke's Political Thought, Cambridge: Cambridge University Press.
- Wood, Neal, 1983, The Politics of Locke's Philosophy, Berkeley: University of California Press.
- –––, 1984, John Locke and Agrarian Capitalism, Berkeley: University of California Press.
- Woolhouse, R.S., 2007, John Locke: A Biography, Cambridge: Cambridge University Press.
- Yolton, John, 1958, “Locke on the Law of Nature”, Philosophical Review, 67: 477–498.
- –––, 1969, John Locke: Problems and Perspectives, Cambridge: Cambridge University Press.
- Zuckert, Michael, 1994, Natural Rights and the New Republicanism, Princeton: Princeton University Press.
- The Works of John Locke, 1824 edition; several volumes, including the Essay Concerning Human Understanding, Two Treatises of Government, all four Letters on Toleration, and his writings on money.
- The Episteme Links Locke page, keeps an up-to-date listing of links to Locke sites on the web.
- John Locke's Political Philosophy, entry by Alexander Moseley, in the Internet Encyclopedia of Philosophy
- John Locke, at The Great Voyages web site, maintained by William Uzgalis (Oregon State University).
- Images of Locke, at the National Portrait Gallery, Great Britain.
contractarianism | Grotius, Hugo | Hobbes, Thomas | legitimacy, political | Locke, John | paternalism | political obligation | property and ownership | Pufendorf, Samuel Freiherr von: moral and political philosophy | rights | social contract: contemporary approaches to
What Caused the American Civil War?
Racism caused the American Civil War, plain and simple: a racism that transcended social culture, geographic section, and political orientation, and became entangled with the creation of the Constitution and the Union.
Columbus discovered the Americas in 1492, and within three years the beginning was made of the harsh oppression which would cause the native races to disappear and bring Africans in chains to America. One hundred and thirty years later, at the time the Jamestown colony took root in Virginia, slavery was the rule in the Americas. Initially, to people its American colonies, the British government sent indentured servants to the New World, then criminals, and finally Africans as slaves. By the middle of the Eighteenth Century, the slave trade had developed into a huge business, profitable to both the indigenous entrepreneurs along the West African coast and the owners of New England ships. Slaves were assembled in Africa through purchase, barter, raiding, kidnapping and warfare, brought to the coast by Africans and sold to African brokers who held them in barracoons until ships arrived to carry them by the Middle Passage to America. The total volume of this trade is unknown, but during its heyday at least seven million enslaved souls reached American shores.
Slave Percentage of Total Population by State, 1790
(See Clayton E. Cramer, Black Demographic Data: A Sourcebook (1997))
The Fledgling United States, 1787
The map illustrates the military situation as the founders knew it, in 1787, when at Philadelphia they drafted the Constitution. At that time the general government of the “United States” was known as the Continental Congress, a body made up of representatives of the several “States” which could pass no substantial laws governing the whole without unanimous consent. With their “country” surrounded as it was by the Great Powers of Europe, the founders at the time the constitution was written had to be thinking there were going to be heavy military confrontations between their Union and the Great Powers for possession of the resources of the continent. Therefore, the paramount thought in all their minds had to be the concept of unity—the principle of all for one and one for all. This reality explains how two distinctly different societies became so locked together politically that no key but war could separate them.
History records the plain fact that, at the time the Constitution was drafted, the attitude of the people living in the states north of the Mason-Dixon line was steadily coalescing in support of abolition. In 1777, Vermont prohibited slavery through its constitution. In 1783, the supreme courts of Massachusetts and New Hampshire declared slavery violative of their state constitutions. In 1784, the legislatures of Rhode Island and Connecticut provided for gradual emancipation.
The Continental Congress was in session in New York at the same time the Constitutional Convention was in session in Philadelphia. Key negotiations occurred between the two bodies which resulted in the formation of the new government.
On the South’s side there could be no common government unless its slave population was counted in the calculation of the number of representatives to be assigned its congressional districts. On the North’s side, there could be no common government unless Free states would always exceed Slave states and thus ultimately control the balance of power.
Both sides understood at the beginning of the new compact that when all the existing territory of the Union was turned into states, there would be more Free states than Slave; but not enough Free states to make up a supermajority—the number needed to amend the constitution. As long as the number of Free states did not reach three fourths of the whole, the constitution, which plainly permitted slavery by its terms in any state that embraced it, could never be amended. Thus, the constitutional provision of Article I, in conjunction with Article IV, Section 2—“No person held to labor in one State, under the laws thereof, escaping into another shall. . . be discharged from such labor”—created the uneasy alliance that preserved the institution of slavery in the South for another eighty years.
The Continental Congress was handed the proposed constitution that the Philadelphia Convention had drafted: instead of voting it up or down, under the authority of the Articles of Confederation, the Congress chose to send it to the legislatures of the several states as Madison proposed, so that the legislatures might form conventions of the people to vote it up or down. Thus, in political theory, the sovereign power of the people would trump the paper barrier presented by the “perpetuality” of the Articles of Confederation.
Articles of Confederation and Perpetual Union between the States
(Agreed to by Congress November 15, 1777; ratification by all thirteen states completed March 1, 1781)
“Whereas the Delegates of the United States of America in Congress assembled agree to certain articles of confederation and perpetual Union between the states to wit:
Article I. The style of this confederacy shall be “The United States of America.”
Article II. Each State retains its sovereignty, freedom and independence. . .
Article III. The said states enter into a firm league of friendship with each other, for their common defense, the security of their liberties, and. . .bind themselves to assist each other against all force made upon them. . .
Article IV. . . . the free inhabitants of each of these states shall be entitled to all privileges and immunities of free citizens in the several states. . .
Article V. Each state shall maintain its own delegates to the Congress. . . In determining questions in the United States, in Congress assembled, each state shall have one vote.
Article XIII. The Articles of Confederation shall be inviolably observed by every state, and the Union shall be perpetual; nor shall any alteration at any time hereafter be made in any of them; unless such alteration be agreed to in a congress of the united states, and be afterwards confirmed by the legislatures of every state. . . and we do further solemnly plight and engage the faith of our respective constituents, that they shall abide by the determination of the united states in congress assembled, on all questions. . .and that the articles shall be inviolably observed by the states and that the union shall be perpetual.”
The Proposed Constitution
“Article VII. The ratification of the conventions of nine states, shall be sufficient for the establishment of this constitution between the states so ratifying the same.”
The government of the United States to be spontaneously reconstituted upon the vote of nine states: so much for the “inviolability” of words. Here is the explicit semantic seed of civil war and the implicit manifestation of the ultimate axiom of political science. Despite the solemn pledge, four times repeated, that the Union was to be “perpetual” as framed by the Articles, which required unanimous consent of the states to be changed, the states of the perpetual Union, not consenting, would suddenly be left out in the cold.
It is not surprising, then, that soon after the Constitution became the supreme law of the land, there emerged an irrepressible struggle between the two sections for political supremacy: one side pressing for the restriction of slavery, the other side pressing for its expansion. As the threat of war with the Great Powers faded, the North shrugged off its constitutional commitments about slavery and began sticking knives into the South. The South, too entwined with an alien population of Africans to get rid of it, was left with no rational choice but to seize upon the example set by the founders and declare, by the sovereign power of its people, independence from the North.
When the Constitution became operative, in 1789, the United States was composed of six slave states: Virginia, Delaware, Maryland, North Carolina, South Carolina and Georgia; and seven essentially Free states—Massachusetts Bay, New Hampshire, Rhode Island and Providence Plantation, Connecticut, New York, New Jersey and Pennsylvania. Between 1789 and 1819, operating on the basis of equal division, the Congress admitted into the Union five Free states—Vermont, Ohio, Indiana, Illinois and Maine—and five Slave states: Kentucky, Tennessee, Louisiana, Mississippi and Alabama.
In 1804, the United States through a treaty with France received possession of the territory of the Spanish Empire, extending from the charter limits of Virginia, the Carolinas, and Georgia, and ending at the line of the Sabine River in Arkansas. In 1819, under a treaty with Spain, the U.S. acquired the territory of Florida.
The acquisition of this new territory put considerable political stress on the principle of division that was inherent in the compact shaped by the Constitution. In 1818, the free inhabitants of that part of the Louisiana Territory known as Missouri established a provisional government and petitioned Congress for admission into the Union as a state.
The Missouri Compromise
In the House of Representatives, Tallmadge of New York moved that Missouri be admitted upon condition that all children of slaves born after the date of admission be deemed free when they reached the age of twenty-five, and that the introduction of slaves into the state after its admission be prohibited. Tallmadge’s amendment passed the House, but was stricken by the Senate, sending the bill back to the House. The House refused to pass the bill without the amendment.
When the debate continued into the session of 1819, Henry Clay, then a member of the House, urged the admission of Missouri without the amendment, on the ground that, under Article 4, Section 4 of the Constitution, which provides that “The United States shall guarantee to every State a Republican form of government,” Missouri was entitled to decide for itself whether its laws should recognize a right of property in persons. On this basis, the House again passed the admissions bill and sent it to the Senate.
In the Senate, the argument arose that, under its power to “make all needful rules and regulations for the Territory of the United States” (Article 4, Section 3), Congress had authority to prohibit slavery, and the prohibition should be imposed for all territory above Missouri’s southern border—the so-called 36-30 line.
In June, 1820, as the debate over admission continued in Congress, Missouri ratified a constitution that contained a provision excluding free Negroes from residence. A majority of congressmen then voted against admission, on the ground that free Negroes were citizens of the states in which they resided and, hence, citizens of the United States, entitled to all the privileges and immunities of same, which included the right to travel anywhere in the United States.
The outcome of the debate in the Senate was the passage of a resolution accepting Missouri into the Union, under the constitution prohibiting the residence of free Negroes, but with the condition that slavery would henceforth be prohibited in the remaining territory above the 36-30 line. After more furious debate in the House, the bill of admission passed the Congress, with the proviso that Missouri promise not to enforce its “no free Negroes” provision. Missouri agreed to this and thus became a state.
Under the 36-30 rule, between 1820 and 1837, the Free states of Maine and Michigan, and the Slave states of Missouri, Arkansas, and Florida, were admitted into the Union.
In 1845, the Republic of Texas was admitted into the Union. There the matter in dispute rested until the war with Mexico, in 1846-47, added the Spanish Crown’s old Southwestern lands west of the Sabine River to the Territory of the United States. After this war, two Free states were admitted: Iowa and Wisconsin. The Free and Slave states were now evenly balanced at fifteen apiece.
In August 1848, a bill for organizing the Oregon territory into a state was introduced in the House of Representatives. Now began the political struggle in earnest, which led directly to the collapse of the Whig party and the emergence of the Republican Party, the election of Abraham Lincoln and the descent of the people of the United States into civil war.
Consistent with the principle of the 36-30 rule, the Oregon Admission Bill was passed by the House with a general slavery restriction in it and sent to the Senate. In the Senate, Illinois Senator Stephen Douglas moved to strike the restriction and insert in its place the provision that the 36-30 line be extended to the Pacific Ocean. The Senate adopted the amendment and the bill returned to the House. Quickly, a majority of representatives voted to reject the bill, for it was plain to see that, if the 36-30 line were so extended, the territories of Southern California, Nevada, Utah, Arizona, and New Mexico, forcibly taken from Mexico in 1847, would be open to the introduction of slavery.
With the weight of congressional representation by now firmly grounded in the general population of the Free states, the political fact was plain that the votes of the Free states controlled the balance of power in Congress and they would use that power to prevent the admission of new slave states. Even so, in the Senate, the votes showed that some senators were more interested in the economic profits flowing from the admission of states than in preventing the introduction of slavery.
In the Senate, at the beginning of the Oregon debate, it appeared that sixteen states were in favor of extending the 36-30 line. Two of these states were Pennsylvania and Indiana. Nine states, all Northern, were against it, and three states—New York, Michigan, and Illinois, were divided. On the final vote, the vote was 14 Free states to remove the Douglas amendment and 13 Slave states to keep it. Missouri’s vote was divided, Senator Thomas H. Benton voting with the Free states. The senators from Iowa and Florida did not vote. In the House of Representatives seventy-eight of the eighty-eight votes for the amendment were from Slave states and four from Free states. 121 votes were cast against it: only one of these votes was cast by a representative of a Slave state.
When the Congress convened in 1849, there was great excitement throughout the land. The congressional votes over the Oregon Bill had shown that the Free states were no longer willing to honor the principle of equal division which had originally underpinned the consensus of the Philadelphia Convention. As a consequence of this changing attitude, the Whig Party would disintegrate, the Republican Party would be born, and the Democratic Party would split into conservative and radical factions, with the radicals eventually coalescing with the new Republicans.
In the summer of 1849, President Taylor manipulated events in California, which resulted in the setting up of a convention, the framing of a constitution, and the arrival in Congress of a petition seeking admission as a state.
In January 1850, the Democrats controlled the Senate but the House was deadlocked: 111 Democrats, 105 Whigs, and 13 Freesoilers.
Henry Clay now appeared in the Senate as senator from Kentucky. When he took his seat in the tiny Senate chamber, John C. Calhoun and Daniel Webster—both old men now—were still there. Among the younger men were Stephen Douglas, now the recognized leader of the Democratic Party, Jefferson Davis of Mississippi, Salmon Chase of Ohio, later a founder of the Republican Party, and William Seward of New York. Fillmore, as Vice President, occupied the chair.
When the 1850 session opened, Thomas H. Benton of Missouri introduced a bill to reduce the size of Texas. Other senators introduced bills to split Texas into more than one state. Still others proposed territorial governments for California, New Mexico, and Utah.
Now began an intensity of rhetoric that rose and rose in shrill noise and anger until the collapse of the Union in 1860. It began with Henry Clay gaining the Senate floor and, holding it for two days, arguing for a series of resolutions. Clay proposed that the matter of Texas be postponed, that California be admitted, that the territorial governments for Utah and New Mexico be organized without the slavery restriction, and that the domestic slave trade existing in the District of Columbia be abolished.
At this time, Douglas was chairman of the Committee on Territories in the Senate and McClernand was chairman of the committee in the House. Alexander Stephens and Robert Toombs of Georgia controlled the Southern Whigs in the House and they persuaded Douglas to compromise between the two sides: in exchange for the admission of California as a Free state, additional states to be formed from the remaining territory could determine for themselves whether to recognize or reject slavery.
No doubt motivated by his political ambitions, Douglas agreed to Stephens's plan and both Douglas and McClernand introduced bills in their respective chambers to that effect. At the same time, President Taylor sent California's petition for admission to the Congress for ratification.
Compromise of 1850
At the time these issues came to a head, in March of 1850, the senators were at their seats, with the galleries and privileged seats and places on the floor filled with ladies, officers of the government, members of the House, and other visitors. Everyone present knew that when California came into the Union as a Free state, the principle of equal division of territory between the Free and Slave states would be lost forever, and the balance of power in the Senate, as it already had in the House, would shift in favor of the Free states.
In the course of the session, Seward of New York and Davis of Mississippi, friends outside the Senate, stood behind their wooden desks, gesticulating and hurling invectives at each other. Davis proclaimed that the Slave states would never take less than the 36-30 line extended to the Pacific, with the right to hold slaves in California below the line.
Benton of Missouri cut in before Seward could respond, to say no earthly power existed which could compel the majority to vote for slavery in California. In the flaring of temper, Senator Henry S. Foote of Mississippi was seen to draw a pistol from his coat and point it at Benton, when, suddenly, the appearance of the gaunt form of John C. Calhoun hushed the clamoring throng.
Calhoun leaned heavily on his cane as he slowly swayed down the center aisle of the chamber. The contending senators stepped aside into the rows of desks to make way for him to pass. Calhoun's face was deeply tanned, but his cheeks were sunken and his body seemed swallowed in the great cloak he wore.
Clay, Webster, Davis, Douglas and others crowded around him, escorting him to his place among the desk rows. When he reached his old seat, Calhoun gathered the folds of his long cloak in his hands and feebly sat down in his chair. There was a general scurrying among the people in the chamber as they found their places and Vice President Fillmore recognized the senior senator from South Carolina.
Calhoun rose slowly to his full height to say in anticlimax that Mason from Virginia would read his speech, and he sat back down.
In a matter of days Calhoun would be dead. Calhoun's speech was cold and blunt. He had no illusions about the nature of the Union. He knew that the incredible acquisition by the United States of territory, which stretched from Oregon to the Gulf of Mexico, would cause the unraveling of the ropes that held the country together.
How to preserve the Union? Not by Clay's plan, Calhoun contended, for it ignored the root of the issue: the Union could not be preserved by admitting California as a Free state. It could only be done by the North conceding to the South an equal right to the acquired territory, enforcing the fugitive slave provision in the Constitution, stopping the antislavery agitation in the halls of Congress, in the pulpits, and in the press, and amending the Constitution to expressly recognize the right of property in man.
Interrupting Mason’s reading of his speech, Calhoun raised himself from his seat and asked his supporters to show their hands. Hands tentatively appeared one by one above the heads of some of the spectators in the galleries and the senators on the floor. As Calhoun scanned the faces of his fellow senators, Mason continued with the speech, saying that, if the North would not do these things, the States should part in peace. And, if the Free states were unwilling to let the South go in peace, "Tell us so,” Calhoun said, “and we shall know what to do when you reduce the question to submission or resistance."
At this statement, the chamber became quiet as a church. Daniel Webster leaned forward in his chair, staring gloomily into space; Thomas Benton on the back bench sat rigid like a slab of granite; Henry Clay sat with his hands shielding his face. In the minds of each of the politicians came a quick black image of cities in smoking ruins. And everywhere in the little chamber was felt the veiled touch of dreadful black ghosts wandering.
On March 7th, 1850, Daniel Webster took the Senate floor and responded to Calhoun's speech, his piercing black eyes flashing. Webster was dressed in tight vanilla breeches, with a blue cloth coat cut squarely at the waist and adorned with brass buttons, his neck encased in a high soft collar surrounded by a black stock.
Webster flatly rejected the idea of separation of the States as a physical impossibility. It is impossible, he said, for the simple reason that the Mississippi cannot be cut in two, the North to control its headwaters and the South its mouth. How could the North's commerce flow uninterrupted from the Ohio and Mississippi valleys to the Caribbean? What would become of the border states as they are pulled north and south? What would become of commerce between the West and the East?
Then the Senator from Massachusetts suggested to the Senate the one politically honest solution which might have redeemed the tyranny of the people of the Free states, in bottling up the African Negroes in the old states of the South.
Return to Virginia, Webster proposed, and through her to the whole South, the two hundred millions of dollars the National government obtained from the sale of the old Northwest Territory she ceded to the United States—in exchange for the abolition of slavery in the South.
Here was a solution to the problem of maintaining the South’s economic integrity—a solution which recognized that the existence of slavery was a national, not sectional, responsibility; a solution which shared the burden the abolition of slavery entailed. But to adopt it, there must have been included the recognition that the Africans were now “citizens of the United States,” with all the privileges and immunities that term entails—the right to travel, the right to litigate in the courts, and the right to vote. This the Northern senators were not then prepared to allow. It would mean living with the Africans on a basis of equality.
Once freed, where were the Africans to go? How were they to earn their living? What was to be their new place in society? Where? And what were to be the conditions of the society in which they might find their place?
The then existing social caste of the African was founded in a deep-rooted prejudice in Northern public opinion as well as the South. Before the Revolution, it was not southern planters who brought the Africans in chains to America's shore. It was New England vessels, owned by New England businessmen, manned by New England citizens, which traversed the Atlantic Ocean a thousand times to bring black cargo wailing into the ports of Norfolk, Charleston and Savannah. In 1850, the laws of many of the Free states did not recognize free Africans as citizens. They could own certain property and they were required to pay taxes but they could neither vote nor serve on juries, and their children were forced to attend segregated public schools.
Just the year before, for example, in 1849, a little five-year-old colored girl, Sarah Roberts, had sued the Boston School District, seeking the right to attend the school closest to her home, instead of the colored school way across town. Though colored children, the Supreme Court ruled, had a right to public education, the right was limited to a separate education. (See Roberts v. Boston (1849) 59 Mass. 198) New England had no slaves, it is true, but still a majority of its citizens didn’t want to live with Negroes.
Thus, even if the Government of the United States could have found the means somehow, to compensate the slaveowners for the taking of their property—Alexander Stephens thought compensation was worth two billion—and though the former slaves might live peacefully with their former owners, it could not be done on the basis of equality under law, and certainly not on the basis of citizenship. Emancipation would bring the Africans the freedom to perform work for some form of wages, but for a long time to come, in the eyes of most whites in the North as well as South, they would be a degraded and despised people not fit to socialize with.
Distribution of Slave Population in 1860
The Death of John C. Calhoun
John C. Calhoun died on March 31, 1850. The funeral ceremonies were conducted in the Senate chamber. President Taylor, Vice-president Fillmore, and Cobb, the Speaker of the House, attended with the members of the Supreme Court. The diplomatic corps was also present, standing with the other dignitaries in the well in front of the screaming eagle perched above the Senate President's chair. Daniel Webster and Henry Clay walked at the head of the simple metal casket as the pallbearers brought it down the center aisle past the rest of the senators standing by their desks.
The senators and dignitaries closed in around the pallbearers as they set Calhoun's casket down. In his eulogy, Webster said there was nothing mean or low about the man who had spent his life in the service of the National government, first as senator, then secretary of state, then vice president and finally as senator again. In fulfilling his public duties, Webster said, Calhoun was perfectly patriotic and honest.
When the ceremony ended, the casket of South Carolina's greatest son was transported by caisson through the streets of Washington to the Navy landing and taken by vessel down the Chesapeake, past the Capes into the ocean and then to Charleston harbor, where it was brought ashore and laid to rest in the quiet little churchyard of St. Philip's Church. Today, a hard-faced statue of Calhoun stands in a small, bare park in Charleston through which African Americans daily stroll.
Thomas Hart Benton was the next oldest member of the Senate behind Webster. He had spent thirty years in the Senate, voting always against measures which favored the slave interest. To convey his disdain for Calhoun's political views, Benton had turned his back as Webster spoke. Benton thought Calhoun's ideas were treason.
Benton was wrong about Calhoun. In Calhoun's view, allegiance to the sovereign meant faithful service to one's native state, the minority social group of which each American citizen was then a constituent member, and not faithful service to the Federal government.
The constitutional function of the federal government, Calhoun thought, was to administer the external affairs of the aggregate of the group. His view was consistent with the view of the Old states, whose political leaders designed the original Union. The delegates to the Constitutional Convention which framed the constitution, in 1787, were elected by the state legislatures. But the instrument, when it came from their hands, was nothing but a mere proposal. It carried no obligation. The people of each state acted upon it by assembling through their delegates in separate conventions held in each state. Thus, the government of the United States ultimately derives its whole authority from these state conventions. Sovereignty, whether the Federal Government likes it or not, resides in the people.
By accepting the stipulation that the assent of the people of merely nine states was sufficient to make the constitution operative, the delegates to the Constitutional Convention and the delegates to the United States Congress expressly adopted the political principle that the people of the states, in a combination which amounted to less than the whole people of the United States, were naturally free to leave the "perpetual union" of the United States and among themselves, "form a more perfect union."
The only constraint on the power of the people of the seceding states to disengage from the perpetual union defined by the Articles of Confederation was the power of the people of the States remaining loyal to the original Union to resist disengagement. The nation styled the United States of America, therefore, was certainly not one Nation indivisible, with liberty and justice for all: it was a combination of divergent political societies, motivated by self-interest to unite together against the world.
When the people of the Old states first formed a Union between themselves under the Articles of Confederation, Virginia held title from the English crown to the territory north and west of the Ohio River extending to the Mississippi valley and the Great Lakes. Virginia could have remained aloof from the original Union and adopted the policy of concentrating a population sympathetic to its culture in the area of what is now Ohio and Michigan. Such a policy would have blocked New England from expanding the influence of its culture westward.
In such circumstance, if New England did not attempt by force of arms to wrest the Northwest Territory away from Virginia, Virginia and its allies might eventually have gained possession of all the territory between the Mississippi and the Pacific. Just look at the map!
Virginia certainly possessed the men, materiel and the allies necessary to enforce a policy of unilateral expansion into the western territories. Instead, Virginia not only joined the original Union but assented to the adoption of a more perfect union transferring in the process title to its Northwest Territory to the United States—with the stipulation that slavery be prohibited there. Truly Virginia, the mother of states, stands at the head of the first flight.
Virginia's voluntary transfer to the United States of its title to the Northwest Territory radically changed the strategic situation for New England. Instead of being bottled up on the northeast seaboard of the continent, the people of New England could peaceably migrate west and north of the river Ohio and take their culture with them. It can hardly be imagined, under such circumstance, that New England could have reasonably believed that Virginia and her allies would not likewise expect to migrate with their culture west and south of the river.
The principle of division of the Territory of the United States between two fundamentally divergent forms of Republican government, therefore, must have been understood by the whole people of the United States to be the bedrock upon which the political stability of the Union depended. If the representatives in Congress of a majority of the people of the United States were to discard it, without reference to the powers granted them by the Constitution, they could expect the people of the affected States to judge for themselves whether the usurpation justified their secession.
After Calhoun's death, the Congress returned to the debate regarding the admission of California in the Union as a Free State. As a consequence of the debate that had been waged between Webster, Clay, Benton and Davis, in the early months of the 1850 Senate session, the bills and amendments the senators had suggested were sent to a joint committee on the territories.
The Southern Whigs, led by Alexander Stephens and Robert Toombs, who were in the House at that time, wanted the Congress to agree that, in organizing all other territorial governments formed from the newly acquired Spanish territories, the settlers should be left alone to introduce slaves or not, and to frame their constitution as they might please. Stephen Douglas, as chairman of the committee on territories in the Senate, agreed with the Southern Whigs' plan and introduced a bill in the Senate in March 1850. Then Henry Clay was made chairman of a committee to review the series of resolutions he had offered on the Senate floor.
On May 8, 1850, Clay reported an "omnibus" bill which provided for restrictions on the introduction of slavery into the New Mexico territory. Jefferson Davis countered with an amendment which would allow slavery in Utah, and Douglas moved to strike previous amendments to his bill. In the House, the majority rejected several amendments which would have allowed slavery to exist in the western territories.
In the Senate, in June 1850, Webster spoke in favor of Douglas's latest proposed amendment, which would leave the territorial governments free to decide the slavery issue for themselves. This meant that all territorial governments formed after the admission of California in the Union would not be subject to a slavery restriction.
At this point in the debate, President Taylor died, Fillmore became president and Webster left the Senate to join the Cabinet. After these events, in August 1850, with the admission of California and the organization of the territories of Utah and New Mexico, the Congress adopted the policy of leaving the issue of slavery to the territorial legislature to decide. At this, a country lawyer in Illinois, Abraham Lincoln, perked up.
In December 1852, a bill was introduced in the House to organize Nebraska territory. This territory was part of the territory obtained by France from Spain and ceded to the United States, in 1804. The bill passed and went to the Senate, in March 1853, but was voted down. The bill as it passed the House provided for the organization of a territory bounded by the 45th parallel on the north, Missouri and Iowa on the east, the 36th parallel on the south, and the Rocky Mountains on the west.
The issue of organizing the Nebraska territory had come up in the Senate before 1853, but the Slave states rejected the organization bills because they did not want to open the territory to settlement under the restriction imposed by the Missouri Compromise. In addition, by keeping settlers out of Nebraska, the proposed transcontinental railroad could not be built from either Chicago or St. Louis, leaving open the possibility that the railroad would pass through Texas.
Later, in December 1853, Dodge of Iowa introduced an organization bill for Nebraska. This bill was referred to the Committee on the Territories which was chaired by Stephen A. Douglas. In January 1854, Douglas reported favorably on Dodge's bill, but an amendment was attached to the bill which declared that, in accordance with the principles adopted in 1850, all questions relating to slavery should be left to the decision of the people who occupied the territory.
When southern senators indicated that they would introduce an amendment expressly repealing the Missouri Compromise, Douglas withdrew the Committee's report and presented it again with two amendments: one provided for two territories to be named Nebraska and Kansas and the other asserted that the Missouri Compromise had been superseded by the Compromise of 1850.
On March 3, 1854, the Senate passed Dodge's bill as reported by Douglas by a vote of 37 to 14. Slave state senators voted 23 in favor. Free state senators voted 14 to 12 in favor. The vote made plain that the Free state senators cared more about opening the Indian territory for construction of a railroad to the Pacific than they did about restricting slavery.
The House, with the general population of the Nation having shifted in favor of the restriction of slavery, experienced a bitter fight over the issue of the express repeal of the Missouri Compromise. On May 22, 1854, the Dodge bill passed the House by a vote of 113 to 100. As soon as the bill became law, the people of the border states began agitating for the opening of the Indian territory south of Kansas and west of Arkansas in order to open trade routes to Texas, New Mexico and California.
Meanwhile, Charles Sumner, freshman senator from Massachusetts, joined by Salmon Chase of Ohio, published a paper, which appealed to disaffected Whigs and Democrats to oppose the "monstrous plot" of the slave power to spread slavery further into the territories.
When Douglas introduced the revised bill for debate in the Senate, he had charged that Sumner and Chase were confederates in a conspiracy to force the abolition of slavery. The two senators, Douglas had bellowed, were the agents of "Niggerism in the Congress of the United States." Interrupting Douglas, Sumner snapped back that the policy behind the effort to repeal the Missouri Compromise, which the amended bill expressly codified, was a "soulless, eyeless monster—horrid, unshapen and vast."
For a month, Douglas, Butler, Mason, Sumner and Chase wrangled over the issue. Douglas saw the opponents to the bill as strutting down the path of abolition, "in Indian file, each treading close upon the heels of the other" avoiding ground "which did not bear the foot-print of the Abolition champion." Deep into the debate, Sumner finally gained the floor and declared that the slave power was reneging on a solemn covenant of peace after the free power had performed its side of the bargain; that it was destroying, with Douglas's revised bill, a "Landmark of Freedom." Immediately when Sumner finished his speech and sat down, Douglas took the floor and challenged Sumner's assertion that the Missouri Compromise was sacred. If one congressional act touching slavery was to be considered sacred, why not another, like the Fugitive Slave Act, which the Free States were increasingly repudiating? When the vituperative debate between the two antagonists finally ended, in May 1854, the bill easily passed the Senate. In the House, the debate lasted two weeks, the bill passing by a 13-vote majority. The repeal of the Missouri Compromise was history. The power of patronage proved greater than the power of principle.
Immediately after the passage of the Kansas-Nebraska organization bill, a petition signed by 3,000 Massachusetts citizens asking for the repeal of the 1850 Fugitive Slave Law was received in the Senate. Since 1850, every Free State had experienced great excitement over a "fugitive slave case." In Racine, Wisconsin, for instance, in March 1854, an African named Joshua Glover was arrested on a warrant issued by a United States District Court judge under the Fugitive Slave Law. Glover was accused of being a runaway slave from Missouri. Two United States marshals, with four other men, broke into Glover's house, arrested Glover and transported him to Milwaukee where he was placed in jail. The next morning, news of Glover's arrest by the marshals spread across Wisconsin. Soon a mob gathered in front of the jail. As the crowd in the courthouse square increased to five thousand, speakers denounced Glover's arrest and demanded the repeal of the slave catching law. Soon the temper of the mob became volatile and men gathered in a knot in front of the jailhouse door and battered it down, freeing Glover, whom the crowd then lifted bodily over their heads and carried away through the streets, shouting, "No slave hunters in Wisconsin." Glover escaped across Lake Michigan to Canada in a schooner.

In Boston, the very day that Charles Sumner rose in the Senate to speak in support of the Massachusetts petition to repeal the Fugitive Slave law, Faneuil Hall was filled with citizens protesting the arrest by United States marshals of an African named Anthony Burns. Speakers soon incited the crowd to action and the citizens streamed out of Faneuil Hall and through the streets and attempted to storm the courthouse where Burns was being held. In the melee that followed one of the police officers guarding Burns was killed. President Pierce immediately ordered Federal troops to Boston and they took Burns into their custody and returned him to his master in Virginia.
In his speech, in support of the Massachusetts petition, Sumner told the Senate that the repeal of the Missouri Compromise annulled all past compromises with the slave power and made future compromise impossible. No more would the Free States tolerate the "disgusting rites" by which the slave hunters sent their dogs, with savage jaws, howling into Massachusetts after men escaping from bondage, Sumner said.
In the course of the uproar that followed Sumner's vehement words, Senator Butler of South Carolina gained the floor and demanded that the Free State senators say whether South Carolina could expect the return of runaway slaves if the Fugitive Slave Law was repealed.
Charles Sumner sat in the desk row in front of Butler's and when Butler spoke, Sumner jerked his chair back from his desk and stood up and faced him. Speaking over Butler's head to the spectators crowded together behind him in the vestibule space and the public gallery at the back of the senate chamber, Sumner shouted out,
"Is thy servant a dog, that he should do this thing?"
Butler's face flushed and he stumbled slightly as he took a step backward.
"Dogs? Dogs?," Butler cried.
Behind Butler, Mason of Virginia leaped to his feet and, stabbing his index finger toward the domed ceiling of the Senate chamber, he hissed at Sumner, "Black Republican, you dare to tell us there are dogs in the Constitution." Other senators shouted out that Sumner should be expelled for dishonoring his solemn oath to support the Constitution which provided that a "person held to service" in a Slave State escaping to a Free State "shall be delivered up" on demand of his master.
As the verbal storm swirled around him, Sumner braced himself against his chair. He stood tight-fisted and scanned the hot, red faces around him with black burning eyes.
"How many are there here," he shouted, "who will stoop with Butler and Mason to be a slave hunter? Who is here who will hunt the bondmen down, flying from Carolina's hateful hell? "
Calls of "censure, Censure," rang out from senators seated on both sides of the aisle, but no one directly answered Sumner's challenge. Sweeping his arm in an arch around the Senate chamber, Sumner continued,
"No Sir. No Sir, I do not believe there are any dogs, however keen their scent or savage their jaws, that can bind me to return your fugitive slaves."
Senator Cass of Michigan rose to remonstrate with Sumner, labeling his outburst "the most un-American and un-patriotic that ever grated on the ears." Douglas of Illinois joined Cass to charge Sumner with uttering obscenities which should be suppressed as "unfit for decent young men to read." Mason chimed in with the rebuke that Sumner's language reeked of "vice in its most odious form."
In rebuttal, Sumner attacked Douglas directly, saying, "No person with the upright form of man can be allowed—" Sumner's voice broke off.
Douglas leaped back to his feet in a rage. "Say it," Douglas shouted.
"I will say it," Sumner retorted; "No person with the upright form of man can be allowed, without violation of all decency, to switch out from his tongue the perpetual stench of offensive personality. . . The nameless animal to which I now refer, is not the proper model for an American senator. Will the Senator from Illinois take notice?"
"I will not imitate you," Douglas shouted back.
Sumner would not stop. "Again the Senator has switched his tongue, and again he fills the Senate with its offensive odor."
When the newspapers reported Sumner's harangue, the public response from the North was highly favorable toward Sumner. The residents of Washington, generally pro-slavery in sympathy, discussed his speech on the street corners, expressing the view that somebody ought to kick the Massachusetts senator down a flight of stairs.
During the next two years, the issue of the settlement of Kansas and the recognition of a territorial government constantly occupied the attention of the Congress. The radical Democrats and Whigs, now transformed into new Republicans, actively supported the migration of people from the Free States to Kansas territory while the slave power in the Democratic Party supported the immigration of Southerners. The Pierce administration appointed a Southerner to act as Territorial Governor and he quickly held elections for a territorial legislature. Since Southern immigrants outnumbered their Northern counterparts early in the process of settling Kansas, the slave power won a majority of the seats in the legislature, which was seated at the town of LeCompton, and it promptly adopted the civil law of Missouri. As time passed, however, settlers from the Free States began to arrive in substantial number and established towns in the northwestern part of the territory. Then they met in convention and organized a shadow legislature seated at Topeka and it adopted a constitution which prohibited any Africans, whether free or slave, from residing in Kansas. In January 1856, President Pierce issued a proclamation which recognized LeCompton as the legitimate legislature and ordered the shadow legislature at Topeka to disband. When the members of the Topeka legislature refused, supporters of the LeCompton legislature sacked the free soil stronghold of Lawrence, Kansas. In retaliation, John Brown and his five sons appeared on the scene and began killing slave-holding settlers in the countryside.
As these events were debated on the floor of the Senate, Sumner continued to bitterly attack his opponents on a personal level, always returning in his arguments to Senator Butler of South Carolina and Senator Mason of Virginia, and rebuking them for swinging the "overseer's lash" in the Senate, as if it were one of their plantations stocked with slaves. During these debates various senators made motions to expel the Massachusetts senator for perjury and treason, but the motions never came to a vote.
Senator Charles Sumner Attacked
Finally, in May 1856, Sumner spoke for three hours, calling the concept of popular sovereignty a "crime against Kansas" by which the people of the Free states were swindled into accepting the repeal of the Missouri Compromise.
Several days after Sumner's "crime against Kansas" speech, Preston Brooks, a young congressman from South Carolina, came into the vestibule of the Senate chamber, carrying a walking stick. The cane had a gold head and tapered from the head down to the end, with a weight of about a pound. At 12:45 p.m. the Senate recessed and most of the senators cleared the chamber except for a scattered few. Brooks came down the center aisle and sat down at a desk several seats removed from Sumner, who was reading from a pile of documents at his desk. When all the spectators had exited the gallery above the Senate floor, Brooks got up from the desk and came down the aisle to a position in front of Sumner. When Sumner looked up at Brooks's call of his name, Brooks began furiously whacking at his head with the cane. Sumner tried to rise, but got caught up in his chair. Finally breaking free, Sumner staggered sideways and fell between the desk rows, while Brooks frantically whipped the cane back and forth across his face and shoulders.
Only when the cane splintered into pieces too small for Brooks to handle did the assault end. When Brooks backed away, Sumner lay motionless on the crimson carpet of the Senate floor. Globs of dark red blood oozed from the cuts and gashes of his face and formed a pool around his head. Slowly, Sumner rolled over on his hands and knees and struggled to rise. Stephen Douglas came into the chamber from the cloakrooms where he had been standing behind the Senate president's chair, but did not approach Sumner. Robert Toombs of Georgia and John Crittenden of Kentucky also appeared in the room but they did not offer Sumner help. By the time Sumner's few friends arrived, Sumner was alone, slumped in his chair, the blood still seeping from his head wounds down his neck, saturating the blue broadcloth coat he wore. The wounds Brooks inflicted on Sumner did not cause permanent physical damage, but they destroyed Sumner's will. Once taken from the Senate chamber, the abolition champion soon left America and traveled through Europe for two years, only returning to his Senate seat in 1859.
The Dred Scott Decision
The breakdown in political civility in the Senate was made permanent in December 1856, when the United States Supreme Court announced its decision in the matter of Dred Scott. Eight years earlier, in 1848, Dred Scott's wife, Harriet, was sued in Missouri state court by a Mr. Emmerson. Emmerson alleged that he had purchased Dred and Harriet from an army officer who had taken the Scotts as slaves from Missouri to army posts in the Free State of Illinois and the Territory of Minnesota, and then returned with them to Missouri. Emmerson's action was tried to a jury who gave verdict for Harriet, but the trial court granted Emmerson a new trial. Harriet appealed from the order granting new trial but lost in the Missouri Supreme Court. Dred Scott then instituted suit against Emmerson in St. Louis Circuit Court. Scott contended that the fact that he and Harriet had been taken voluntarily into Illinois and Minnesota Territory made them free under both Illinois law and the Missouri Compromise. The circuit court agreed with Scott and Emmerson appealed to the Missouri Supreme Court.
The Missouri Supreme Court acknowledged that, as a matter of comity between the courts of the Free and Slave States, many times in the past persons held to service had been adjudged to be free by the courts of the Slave States on the ground that the master had forfeited his chattel interest in such persons because they had been wrongfully held to service in territories or States where slavery was deemed unlawful. Similarly, prior to 1850 at least, many decisions of the Free State courts had held that, in a spirit of comity and in light of the Fugitive Slave Clause in the U.S. Constitution, slaves escaping from a Slave State to a Free State must be returned to the Slave State.
But the laws of other states, the Missouri Supreme Court held, "have no intrinsic right to be enforced beyond the limits of the State for which they were enacted." Since 1850, the Supreme Court observed, the courts of the Free State had repeatedly refused to recognize the legitimacy of the Fugitive Slave Law enacted by Congress as the controlling law of the land. Indeed, the Free State courts by 1856 persistently refused to punish persons who were known to attack federal marshals holding runaway slaves in custody. This conduct on the part of the citizens and courts of the Free states, the Missouri Supreme Court held, justified enforcing the public policy of Missouri which recognized the right of property in persons held to service as paramount.
After the Scotts lost a second time in the Missouri state courts, Emerson sold Dred Scott to John Sanford, a citizen of New York. Scott, alleging that he was a citizen of Missouri, then sued Sanford for his freedom in the Federal District Court in Missouri. Scott based his suit for freedom on the ground that, since the Missouri Compromise had prohibited slavery in that part of the Louisiana territory at the time he had been taken there, he was now free. Sanford opposed the suit on the ground that the federal court lacked jurisdiction because Scott, as an African whose ancestors were brought to America as slaves, could not be a citizen of Missouri. The district court rejected Sanford's argument regarding Scott's lack of standing, but granted judgment for Sanford against Scott's claim that his being taken to Minnesota made him free.
Scott appealed the decision to the United States Supreme Court. Led by Chief Justice Roger Taney, a majority of the Supreme Court, all southerners, refused to recognize that Scott was a citizen of the United States within the meaning of the Constitution and, therefore, the justices held, he could not sue John Sanford in federal court.
Although a free African residing in a state may be recognized by the people of that state to be, like them, a citizen, Taney wrote, he cannot be a citizen of a state, "in the sense in which the word `citizen' is used in the Constitution of the United States." Taney argued that the word "citizen" as used in the constitution is synonymous with the words "people of the United States" which describes the sovereign, the source of the supreme law. In Taney's peculiar view, Dred Scott could not possibly be included as a part of the people of the United States, in 1856, because at the time the people established the constitution as the supreme law of the land, in 1789, Scott's ancestors were considered "beings of an inferior order. . . [so] that they had no rights which the white man was bound to respect." Therefore, Taney concluded, Scott was an alien who lacked the rights, privileges and immunities guaranteed citizens of the United States, one of which was the privilege of bringing suits in its courts.
Chief Justice Taney wasn't satisfied, however, with resolving the matter of Dred Scott by narrowly interpreting the meaning of "citizen of the United States." The Chief Justice and his associates had been in secret communication with James Buchanan who had been elected President in November, 1856. Taney promised Buchanan that the Court would use Dred Scott's case to rule on the issue of whether Congress had the power to make unlawful, white persons forcibly holding black persons to service in the territories of the United States.
In 1820, the Congress had based the enactment of the Missouri Compromise on the express power granted it by the Constitution, "to make all needful rules and regulations respecting the Territory or other property belonging to the United States." Taney, with an apparent majority of the Supreme Court supporting him, rejected the reasonable notion that the Territory clause authorized Congress to enact laws which prohibited citizens of the United States from holding Africans to service in the territories of the United States. The Framers intended the express grant of power to make rules and regulations for the administration of the territories, Taney asserted, to apply only to that particular territory which, at the time of the Constitution's ratification in 1789, was claimed by the United States. The power to regulate the affairs of new territories acquired after 1789, Taney maintained, sprang solely from the express grants of power to make war and treaties, which implied the power to acquire territory.
In the exercise of the latter powers, Taney argued, Congress could make rules and regulations for the new territories acquired by the United States only in a manner which promoted "the interests of the whole people of the United States" on whose behalf the territory was acquired. As the agent of the whole people of the United States, Taney wrote, it was the duty of the general government, in organizing the territories for settlement, to not enact laws which infringed upon the "rights of property of the citizen who might go there to reside." Thus, despite the fact that the laws Congress enacts constitute the supreme law of the land, "anything in the. . . laws of any State to the contrary notwithstanding," in the strange logic of Taney's mind, Congress was powerless to prohibit a person from taking to the territories, a person held to service under only the common law of a Slave state.
Before the whole people made the Constitution the supreme law of the land, in 1789, the sole basis for recognizing a white person's right to hold a black person as property was the common law of the state in which the white person resided. Yet, as the Missouri Supreme Court explained in 1848, with Dred Scott's state court suit for freedom, the state courts had always understood that their respective common laws had "no intrinsic right to be enforced beyond the limits of the State for which they were enacted."
It is true that the whole people did recognize as the supreme law the duty of the general government to deliver up a black person escaping from a Slave State, if the right to hold the black person as property in the state was shown; but the whole people did not write anything in their Constitution that said their general government must recognize the right of white persons to hold black persons as property anywhere else. Chief Justice Taney and his associates could have easily decided, therefore, that, since the general government held the territories in trust to advance the interests of the whole people, it was reasonable for the citizen of a Slave State emigrating to the territories to expect to enjoy the right of property in pigs, cows and horses because the whole people recognized such rights; but it would be unreasonable for the same citizen to expect to enjoy the right of property in man which the whole people did not recognize.
All of the great moments in the Nation's political struggle over slavery—the Missouri Compromise, the Compromise of 1850, the emergence of the doctrine of popular sovereignty, the disintegration of the Democratic and Whig parties, the rise of the Black Republicans and the Dred Scott decision—were certainly nails in the coffin of domestic slavery, but it remained for the nation to produce Abraham Lincoln and place him in command of the executive branch of government, to undo the Union as it was, under the constitution framed by the founders, and to replace it with the constitution we live by today.

The founders used the constitution as the means to control the power of democracy, in order to protect the minority from the tyranny of the majority. If the power of the democracy attempts to usurp the supreme law, twist it into something it is not, whatever the moral ground it claims, the retribution it faces is civil war. At its barest root the American Civil War was about the human impulse not to submit willingly to the power of the majority to oppress. This is the irony of the war, the oppressor becoming the oppressed. It is this sacred heritage--the power of the people to change their government--that both black and white Americans can share; and General Lee's great battle flag, though sorely used in latter times, is the most poignant example of it.
Comments On What Caused The Civil War
♦ Buzz Queen writes:
Joe Ryan replies:
"State Rights" is an abstract legal theory that the language of the Constitution and the circumstances surrounding its ratification suggest strongly the Founders were attuned to at the time they drafted the document. Abolitionism—the favoring of freedom for the Africans—was a regional phenomenon, limited to the politicians of New England and a few outsiders like Salmon Chase of Ohio and Thaddeus Stevens of Pennsylvania. Had the men of the North, generally, been of the mind-set of Chase, Stevens, and Charles Sumner it seems highly likely that civil war would have been averted, as these men, supported by a majority of their fellows, might well have guided Congress into adopting measures that made it practical for the Southern States to change their domestic policy toward the Africans.
♦ Don from New Orleans writes:
Joe Ryan replies:
Don, to give a lawyer's direct answer to your question, the railroad went to Canada, because the Constitution's Fugitive Slave Clause gave the slaveowner standing in federal court to reclaim the runaway slave in any State where he could be found. See Prigg v. Pennsylvania cited in What Happened in May 1862. As long as the slaveowner can prove his title to the "property," the Federal Court, by virtue of the Fugitive Slave Clause, will order the federal marshal to seize the "property" and return it to its owner.
There is no connection that I can see between the concept of racism as the cause of the war and the fact the railroad went to Canada.
♦Robert Naranda writes:
Joe Ryan replies:
The gentleman's comment is a bit too obtuse for me to grasp the meaning; perhaps some of you can shed a little light. The first sentence of the piece states the theory of the case, the text provides the argument. This is an ordinary method employed to communicate to readers a writer's point of view.
♦ Ken writes:
Secession was illegal and morally incorrect, fueled as it was by the efforts of slave-holding men to create a white male supremacist state. Your efforts to defend these folks is just sad if it wasn't so dangerous.
Joe Ryan replies:
My Gosh, Ken seems wilfully blind to the fact that the United States as a whole, in 1861, was composed of white male racists and as a whole was therefore morally responsible for causing the Civil War.
♦Theresa writes to say:
I have to disagree with your opening statement that racism caused the civil war “plain and simple.” It’s never that plain and simple. Slavery and the split over it caused the civil war, and racism is just a part of slavery. I blame the Dark Ages of Europe for the cause of the Civil War! That’s where lack of respect for human life fell to new lows, allowing for widespread acceptance of slavery, and then it was brought by Spain to the New World, to feed the gold lust of the kings, and then, slowly, over the next 300 years a whole society became dependent on slavery, due in no small measure to laziness on the part of whites who wouldn’t labor in hot climates (plain and simple), and because of fear. And it’s this fear that is the root of racism. The fact slavery became embedded in southern culture forced a split in our country, to the extent that some states wanted to secede, but the President at that time—the guy on the penny—was not having any of that, and that’s how the war got started: not just racism plain and simple.
Joe Ryan replies:
I think your reference to fear hits the mark: the theme—Racism caused the Civil War—hangs on the core issue of what the real fear was. If, instead of throwing insults at each other, back and forth across the aisle for ten years (1850-1860), the Free State senators and representatives in Congress had openly and earnestly debated among themselves how the Nation could absorb the Africans into society, as citizens, while, at the same time, preventing the South from descending into the ugly abyss of economic disaster and social catastrophe, the Civil War might well not have happened. But, except for Daniel Webster once, the Free State members of Congress never even broached the subject of how to move the South from an economy dependent upon African slavery (an institution that still exists today) to an economy based on free labor. The reason for this, it seems to me, was their own feelings of racism. Had the majority of Free State members of Congress been of the mind-set of Charles Sumner, Salmon Chase, and John Brown, for example, this debate most certainly would have occurred, and the White people of the South might well have been soothed with the knowledge that, as the Africans became transformed into American citizens, their world would not collapse into chaos, and they would have probably stayed the course.
♦ Ray writes to say:
I can’t disagree with you as racism is defined as thinking your race is better than another, yet I’ve always thought of the civil war as the cause of racism as we know it today. Certainly the agricultural and industrial forces in the south and parts of the north that relied on slavery were the biggest cause of the war, while slavery, the engine that drove their production, was a resource. Using that scenario I’d have to lay the war on greed.
Joe Ryan replies:
We appreciate your view. Whether then or now, the meaning of racism hasn’t changed: It is the human attitude that influences a class of people, who perceive themselves superior in character or intelligence, to shun social contact with another class. Infected with this attitude it was as impossible for the whites of the North to live with blacks, in 1861, as it was for the whites of the South. The new policy of the Federal Government, to restrict slavery to the existing states, meant that the economic engine of slavery was doomed to sputter out; leaving the South saddled with an alien population that would have no means of supporting itself. Rather than accept the Government’s policy, South Carolina and the Gulf States chose the option of secession in the forlorn hope of maintaining the power of their class.
♦ Wayne writes:
You are so full of .... You twist the real history to fit your own agenda.
Joe Ryan replies:
No one can read my writing and not understand that I stand on General Lee’s side of the case, and on Virginia’s, the Mother of States. But not Alabama's, Alabama's case requires a different lawyer. Whose side do you stand on?
What do you viewers think?
| http://www.americancivilwar.com/authors/Joseph_Ryan/Articles/Causes-Civil-War/What-Caused-the-American-Civil-War.html | 13
46 | Traditional logic, also known as term logic, is a loose term for the logical tradition that originated with Aristotle and survived broadly unchanged until the advent of modern predicate logic in the late nineteenth century.
It can sometimes be difficult to understand philosophy before the period of Frege and Russell without an elementary grasp of the terminology and ideas that were assumed by all philosophers until then. This article provides a basic introduction to the traditional system, with suggestions for further reading.
The fundamental assumption behind the theory is that propositions are composed of two terms - whence the name "two-term theory" or "term logic" – and that the reasoning process is in turn built from propositions:
- The term is a part of speech representing something, but which is not true or false in its own right, as "man" or "mortal".
- The proposition consists of two terms, in which one term (the "predicate") is "affirmed" or "denied" of the other (the "subject"), and which is capable of truth or falsity.
- The syllogism is an inference in which one proposition (the "conclusion") follows of necessity from two others (the "premises").
A proposition may be universal or particular, and it may be affirmative or negative. Thus there are just four kinds of propositions:
- A-type: Universal and affirmative ("All men are mortal")
- I-type: Particular and affirmative ("Some men are philosophers")
- E-type: Universal and negative ("No philosophers are rich")
- O-type: Particular and negative ("Some men are not philosophers").
This was called the fourfold scheme of propositions. (The origin of the letters A, I, E, and O is explained below in the section on syllogistic maxims.) The syllogistic is a formal theory explaining which combinations of true premises yield true conclusions.
A term (Greek horos) is the basic component of the proposition. The original meaning of horos, and also of the Latin terminus, is "extreme" or "boundary". The two terms lie on the outside of the proposition, joined by the act of affirmation or denial.
For Aristotle, a term is simply a "thing", a part of a proposition. For early modern logicians like Arnauld (whose Port Royal Logic is the best-known textbook of the period) it is a psychological entity like an "idea" or "concept". Mill thought it was a word. None of these interpretations is quite satisfactory. In asserting that something is a unicorn, we are not asserting anything of anything. Nor does "all Greeks are men" say that the ideas of Greeks are ideas of men, or that the word "Greeks" is the word "men". A proposition cannot be built from real things or ideas, but it is not just meaningless words either. This is a problem about the meaning of language that is still not entirely resolved. (See the book by Prior below for an excellent discussion of the problem).
In term logic, a "proposition" is simply a form of language: a particular kind of sentence, in subject and predicate are combined, so as to assert something true or false. It is not a thought, or an abstract entity or anything. The word "propositio" is from the Latin, meaning the first premise of a syllogism. Aristotle uses the word premise (protasis) as a sentence affirming or denying one thing of another (AP 1. 1 24a 16), so a premise is also a form of words.
However, in modern philosophical logic, it now means what is asserted as the result of uttering a sentence, and is regarded as something peculiarly mental or intentional. Writers before Frege and Russell, such as Bradley, sometimes spoke of the "judgment" as something distinct from a sentence, but this is not quite the same. As a further confusion, the word "sentence" derives from the Latin, meaning an opinion or judgment, and so is equivalent to "proposition".
The quality of a proposition is whether it is affirmative (the predicate is affirmed of the subject) or negative (the predicate is denied of the subject). Thus "every man is a mortal" is affirmative, since "mortal" is affirmed of "man". "No men are immortals" is negative, since "immortal" is denied of "man".
The quantity of a proposition is whether it is universal (the predicate is affirmed or denied of "the whole" of the subject) or particular (the predicate is affirmed or denied of only "part of" the subject).
The distinction between singular and universal is fundamental to Aristotle's metaphysics, and not merely grammatical. A singular term for Aristotle is that which is of such a nature as to be predicated of only one thing, thus "Callias". (De Int 7). It is not predicable of more than one thing: "Socrates is not predicable of more than one subject, and therefore we do not say every Socrates as we say every man". (Metaphysics D 9, 1018 a4). It may feature as a grammatical predicate, as in the sentence "the person coming this way is Callias". But it is still a logical subject.
He contrasts it with "universal" (katholou - "of a whole"). Universal terms are the basic materials of Aristotle's logic; propositions containing singular terms do not form part of it at all. They are mentioned briefly in the De Interpretatione. Afterwards, in the chapters of the Prior Analytics where Aristotle methodically sets out his theory of the syllogism, they are entirely ignored.
The reason for this omission is clear. The essential feature of term logic is that, of the four terms in the two premises, one must occur twice. Thus
- All Greeks are men
- All men are mortal.
What is subject in one premise must be predicate in the other, and so it is necessary to eliminate from the logic any terms which cannot function both as subject and predicate. Singular terms do not function this way, so they are omitted from Aristotle's syllogistic.
In later versions of the syllogistic, singular terms were treated as universals. See for example (where it is clearly stated as received opinion) Part 2, chapter 3, of the Port Royal Logic. Thus
- All men are mortals
- All Socrates are men
- All Socrates are mortals
This is clearly awkward, and is a weakness exploited by Frege in his devastating attack on the system (from which, ultimately, it never recovered). See concept and object.
The famous syllogism "Socrates is a man ..." is frequently quoted as though from Aristotle. See for example Kapp, Greek Foundations of Traditional Logic, New York 1942, p. 17; Copleston, A History of Philosophy, Vol. I, p. 277; Russell, A History of Western Philosophy, London 1946, p. 218. In fact it is nowhere in the Organon. It is first mentioned by Sextus Empiricus (Hyp. Pyrrh. ii. 164).
- Main article: Syllogism
There can only be three terms in the syllogism, since the two terms in the conclusion are already in the premises, and one term is common to both premises. This leads to the following definitions:
- The predicate in the conclusion is called the major term, "P"
- The subject in the conclusion is called the minor term, "S"
- The common term is called the middle term "M"
- The premise containing the major term is called the major premise
- The premise containing the minor term is called the minor premise
The syllogism is always written major premise, minor premise, conclusion. Thus a syllogism of the form AII is written as
- A M-P All cats are carnivorous
- I S-M Some mammals are cats
- I S-P Some mammals are carnivorous
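To see what such a form amounts to, here is a small illustrative sketch (not part of the traditional apparatus; the function names are invented for this example). It models each term as a subset of a tiny universe and declares a figure-1 mood valid only if no assignment of sets makes both premises true and the conclusion false. A and E propositions are read here without existential import, which is the modern rather than the traditional reading.

```python
# Brute-force semantic check of a figure-1 syllogistic mood: interpret each
# term as a subset of a small universe and search for a counter-model.
from itertools import combinations, product

UNIVERSE = (0, 1, 2)  # a three-element universe suffices for these examples

# All subsets of the universe, used as candidate extensions of the terms.
SUBSETS = [frozenset(c) for n in range(len(UNIVERSE) + 1)
           for c in combinations(UNIVERSE, n)]

def holds(kind, s, p):
    """Truth of one of the four proposition types, with subject s and predicate p."""
    if kind == "A":            # All S are P
        return s <= p
    if kind == "E":            # No S are P
        return not (s & p)
    if kind == "I":            # Some S are P
        return bool(s & p)
    if kind == "O":            # Some S are not P
        return bool(s - p)
    raise ValueError(kind)

def valid_figure1(major, minor, conclusion):
    """Figure 1: major premise M-P, minor premise S-M, conclusion S-P."""
    for s, m, p in product(SUBSETS, repeat=3):
        if holds(major, m, p) and holds(minor, s, m) and not holds(conclusion, s, p):
            return False       # counter-model found, so the mood is invalid
    return True

if __name__ == "__main__":
    print(valid_figure1("A", "I", "I"))   # True  (Darii, the AII syllogism above)
    print(valid_figure1("A", "A", "A"))   # True  (Barbara)
    print(valid_figure1("I", "I", "I"))   # False (two particular premises prove nothing)
```

On this reading the AII syllogism above (Darii) and Barbara come out valid, while a mood with two particular premises does not.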
Mood and figure
The mood of a syllogism is distinguished by the quality and quantity of the two premises. There are eight valid moods: AA, AI, AE, AO, IA, EA, EI, OA.
The figure of a syllogism is determined by the position of the middle term. In figure 1, which Aristotle thought the most important, since it reflects our reasoning process most closely, the middle term is subject in the major, predicate in the minor. In figure 2, it is predicate in both premises. In figure 3, it is subject in both premises. In figure 4 (which Aristotle did not discuss, however), it is predicate in the major, subject in the minor. Thus
|              |Figure 1|Figure 2|Figure 3|Figure 4|
|Major premise |M-P     |P-M     |M-P     |P-M     |
|Minor premise |S-M     |S-M     |M-S     |M-S     |
|Conclusion    |S-P     |S-P     |S-P     |S-P     |
Conversion and reduction
Conversion is the process of changing one proposition into another simply by re-arranging the terms. Simple conversion is a change which preserves the meaning of the proposition. Thus
- Some S is a P converts to Some P is an S
- No S are P converts to no P are S
Conversion per accidens involves changing the proposition into another which is implied by it, but not the same. Thus
- All S are P converts to Some P are S
(Notice that for conversion per accidens to be valid, there is an existential assumption involved in "all S are P")
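As a quick illustration of the two conversions just described, here is a sketch with invented names that represents a proposition as a (kind, subject, predicate) triple; the per accidens case carries the existential assumption noted above.

```python
# Conversion of categorical propositions, represented as (kind, subject, predicate).

def convert_simple(prop):
    """Simple conversion: I and E propositions keep their truth value when
    subject and predicate are swapped."""
    kind, s, p = prop
    if kind not in ("I", "E"):
        raise ValueError("A and O propositions have no simple converse")
    return (kind, p, s)

def convert_per_accidens(prop):
    """Conversion per accidens: 'All S are P' yields 'Some P are S',
    on the assumption that S is non-empty."""
    kind, s, p = prop
    if kind != "A":
        raise ValueError("only A propositions are converted per accidens here")
    return ("I", p, s)

if __name__ == "__main__":
    print(convert_simple(("I", "S", "P")))        # ('I', 'P', 'S')
    print(convert_simple(("E", "S", "P")))        # ('E', 'P', 'S')
    print(convert_per_accidens(("A", "S", "P")))  # ('I', 'P', 'S')
```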
As explained, Aristotle thought that only in the first or perfect figure was the process of reasoning completely transparent. The validity of an imperfect syllogism is evident only when, by conversion of its premises, it can be turned into some mood of the first figure. This was called reduction by the scholastic philosophers.
It is easiest to explain the rules of reduction, using the so-called mnemonic lines first introduced by William of Shyreswood (1190 - 1249) in a manual written in the first half of the thirteenth century.
- Barbara, Celarent, Darii, Ferioque, prioris
- Cesare, Camestres, Festino, Baroco, secundae
- Tertia, Darapti, Disamis, Datisi, Felapton, Bocardo, Ferison, habet
- Quarta insuper addit Bramantip, Camenes, Dimaris, Fesapo, Fresison.
Each word represents the formula of a valid mood and is interpreted according to the following rules:
- The first three vowels indicate the quantity and quality of the three propositions, thus Barbara: AAA, Celarent, EAE and so on.
- The initial consonant of each formula after the first four indicates that the mood is to be reduced to that mood among the first four which has the same initial
- "s" immediately after a vowel signifies that the corresponding proposition is to be converted simply during reduction,
- "p" in the same position indicates that the proposition is to be converted partially or per accidens,
- "m" between the first two vowels of a formula signifies that the premises are to be transposed,
- "c" appearing after one of the first two vowels signifies that the premise is to be replaced by the negative of the conclusion for reduction per impossibile.
There are a number of maxims and verses associated with the syllogistic. Their origin is mostly unknown. For example
The letters A, I, E, and O are taken from the vowels of the Latin Affirmo and Nego.
- Asserit A, negat E, sed universaliter ambae
- Asserit I, negat O, sed particulariter ambo
Shyreswood's version of the "Barbara" verses is as follows:
- Barbara celarent darii ferio baralipton
- Celantes dabitis fapesmo frisesomorum;
- Cesare campestres festino baroco; darapti
- Felapton disamis datisi bocardo ferison.
A later version of the lines runs:
- Barbara, Celarent, Darii, Ferioque prioris
- Cesare, Camestres, Festino, Baroco secundae
- Tertia grande sonans recitat Darapti, Felapton
- Disamis, Datisi, Bocardo, Ferison.
- Quartae Sunt Bamalip, Calames, Dimatis, Fesapo, Fresison.
Decline of term logic
Term logic dominated logic throughout most of its history until the advent of modern or predicate logic a century ago, in the late nineteenth and early twentieth century, which led to its eclipse.
The decline was ultimately due to the superiority of the new logic in the mathematical reasoning for which it was designed. Term logic cannot, for example, explain the inference from "every car is a vehicle" to "every owner of a car is an owner of a vehicle", which is elementary in predicate logic. It is confined to syllogistic arguments, and cannot explain inferences involving multiple generality. Relations and identity must be treated as subject-predicate relations, which makes the identity statements of mathematics difficult to handle, and of course the singular term and singular proposition, which are essential to modern predicate logic, do not properly feature at all.
Note, however, that the decline was a protracted affair. It is simply not true that there was a brief "Frege Russell" period 1890-1910 in which the old logic vanished overnight. The process took more like 70 years. Even Quine's Methods of Logic devotes considerable space to the syllogistic, and Joyce's manual, whose final edition was in 1949, does not mention Frege or Russell at all.
The innovation of predicate logic led to an almost complete abandonment of the traditional system. It is customary to revile or disparage it in standard textbook introductions. However, it is not entirely in disuse. Term logic was still part of the curriculum in many Catholic schools until the late part of the twentieth century, and taught in places even today. More recently, some philosophers have begun work on a revisionist programme to reinstate some of the fundamental ideas of term logic. Their main complaint about modern logic is
- that Predicate Logic is in a sense unnatural, in that its syntax does not follow the syntax of the sentences that figure in our everyday reasoning. It is, as Quine acknowledges, "Procrustean", employing an artificial language of function and argument, quantifier and bound variable.
- that there are still embarrassing theoretical problems faced by Predicate Logic. Possibly the most serious are those of empty names and of identity statements.
Even orthodox and entirely mainstream philosophers such as Gareth Evans have voiced discontent:
- "I come to semantic investigations with a preference for homophonic theories; theories which try to take serious account of the syntactic and semantic devices which actually exist in the language ...I would prefer [such] a theory ... over a theory which is only able to deal with [sentences of the form "all A's are B's"] by "discovering" hidden logical constants ... The objection would not be that such [Fregean] truth conditions are not correct, but that, in a sense which we would all dearly love to have more exactly explained, the syntactic shape of the sentence is treated as so much misleading surface structure" Evans (1977)
Fred Sommers has designed a formal logic which he claims is consistent with our innate logical abilities, and which resolves the philosophical difficulties. See, for example, his seminal work The Logic of Natural Language. The problem, as Sommers says, is that "the older logic of terms is no longer taught and modern predicate logic is too difficult to be taught". School children a hundred years ago were taught a usable form of formal logic; today – in the information age – they are taught nothing.
- Joyce, G.H. Principles of Logic, 3rd edition, London 1949. This was a manual written for (Catholic) schools, probably in the early 1910s. It is splendidly out of date, there being no hint even of the existence of modern logic, yet it is completely authoritative within its own subject area. There are also many useful references to medieval and ancient sources.
- Lukasiewicz, J., Aristotle's Syllogistic, Oxford 1951. An excellent, meticulously researched book by one of the founding fathers of modern logic, though his propaganda for the modern system comes across, these days, as a little strident.
- Prior, A.N. The Doctrine of Propositions & Terms London 1976. An excellent book that covers the philosophy around the syllogistic.
- Mill, J.S. A System of Logic, (8th edition) London 1904. The eighth edition is the best, containing all the original plus later notes written by Mill. Much of it is propaganda for Mill's philosophy, but it contains many useful thoughts on the syllogistic, and it is a historical document, as it was so widely read in Europe and America. It may have been an influence on Frege.
- Aristotle, Analytica Posteriora Books I & II, transl. G.R.G. Mure, in The Works of Aristotle, ed. Ross, Oxford 1924. Ross's edition is still (in this writer's view) the best English translation of Aristotle. There are still many copies available on the second-hand market, handsomely bound and beautiful.
- Evans, G. "Pronouns, Quantifiers and Relative Clauses" Canadian Journal of Philosophy 1977
- Sommers, F. The Logic of Natural Language, Oxford 1982. An overview and analysis of the history of term logic, and a critique of the logic of Frege. | http://www.biologydaily.com/biology/Term_Logic | 13 |
19 | 2.1 Prefix trie and string matching
The prefix trie for string X is a tree where each edge is labeled with a symbol and the string concatenation of the edge symbols on the path from a leaf to the root gives a unique prefix of X. On the prefix trie, the string concatenation of the edge symbols from a node to the root gives a unique substring of X, called the string represented by the node. Note that the prefix trie of X is identical to the suffix trie of the reverse of X, and therefore suffix trie theories can also be applied to the prefix trie.
With the prefix trie, testing whether a query W is an exact substring of X is equivalent to finding the node that represents W, which can be done in O(|W|) time by matching each symbol in W to an edge, starting from the root. To allow mismatches, we can exhaustively traverse the trie and match W to each possible path. We will later show how to accelerate this search by using prefix information of W. Figure 1 gives an example of the prefix trie for ‘GOOGOL’. The suffix array (SA) interval in each node is explained in Section 2.3.
Fig. 1. Prefix trie of string ‘GOOGOL’. Symbol ∧ marks the start of the string. The two numbers in a node give the SA interval of the string represented by the node (see Section 2.3).
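As a concrete illustration of the data structure, here is a minimal sketch (invented names, not code from the paper) that builds a prefix trie by inserting each prefix of X in reverse, so that walking a query backwards from the root finds the node that represents it.

```python
# Minimal prefix-trie sketch: each node is a dict mapping a symbol to a child.
# Because the string represented by a node is read from the node up to the
# root, a query is matched by walking its symbols from last to first.

def build_prefix_trie(x):
    """Insert every prefix of x, reversed, as a root-to-leaf path."""
    root = {}
    for i in range(len(x)):
        node = root
        for ch in reversed(x[:i + 1]):
            node = node.setdefault(ch, {})
    return root

def is_substring(trie, w):
    """Exact matching: follow w backwards from the root, one edge per symbol."""
    node = trie
    for ch in reversed(w):
        if ch not in node:
            return False
        node = node[ch]
    return True

if __name__ == "__main__":
    trie = build_prefix_trie("GOOGOL")
    print(is_substring(trie, "GO"))    # True
    print(is_substring(trie, "OGO"))   # True
    print(is_substring(trie, "LO"))    # False
```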
2.2 Burrows–Wheeler transform
Let Σ be an alphabet. Symbol $ is not present in Σ and is lexicographically smaller than all the symbols in Σ. A string X = a_0 a_1 … a_{n−1} always ends with symbol $ (i.e. a_{n−1} = $) and this symbol only appears at the end. Let X[i] = a_i, i = 0, 1, …, n−1, be the i-th symbol of X, X[i, j] = a_i … a_j a substring and X_i = X[i, n−1] a suffix of X. The suffix array S of X is a permutation of the integers 0…n−1 such that S(i) is the start position of the i-th smallest suffix. The BWT of X is defined as B[i] = $ when S(i) = 0 and B[i] = X[S(i)−1] otherwise. We also define the length of string X as |X| and therefore |X| = |B| = n. Figure 2 gives an example of how to construct the BWT and suffix array.
Fig. 2. Constructing the suffix array and BWT string for X = googol$. String X is circulated to generate seven strings, which are then lexicographically sorted. After sorting, the positions of the first symbols form the suffix array (6, 3, 0, 5, 2, 4, 1) and the concatenation of the last symbols gives the BWT string lo$oogg.
The algorithm shown in Figure 2 is quadratic in time and space. However, this is not necessary. In practice, we usually construct the suffix array first and then generate the BWT. Most algorithms for constructing the suffix array require at least n⌈log2 n⌉ bits of working space, which amounts to 12 GB for the human genome. Recently, Hon et al. gave a new algorithm that uses n bits of working space and only requires <1 GB memory at peak time for constructing the BWT of the human genome. This algorithm is implemented in BWT-SW (Lam et al.). We adapted its source code to make it work with BWA.
2.3 Suffix array interval and sequence alignment
If string W is a substring of X, the position of each occurrence of W will occur in an interval in the suffix array. This is because all the suffixes that have W as prefix are sorted together. Based on this observation, we define:

R̲(W) = min{k : W is the prefix of X_S(k)}    (1)
R̄(W) = max{k : W is the prefix of X_S(k)}    (2)

In particular, if W is an empty string, R̲(W) = 0 and R̄(W) = n − 1. The interval [R̲(W), R̄(W)] is called the SA interval of W, and the set of positions of all occurrences of W in X is {S(k) : R̲(W) ≤ k ≤ R̄(W)}. For example in Figure 1, the SA interval of string 'go' is [1, 2]. The suffix array values in this interval are 3 and 0, which give the positions of all the occurrences of 'go' in X.
Knowing the intervals in suffix array we can get the positions. Therefore, sequence alignment is equivalent to searching for the SA intervals of substrings of X that match the query. For the exact matching problem, we can find only one such interval; for the inexact matching problem, there may be many.
2.4 Exact matching: backward search
Let C(a) be the number of symbols in X[0, n−2] that are lexicographically smaller than a ∈ Σ and O(a, i) the number of occurrences of a in B[0, i]. Ferragina and Manzini (2000) proved that, if W is a substring of X,

R̲(aW) = C(a) + O(a, R̲(W) − 1) + 1    (3)
R̄(aW) = C(a) + O(a, R̄(W))    (4)

and that R̲(aW) ≤ R̄(aW) if and only if aW is a substring of X. This result makes it possible to test whether W is a substring of X and to count the occurrences of W in O(|W|) time by iteratively calculating R̲ and R̄ from the end of W. This procedure is called backward search.
It is important to note that Equations (3) and (4) actually realize the top-down traversal on the prefix trie of X, given that we can calculate the SA interval of a child node in constant time if we know the interval of its parent. In this sense, backward search is equivalent to exact string matching on the prefix trie, but without explicitly putting the trie in the memory.
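A compact sketch of backward search as just described (our illustration; a real implementation precomputes C(·) and O(·, ·) as tables instead of recomputing them per query):

```python
# Exact matching by backward search.  C(a): symbols in X[0, n-2] smaller
# than a.  O(a, i): occurrences of a in B[0, i].  The SA interval of W is
# narrowed one symbol at a time, from the last symbol of W to the first.

def backward_search(x, w):
    sa = sorted(range(len(x)), key=lambda i: x[i:])
    b = ["$" if i == 0 else x[i - 1] for i in sa]
    n = len(x)

    def C(a):
        return sum(1 for ch in x[:n - 1] if ch < a)

    def O(a, i):
        return b[:i + 1].count(a) if i >= 0 else 0

    lo, hi = 0, n - 1                      # SA interval of the empty string
    for a in reversed(w):
        lo = C(a) + O(a, lo - 1) + 1
        hi = C(a) + O(a, hi)
        if lo > hi:
            return None                    # W does not occur in X
    return lo, hi, [sa[k] for k in range(lo, hi + 1)]

print(backward_search("googol$", "go"))    # (1, 2, [3, 0])
```

Running it on the 'googol$' example reproduces the SA interval [1, 2] and the occurrence positions 3 and 0 quoted in Section 2.3.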
2.5 Inexact matching: bounded traversal/backtracking
Figure 3 gives a recursive algorithm to search for the SA intervals of substrings of X that match the query string W with no more than z differences (mismatches or gaps). Essentially, this algorithm uses backward search to sample distinct substrings from the genome. This process is bounded by the D(·) array, where D(i) is the lower bound of the number of differences in W[0, i]. The better the D is estimated, the smaller the search space and the more efficient the algorithm is. A naive bound is achieved by setting D(i)=0 for all i, but the resulting algorithm is clearly exponential in the number of differences and would be less efficient.
Fig. 3. Algorithm for inexact search of SA intervals of substrings that match W. Reference X is $ terminated, while W is A/C/G/T terminated. Procedure InexactSearch(W, z) returns the SA intervals of substrings that match W with no more than z differences (mismatches or gaps) …
The CalculateD procedure in Figure 3 gives a better, though not optimal, bound. It is conceptually equivalent to the one described in Figure 4, which is simpler to understand. We use the BWT of the reverse (not complemented) reference sequence to test if a substring of W is also a substring of X. Note that to do this test with the BWT string B alone would make CalculateD an O(|W|^2) procedure, rather than O(|W|).
Fig. 4. Equivalent algorithm to calculate D(i).
To understand the role of D, we come back to the example of searching for W=LOL in X=GOOGOL$ (Fig. 1). If we set D(i)=0 for all i and disallow gaps (removing the two star lines in the algorithm), the call graph of InexRecur, which is a tree, effectively mimics the search route shown as the dashed line in Figure 1. However, with CalculateD, we know that D(0)=0 and D(1)=D(2)=1. We can then avoid descending into the 'G' and 'O' subtrees in the prefix trie to get a much smaller search space.
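A sketch of the simpler, conceptually equivalent bound (our rendering of the idea, with the substring test left as a callback; in BWA this test is done with the BWT of the reversed reference):

```python
# Lower bound on the number of differences: scan W left to right and start a
# new chunk whenever the current chunk stops occurring in X; each forced
# chunk break implies at least one more difference somewhere in W[0, i].

def calculate_D(w, occurs_in_x):
    """occurs_in_x(s) should report whether s is a substring of X."""
    D = [0] * len(w)
    z, j = 0, 0
    for i in range(len(w)):
        if not occurs_in_x(w[j:i + 1]):
            z += 1
            j = i + 1
        D[i] = z
    return D

x = "GOOGOL"
print(calculate_D("LOL", lambda s: s in x))   # [0, 1, 1]
```

On the GOOGOL/LOL example it reproduces D(0)=0 and D(1)=D(2)=1 as stated above.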
The algorithm in Figure 3 guarantees to find all the intervals allowing maximum z differences. It is complete in theory, but in practice, we also made various modifications. First, we pay different penalties for mismatches, gap opens and gap extensions, which is more realistic for biological data. Second, we use a heap-like data structure to keep partial hits rather than using recursion. The heap-like structure is prioritized on the alignment score of the partial hits to make BWA always find the best intervals first. The reverse complemented read sequence is processed at the same time. Note that the recursion described in Figure 3 effectively mimics a depth-first search (DFS) on the prefix trie, while BWA implements a breadth-first search (BFS) using this heap-like data structure. Third, we adopt an iterative strategy: if the top interval is repetitive, we do not search for suboptimal intervals by default; if the top interval is unique and has z differences, we only search for hits with up to z + 1 differences. This iterative strategy accelerates BWA while retaining the ability to generate mapping quality. However, this also makes BWA's speed sensitive to the mismatch rate between the reads and the reference, because finding hits with more differences is usually slower. Fourth, we allow setting a limit on the maximum allowed differences in the first few tens of base pairs on a read, which we call the seed sequence. Given 70 bp simulated reads, alignment with maximum two differences in the 32 bp seed is 2.5× faster than without seeding. The alignment error rate, which is the fraction of wrong alignments out of confident mappings in simulation (see also Section 3.2), only increases from 0.08% to 0.11%. Seeding is less effective for shorter reads.
2.6 Reducing memory
The algorithm described above needs to load the occurrence array O and the suffix array S in the memory. Holding the full O and S arrays requires huge memory. Fortunately, we can reduce the memory by only storing a small fraction of the O and S arrays, and calculating the rest on the fly. BWT-SW (Lam et al.) and Bowtie (Langmead et al.) use a similar strategy, which was first introduced by Ferragina and Manzini (2000).
Given a genome of size n, the occurrence array O(·, ·) requires 4n⌈log2 n⌉ bits, as each integer takes ⌈log2 n⌉ bits and there are 4n of them in the array. In practice, we store in memory O(·, k) for k that is a factor of 128 and calculate the rest of the elements using the BWT string B. When we use two bits to represent a nucleotide, B takes 2n bits. The memory for backward search is thus 2n + n⌈log2 n⌉/32 bits. As we also need to store the BWT of the reverse genome to calculate the bound, the memory required for calculating intervals is doubled, or about 2.3 GB for a 3 Gb genome.
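The sampling idea can be sketched as follows (our illustration; the step of 128 from the text is replaced by a tiny step so the example stays readable):

```python
# Reduced-memory occurrence array: store O(a, k) only at sampled positions
# and count the few remaining symbols from the BWT string B itself when
# answering a query.

class SampledOcc:
    def __init__(self, b, step=128):
        self.b, self.step = b, step
        self.samples = []                       # samples[j][a] = O(a, j*step - 1)
        counts = {a: 0 for a in set(b)}
        for i, ch in enumerate(b):
            if i % step == 0:
                self.samples.append(dict(counts))
            counts[ch] += 1

    def O(self, a, i):
        if i < 0:
            return 0
        j = i // self.step
        cnt = self.samples[j].get(a, 0)
        for k in range(j * self.step, i + 1):   # at most `step` extra symbols
            if self.b[k] == a:
                cnt += 1
        return cnt

occ = SampledOcc("lo$oogg", step=4)
print(occ.O("o", 6))   # 3 occurrences of 'o' in B[0..6]
```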
Enumerating the position of each occurrence requires the suffix array S. If we put the entire S in memory, it would use n⌈log2 n⌉ bits. However, it is also possible to reconstruct the entire S when knowing part of it. In fact, S and the inverse compressed suffix array (inverse CSA) Ψ−1 (Grossi and Vitter, 2000) satisfy

S(k) = S(Ψ−1(j)(k)) + j    (5)

where Ψ−1(j)(k) denotes applying the transform Ψ−1 to k repeatedly, j times. The inverse CSA Ψ−1 can be calculated with the occurrence array O:

Ψ−1(k) = C(B[k]) + O(B[k], k)

In BWA, we only store in memory S(k) for k that can be divided by 32. For k that is not a factor of 32, we repeatedly apply Ψ−1 until, for some j, Ψ−1(j)(k) is a factor of 32; then S(Ψ−1(j)(k)) can be looked up and S(k) can be calculated with Equation (5).
In all, the alignment procedure uses 4n + n⌈log2 n⌉/8 bits, or n bytes for genomes <4 Gb. This includes the memory for the BWT string, partial occurrence array and partial suffix array for both the original and the reversed genome. Additionally, a few hundred megabytes of memory are required for the heap, cache and other data structures.
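A sketch of this reconstruction (ours): keep S(k) only at sampled SA indices and walk with the inverse CSA until a sample is hit, adding back the number of steps.

```python
# Keep S(k) only for sampled k; recover any other S(k) by repeatedly applying
# the inverse CSA (computed from C and O) and adding back the steps taken.

def make_locate(x, step=32):
    sa = sorted(range(len(x)), key=lambda i: x[i:])
    b = ["$" if i == 0 else x[i - 1] for i in sa]
    C = {a: sum(1 for ch in x[:-1] if ch < a) for a in set(x)}
    O = lambda a, i: b[:i + 1].count(a)
    samples = {k: sa[k] for k in range(len(sa)) if k % step == 0}

    def locate(k):
        j = 0
        while k not in samples:
            if b[k] == "$":                # S(k) = 0 by definition of the BWT
                return j
            k = C[b[k]] + O(b[k], k)       # inverse CSA: S(new k) = S(old k) - 1
            j += 1
        return samples[k] + j
    return locate

locate = make_locate("googol$", step=3)
print([locate(k) for k in range(7)])   # [6, 3, 0, 5, 2, 4, 1]
```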
2.7 Other practical concerns for Illumina reads
2.7.1 Ambiguous bases
Non-A/C/G/T bases on reads are simply treated as mismatches, which is implicit in the algorithm. Non-A/C/G/T bases on the reference genome are converted to random nucleotides. Doing so may lead to false hits to regions full of ambiguous bases. Fortunately, the chance that this may happen is very small given relatively long reads. We tried 2 million 32 bp reads and did not see any reads mapped to poly-N regions by chance.
2.7.2 Paired-end mapping
BWA supports paired-end mapping. It first finds the positions of all the good hits, sorts them according to the chromosomal coordinates and then does a linear scan through all the potential hits to pair the two ends. Calculating all the chromosomal coordinates requires looking up the suffix array frequently. This pairing process is time consuming, as generating the full suffix array on the fly with the method described above is expensive. To accelerate pairing, we cache large intervals. This strategy halves the time spent on pairing.
In pairing, BWA processes 256K read pairs in a batch. In each batch, BWA loads the full BWA index into memory, generates the chromosomal coordinate for each occurrence, estimates the insert size distribution from read pairs with both ends mapped with mapping quality higher than 20, and then pairs them. After that, BWA clears the BWT index from the memory, loads the 2 bit encoded reference sequence and performs Smith–Waterman alignment for unmapped reads whose mates can be reliably aligned. Smith–Waterman alignment rescues some reads with excessive differences.
2.7.3 Determining the allowed maximum number of differences
Given a read of length m, BWA only tolerates a hit with at most k differences (mismatches or gaps), where k is chosen such that <4% of m-long reads with 2% uniform base error rate may contain differences more than k. With this configuration, for 15–37 bp reads, k equals 2; for 38–63 bp, k=3; for 64–92 bp, k=4; for 93–123 bp, k=5; and for 124–156 bp reads, k=6.
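The quoted thresholds can be reproduced with a small calculation; the sketch below (ours) models the number of errors with a Poisson approximation to the binomial tail, which matches the boundaries listed above:

```python
# Pick the smallest k such that the probability of more than k errors in an
# m bp read (2% uniform error rate, Poisson approximation) is below 4%.

from math import exp

def max_diff(m, err=0.02, thres=0.04):
    lam, k = m * err, 0
    term = cdf = exp(-lam)
    while 1.0 - cdf >= thres:
        k += 1
        term *= lam / k
        cdf += term
    return k

for m in (37, 38, 63, 64, 92, 93, 123, 124):
    print(m, max_diff(m))   # 2 3 3 4 4 5 5 6, matching the boundaries quoted above
```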
2.7.4 Generating mapping quality scores
For each alignment, BWA calculates a mapping quality score, which is the Phred-scaled probability of the alignment being incorrect. The algorithm is similar to MAQ's except that in BWA we assume the true hit can always be found. We made this modification because we are aware that MAQ's formula overestimates the probability of missing the true hit, which leads to underestimated mapping quality. Simulation reveals that BWA may overestimate mapping quality due to this modification, but the deviation is relatively small. For example, BWA wrongly aligns 11 reads out of 1 569 108 simulated 70 bp reads mapped with mapping quality 60. The error rate 7 × 10−6 (= 11/1 569 108) for these Q60 mappings is higher than the theoretical expectation 10−6.
2.8 Mapping SOLiD reads
For SOLiD reads, BWA converts the reference genome to dinucleotide ‘color’ sequence and builds the BWT index for the color genome. Reads are mapped in the color space where the reverse complement of a sequence is the same as the reverse, because the complement of a color is itself. For SOLiD paired-end mapping, a read pair is said to be in the correct orientation if either of the two scenarios is true: (i) both ends mapped to the forward strand of the genome with the R3 read having smaller coordinate; and (ii) both ends mapped to the reverse strand of the genome with the F3 read having smaller coordinate. Smith–Waterman alignment is also done in the color space.
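The color space referred to here is the standard SOLiD two-base encoding, in which a color is equivalent to the XOR of the 2-bit codes of two adjacent bases; the following sketch (ours) illustrates why reverse-complementing a read corresponds to simply reversing its color string:

```python
# The SOLiD colour of two adjacent bases is the XOR of their 2-bit codes, so
# complementing both bases leaves the colour unchanged; the colour string of
# the reverse complement is therefore just the reverse of the original.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def colors(seq):
    return [CODE[a] ^ CODE[b] for a, b in zip(seq, seq[1:])]

def revcomp(seq):
    return "".join(COMP[b] for b in reversed(seq))

read = "ACGTGA"
print(colors(read))                                   # [1, 3, 1, 1, 2]
print(colors(revcomp(read)))                          # [2, 1, 1, 3, 1]
print(colors(revcomp(read)) == colors(read)[::-1])    # True
```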
After the alignment, BWA decodes the color read sequences to nucleotide sequences using dynamic programming. Given a nucleotide reference subsequence b1 b2 … b(l+1) and a color read sequence c1 c2 … cl mapped to the subsequence, BWA infers a nucleotide sequence b̂1 b̂2 … b̂(l+1) such that it minimizes the following objective function: the sum over all positions of a penalty q′ for each inferred base b̂i that differs from the reference base bi, plus a penalty qi for each read color ci that differs from g(b̂i, b̂i+1), where q′ is the Phred-scaled probability of a mutation, qi is the Phred quality of color ci and function g(b, b′) gives the color corresponding to the two adjacent nucleotides b and b′. Essentially, we pay a penalty q′ for each difference from the reference and a penalty qi for each color mismatch.
This optimization can be done by dynamic programming, because the best decoding beyond position i only depends on the choice of b̂i. Let fi(b) be the best decoding score up to position i, given that base b is chosen at position i. The iteration computes fi+1 from fi by adding the two penalties above and minimizing over the choice of base. BWA approximates base qualities as follows: the i-th base quality is calculated from these decoding scores.
BWA outputs the sequence b̂ and the corresponding base qualities as the final result for SOLiD mapping.
[1.5.] “Deduction and Induction.”
[1.5.1.] Three Ways to Evaluate an Argument.
There are three aspects of an argument, and it is possible to evaluate each of these aspects separately (i.e., to judge how good or bad each aspect is, independent of the others):
· logical: the logical connection between premises and conclusion, i.e., the degree to which the premises provide evidential support to the conclusion (if the premises are all true, then what is the likelihood that the conclusion is true?).
· factual (a.k.a. material): the truth or falsity of the premises.
· rhetorical: the persuasiveness of the argument to a given audience; if the argument succeeds in persuading the audience to accept its conclusion, then it is rhetorically effective, even if its logic is bad and it includes false premises.
In this course, we will usually focus only on the logical evaluation of arguments.
As we saw last time, validity is a desirable logical characteristic of arguments. Here is a more complete definition:
validity (df.): A valid argument is one in which
1. the truth of the premises would guarantee the truth of the conclusion;
2. it is impossible for the premises to be all true and the conclusion to be false at the same time;
3. if the premises were true, then the conclusion would have to be true as well.
(These are three equivalent definitions of “validity”; they all mean the same thing.)
An argument that is not valid is invalid:
invalidity (df.): an invalid argument is one in which
1. the truth of the premises would not guarantee the truth of the conclusion;
2. it is possible for the premises to be all true and the conclusion to be false at the same time;
(These are two equivalent definitions of “invalidity”; they mean the same thing)
1. All wars are started by miscalculation.
2. The Iraq conflict was a war.
\ 3. The Iraq conflict was started by miscalculation.
1. If Bonny has had her appendix taken out, then she doesn’t have to worry about getting appendicitis.
2. She has had her appendix taken out.
\ 3. She doesn’t have to worry about getting appendicitis.
These arguments are valid, whether or not they have true premises. As we saw last time, an argument can have false premises and still be valid. Remember that validity involves only the logical connection between premises and conclusion (i.e., the logical aspect of the argument); it does not involve the actual truth or falsity of the premises (i.e., the factual aspect of the argument). So the following argument is valid, even though it has two false premises:
1. All cars run on peanut butter.
2. Giraffes are cars.
\ 3. Giraffes run on peanut butter.
IMPORTANT: In logic and philosophy, the word “valid” does not mean exactly the same thing that it does in ordinary English. For the purposes of this class, “valid” means only what it is defined to mean above: if the premises were true, then the conclusion would have to be true, as well.
A note about terminology: Validity is frequently called “deductive validity.” In fact, the names “validity” and “deductive validity” are interchangeable. Similarly, the names “invalidity” and “deductive invalidity” are interchangeable.
[1.5.3.] Inductive Strength.
Your text describes validity in terms of information “contained” in the premises and conclusion. (p.5) This seems to be a correct description of many valid arguments, including the two given above. But it doesn’t seem correct about all valid arguments, e.g.,
1. Barack Obama is President of the United States.
\ 2. Either Barack Obama is President of the United States or the moon is made of cheese.
This is a valid argument, but it does not seem correct to say that all the information contained in the conclusion is contained in the premise, not even implicitly.
The textbook describes an “inductive argument” as an argument that has a conclusion that goes beyond what is contained in its premises (p.6). But as we just saw, the conclusions of some deductively valid arguments also go beyond what is contained in their premises.
It is better not to think of arguments as divided into two classes, deductive and inductive. It is more accurate to think of two standards which we can apply in evaluating the logical aspect of an argument.
We have already seen one standard: we can ask whether the truth of the premises would guarantee the truth of the conclusion. If so, the argument is valid; if not, it is invalid.
Here is the second standard: we can ask whether the truth of the premises would (not guarantee the truth of the conclusion, but) make the truth of the conclusion likely, or probable. If so, then the argument is inductively strong:
inductive strength (df.): an inductively strong argument is one in which the truth of all the premises would make it probable/likely that the conclusion is true, but would not guarantee that the conclusion is true.
Examples of inductively strong arguments given in the text on p.6:
1. The sun has always risen every morning so far.
\ 2. The sun will rise tomorrow (or every morning).
1. All of the movies produced in recent years by George Lucas have been successful.
\ 2. The latest film produced by Lucas will be successful.
Two important points to notice about inductive strength:
A. it comes in degrees, unlike validity, which is all-or-nothing.
B. the conclusion of an inductively strong argument can be false, even if all its premises are true (unlike the conclusion of a valid argument, which cannot be false if the argument’s premises are all true)
So you should think of arguments as being divisible into three groups:
· deductively valid
· inductively strong (to some degree)
· neither deductively valid nor inductively strong (to any degree)
This is less problematic than the two-way division suggested by the textbook, into deductive and inductive arguments.
[1.6.] “Argument Forms.”
An argument form is the logical structure or “skeleton” of an argument.
See the argument about Art on p.7, which uses the following form:
1. ____________ or ........................
2. It’s not true that ________________
\ 3. ........................
The Art argument is valid because of its form. Any argument with this form is valid.
· arguments 1 through 4 are all valid; they have the same form as the Art argument
· arguments 5 through 8 are all valid; they share this form:
\ 3. ..........................
Any argument with this form is valid.
We will be learning a number of different valid argument forms. (Most of the rules on the inside front cover of your textbook correspond to some valid argument form or other.)
[1.7.] “Truth and Validity.”
The only impossible combination of (a) validity/invalidity, (b) truth/falsehood and (c) premises/conclusion is: a valid argument with all true premises and a false conclusion. Every other combination is possible.
See EXAMPLES on pp.9-10.
· Notice that four arguments (the unnumbered example about reading this book (immediately precedes no.6); 6; 7 and 10) have the same invalid form.
So, from the fact that an argument is invalid, you cannot infer anything about the truth or falsity of its premises or conclusions.
EXERCISE 1-2 (pp.10-11)
§ complete ALL problems: for the even-numbered questions, come up with different examples than those in the back of the book; we will go through these at the beginning of the next class.
soundness (df.): A sound argument is an argument that (1) is valid and (2) has all true premises. If an argument lacks either of these characteristics, it is unsound.
Notice that in evaluating an argument as sound or unsound, you are commenting on both its logical aspect and its factual aspect.
consistency (df.): A consistent set of statements is one in which it is possible for all statements to be true (it does not matter whether any of them are actually true; what matters is only that it is possible that they all be true).
inconsistency (df.): An inconsistent set of statements is one in which it is impossible for all statements to be true; at least one of them must be false.
EXAMPLES, p.12-13. One of these is particularly difficult: “Harry, the barber, shaves all of those, and only those, who do not shave themselves.” This is an inconsistent sentence: it cannot possibly be true. This is revealed when we consider whether Harry shaves himself (and is thus not one of those who do not shave themselves.) If he does shave himself, then he doesn’t (since he only shaves people who don’t shave themselves); and if he doesn’t shave himself, then he does (since he shaves all of those who don’t shave themselves).
Although consistency and validity are different, there is an intimate connection between them:
An argument is valid if and only if the set consisting of the argument’s premises and the denial of its conclusion is inconsistent. If that set is consistent, then it is possible for all the premises to be true and the conclusion false, and this is exactly what makes an argument invalid.
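A small illustration (ours) of this test: enumerate all truth-value assignments and look for one that makes every premise true and the conclusion false; if none exists, the premises together with the denied conclusion are inconsistent, and the argument is valid.

```python
# Validity check: an argument is valid exactly when the set {premises,
# denial of the conclusion} is inconsistent, i.e. no assignment of truth
# values makes all of them true.

from itertools import product

def valid(premises, conclusion, variables):
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False        # consistent: premises true, conclusion false
    return True

# "A or B; it's not true that A; therefore B"  (the disjunctive form above)
premises = [lambda e: e["A"] or e["B"], lambda e: not e["A"]]
conclusion = lambda e: e["B"]
print(valid(premises, conclusion, ["A", "B"]))   # True

# "If A then B; B; therefore A"  (an invalid form, as an example of ours)
premises = [lambda e: (not e["A"]) or e["B"], lambda e: e["B"]]
conclusion = lambda e: e["A"]
print(valid(premises, conclusion, ["A", "B"]))   # False
```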
EXERCISE 1-3 (p.15)
§ complete ALL of these; check your answers to the even questions against the back of the book; we will go through the odd problems at the beginning of the next class.
Stopping point for Thursday January 12. For next time: complete exercises 1-2 and 1-3, then read pp.19-32 (chapter 2 sections 1 through 9).
Haack, Philosophy of Logics (1978, ch.2). Haack also notes that if humans were perfectly rational, there would be no need to attend to the rhetorical aspect of an argument separately from the other two aspects, since only arguments with true premises having an appropriate connection between premises and conclusion would persuade anyone.
The textbook authors might be half-conscious of the problems that arise when we think of arguments as being either deductive or else inductive. Although they give a definition of “inductive argument” (in terms of information “contained” in the conclusion), they never offer a definition of “deduction” or “deductive argument”—instead they start by defining deductive validity.
Here I follow Haack 1978, p.12. She says she’s following B. Skyrms, e.g. Choice and Chance, 1966, ch.1.
The textbook gives a more precise, technical definition in chapter four: “A group of sentence forms, all of whose substitution instances are arguments.” (p.124) We will need to cover more material before absorbing this definition.
This is a variation on a famous problem discovered by Bertrand Russell (1872-1970), now called Russell’s Paradox. For more information, see A. D. Irvine, “Russell’s Paradox,” The Stanford Encyclopedia of Philosophy (Summer 2009 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/sum2009/entries/russell-paradox/>.
Building Logical Arguments
When people say "Let's be logical" about a given situation or problem, they usually mean "Let's follow these steps:"
1. Figure out what we know to be true.
2. Spend some time thinking about it.
3. Determine the best course of action.
In logical terms, this three-step process involves building a logical argument. An argument contains a set of premises at the beginning and a conclusion at the end. In many cases, the premises and the conclusion will be linked by a series of intermediate steps. In the following sections, these steps are discussed in the order that you're likely to encounter them.
The premises are the facts of the matter: The statements that you know (or strongly believe) to be true. In many situations, writing down a set of premises is a great first step to problem solving.
For example, suppose you're a school board member trying to decide whether to endorse the construction of a new school that would open in September. Everyone is very excited about the project, but you make some phone calls and piece together your facts, or premises.
- The funds for the project won't be available until March.
- The construction company won't begin work until they receive payment.
- The entire project will take at least eight months to complete.
So far, you only have a set of premises. But when you put them together, you're closer to the final product — your logical argument. In the next section, you'll discover how to combine the premises together.
Bridging the gap with intermediate steps
Sometimes an argument is just a set of premises followed by a conclusion. In many cases, however, an argument also includes intermediate steps that show how the premises lead incrementally to that conclusion.
Using the school construction example from the previous section, you may want to spell things out like this:
According to the premises, we won't be able to pay the construction company until March, so they won't be done until at least eight months later, which is November. But, school begins in September. Therefore. . .
The word therefore indicates a conclusion and is the beginning of the final step.
Forming a conclusion
The conclusion is the outcome of your argument. If you've written the intermediate steps in a clear progression, the conclusion should be fairly obvious. For the school construction example, here it is:
The building won't be complete before school begins.
If the conclusion isn't obvious or doesn't make sense, something may be wrong with your argument. In some cases, an argument may not be valid. In others, you may have missing premises that you'll need to add.
Deciding if the argument is valid
After you've built an argument, you need to be able to decide if it's valid, which is to say if it's a good argument.
To test an argument's validity, assume that all of the premises are true and then see if the conclusion follows automatically from them. If the conclusion automatically follows, you know it's a valid argument. If not, the argument is invalid.
The school construction example argument may seem valid, but you also may have a few doubts. For example, if another source of funding became available, the construction company may start earlier and perhaps finish by September. Thus, the argument has a hidden premise called an enthymeme (pronounced EN-thi-meem), as follows:
There is no other source of funds for the project.
Logical arguments about real-world situations (in contrast to mathematical or scientific arguments) almost always have enthymemes. So, the clearer you become about the enthymemes hidden in an argument, the better chance you have of making sure your argument is valid.
Deductive vs Inductive
Deductive reasoning uses given information, premises or accepted general rules to reach a proven conclusion. On the other hand, inductive logic or reasoning involves making generalizations based upon behavior observed in specific cases. Deductive arguments are either valid or invalid. But inductive logic allows for the conclusions to be wrong even if the premises upon which it is based are correct. So inductive arguments are either strong or weak.
| | Deductive | Inductive |
|---|---|---|
| Introduction (from Wikipedia) | Deductive reasoning, also called deductive logic, is the process of reasoning from one or more general statements regarding what is known to reach a logically certain conclusion. | Inductive reasoning, also called induction or bottom-up logic, constructs or evaluates general propositions that are derived from specific examples. |
| Arguments | Arguments in deductive logic are either valid or invalid. Invalid arguments are always unsound. Valid arguments are sound only if the premises they are based upon are true. | Arguments in inductive reasoning are either strong or weak. Weak arguments are always uncogent. Strong arguments are cogent only if the premises they are based upon are true. |
| Validity of conclusions | Conclusions can be proven to be valid if the premises are known to be true. | Conclusions may be incorrect even if the argument is strong and the premises are true. |
For example: All men are mortal. John is a man. Therefore John is mortal. This is an example of valid deductive reasoning. On the other hand, here's an example of inductive reasoning: Most men are right-handed. John is a man. Therefore, John must be right-handed. The strength of this inductive argument depends upon the percentage of left-handed people in the population. In any case, the conclusion may well end up being invalid because inductive reasoning does not guarantee validity of the conclusions.
What is Deductive Reasoning?
Deductive reasoning (top-down logic) contrasts with inductive reasoning (bottom-up logic), and generally starts with one or more general statements or premises to reach a logical conclusion. If the premises are true, the conclusion must be valid. Deductive reasoning is used by scientists and mathematicians to prove their hypotheses.
Sound or Unsound arguments
With deductive reasoning, arguments may be valid or invalid, sound or unsound. If the logic is correct, i.e. the conclusion flows from the premises, then the arguments are valid. However, valid arguments may be sound or unsound. If the premises used in the valid argument are true, then the argument is sound otherwise it is unsound.
- All men have ten fingers.
- John is a man.
- Therefore, John has ten fingers.
This argument is logical and valid. However, the premise "All men have ten fingers." is incorrect because some people are born with 11 fingers. Therefore, this is an unsound argument. Note that all invalid arguments are also unsound.
Types of deductive logic
Law of detachment
A single conditional statement is made, and a hypothesis (P) is stated. The conclusion (Q) is then deduced from the statement and the hypothesis. For example, using the law of detachment in the form of an if-then statement: (1.) If an angle A>90°, then A is an obtuse angle. (2.) A=125°. (3.) Therefore, A is an obtuse angle.
The law of Syllogism
The law of syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another. For example, (1.) If the brakes fail, the car will not stop. (2.) If the car does not stop, there will be an accident. (3.) Therefore, If the brakes fail, there will be an accident.
We deduced the final statement by combining the hypothesis of the first statement with the conclusion of the second statement.
What is Inductive Reasoning?
Inductive reasoning, or induction, is reasoning from a specific case or cases and deriving a general rule. This is against the scientific method. It makes generalizations by observing patterns and drawing inferences that may well be incorrect.
Cogent and Uncogent Arguments
Strong arguments are ones where if the premise is true then the conclusion is very likely to be true. Conversely, weak inductive arguments are such that they may be false even if the premises they are based upon are true.
If the argument is strong and the premises it is based upon are true, then it is said to be a cogent argument. If the argument is weak or the premises it flows from are false or unproven, then the argument is said to be uncogent.
For example, here is an example of a strong argument.
- There are 20 cups of ice cream in the freezer.
- 18 of them are vanilla flavored.
- Therefore, all cups of ice cream are vanilla.
If in the previous argument premise #2 was that 2 of the cups are vanilla, then the conclusion that all cups are vanilla would be based upon a weak argument. In either case, all premises are true and the conclusion may be incorrect, but the strength of the argument varies.
Types of Inductive Reasoning
A generalization proceeds from a premise about a sample to a conclusion about the population. For example, (1.) A sample S from population P is chosen. Q percentage of the sample S has attribute A. (2.) Therefore, Q percentage of the population P has attribute A.
Statistical Syllogisms
A statistical syllogism proceeds from a generalization to a conclusion about an individual. For example, (1.) A proportion Q of population P has attribute A. (2.) An individual X is a member of P. (3.) Therefore, there is a probability which corresponds to Q that X has an attribute A.
More Examples
Examples of Deductive Reasoning
Quadrilateral ABCD has sides AB ll CD (parallel) and sides BC ll AD. Prove that it is a parallelogram. In order to prove this, we have to use the general statements given about the quadrilateral and reach a logical conclusion.
Another example of deductive logic is the following reasoning:
Examples of Inductive Reasoning
If the three consecutive shapes are triangle, square and pentagon which would be the next shape? If the reasoner observes the pattern, she will observe that the number of sides in the shape increase by one and so a generalization of this pattern would lead her to conclude that the next shape in the sequence would be a hexagon.
Applications of Inductive and Deductive Reasoning
- Deduction can also be temporarily used to test an induction by applying it elsewhere.
- A good scientific law is highly generalized like that in Inductive reasoning and may be applied in many situations to explain other phenomena.
- Deductive reasoning is used to deduce many experiments and prove a general rule.
Inductive reasoning is also known as hypothesis construction because any conclusions made are based on current knowledge and predictions. As with deductive arguments, biases can distort the proper application of inductive argument, which prevents the reasoner from forming the most logical conclusion based on the clues.
Availability Heuristic
The availability heuristic causes the reasoner to depend primarily upon information that is readily available. People have a tendency to rely on information that is easily accessible in the world around them. This can introduce bias in inductive reasoning.
Confirmation bias
The confirmation bias is based on the natural tendency to confirm, rather than to deny, a current hypothesis. For example, for several centuries it was believed that the sun and planets orbit the earth.
Ray tracing (graphics)
In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through pixels in an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited for applications where the image can be rendered slowly ahead of time, such as in still images and film and television visual effects, and more poorly suited for real-time applications like video games where speed is critical. Ray tracing is capable of simulating a wide variety of optical effects, such as reflection and refraction, scattering, and dispersion phenomena (such as chromatic aberration).
Optical ray tracing describes a method for producing visual images constructed in 3D computer graphics environments, with more photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through each pixel in a virtual screen, and calculating the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer or by a visual artist (typically using intermediary tools). Scenes may also incorporate data from images and models captured by means such as digital photography.
Typically, each ray must be tested for intersection with some subset of all the objects in the scene. Once the nearest object has been identified, the algorithm will estimate the incoming light at the point of intersection, examine the material properties of the object, and combine this information to calculate the final color of the pixel. Certain illumination algorithms and reflective or translucent materials may require more rays to be re-cast into the scene.
It may at first seem counterintuitive or "backwards" to send rays away from the camera, rather than into it (as actual light does in reality), but doing so is many orders of magnitude more efficient. Since the overwhelming majority of light rays from a given light source do not make it directly into the viewer's eye, a "forward" simulation could potentially waste a tremendous amount of computation on light paths that are never recorded.
Therefore, the shortcut taken in raytracing is to presuppose that a given ray intersects the view frame. After either a maximum number of reflections or a ray traveling a certain distance without intersection, the ray ceases to travel and the pixel's value is updated.
Detailed description of ray tracing computer algorithm and its genesis
What happens in nature
In nature, a light source emits a ray of light which travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons traveling along the same path. In a perfect vacuum this ray will be a straight line (ignoring relativistic effects). Any combination of four things might happen with this light ray: absorption, reflection, refraction and fluorescence. A surface may absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. It might also reflect all or part of the light ray, in one or more directions. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Less commonly, a surface may absorb some portion of the light and fluorescently re-emit the light at a longer wavelength colour in a random direction, though this is rare enough that it can be discounted from most rendering applications. Between absorption, reflection, refraction and fluorescence, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray, and refract 50%, since the two would add up to be 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, reflective and fluorescent properties again affect the progress of the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and so contribute to the final rendered image.
Ray casting algorithm
The first ray tracing algorithm used for rendering was presented by Arthur Appel in 1968. This algorithm has since been termed "ray casting". The idea behind ray casting is to shoot rays from the eye, one per pixel, and find the closest object blocking the path of that ray. Think of an image as a screen-door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to easily deal with non-planar surfaces and solids, such as cones and spheres. If a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created by using solid modeling techniques and easily rendered.
Recursive ray tracing algorithm
The next important research breakthrough came from Turner Whitted in 1979. Previous algorithms traced rays from the eye into the scene until they hit an object, but determined the ray color without recursively tracing more rays. Whitted continued the process. When a ray hits a surface, it can generate up to three new types of rays: reflection, refraction, and shadow. A reflection ray is traced in the mirror-reflection direction. The closest object it intersects is what will be seen in the reflection. Refraction rays traveling through transparent material work similarly, with the addition that a refractive ray could be entering or exiting a material. A shadow ray is traced toward each light. If any opaque object is found between the surface and the light, the surface is in shadow and the light does not illuminate it. This recursive ray tracing added more realism to ray traced images.
Advantages over other rendering methods
Ray tracing's popularity stems from its basis in a realistic simulation of lighting over other rendering methods (such as scanline rendering or ray casting). Effects such as reflections and shadows, which are difficult to simulate using other algorithms, are a natural result of the ray tracing algorithm. Relatively simple to implement yet yielding impressive visual results, ray tracing often represents a first foray into graphics programming. The computational independence of each ray makes ray tracing amenable to parallelization.
A serious disadvantage of ray tracing is performance. Scanline algorithms and other algorithms use data coherence to share computations between pixels, while ray tracing normally starts the process anew, treating each eye ray separately. However, this separation offers other advantages, such as the ability to shoot more rays as needed to perform spatial anti-aliasing and improve image quality where needed.
Although it does handle interreflection and optical effects such as refraction accurately, traditional ray tracing is also not necessarily photorealistic. True photorealism occurs when the rendering equation is closely approximated or fully implemented. Implementing the rendering equation gives true photorealism, as the equation describes every physical effect of light flow. However, this is usually infeasible given the computing resources required.
The realism of all rendering methods can be evaluated as an approximation to the equation. Ray tracing, if it is limited to Whitted's algorithm, is not necessarily the most realistic. Methods that trace rays, but include additional techniques (photon mapping, path tracing), give far more accurate simulation of real-world lighting.
It is also possible to approximate the equation using ray casting in a different way than what is traditionally considered to be "ray tracing". For performance, rays can be clustered according to their direction, with rasterization hardware and depth peeling used to efficiently sum the rays.
Reversed direction of traversal of scene by the rays
The process of shooting rays from the eye to the light source to render an image is sometimes called backwards ray tracing, since it is the opposite direction photons actually travel. However, there is confusion with this terminology. Early ray tracing was always done from the eye, and early researchers such as James Arvo used the term backwards ray tracing to mean shooting rays from the lights and gathering the results. Therefore it is clearer to distinguish eye-based versus light-based ray tracing.
While the direct illumination is generally best sampled using eye-based ray tracing, certain indirect effects can benefit from rays generated from the lights. Caustics are bright patterns caused by the focusing of light off a wide reflective region onto a narrow area of (near-)diffuse surface. An algorithm that casts rays directly from lights onto reflective objects, tracing their paths to the eye, will better sample this phenomenon. This integration of eye-based and light-based rays is often expressed as bidirectional path tracing, in which paths are traced from both the eye and lights, and the paths subsequently joined by a connecting ray after some length.
Photon mapping is another method that uses both light-based and eye-based ray tracing; in an initial pass, energetic photons are traced along rays from the light source so as to compute an estimate of radiant flux as a function of 3-dimensional space (the eponymous photon map itself). In a subsequent pass, rays are traced from the eye into the scene to determine the visible surfaces, and the photon map is used to estimate the illumination at the visible surface points. The advantage of photon mapping versus bidirectional path tracing is the ability to achieve significant reuse of photons, reducing computation, at the cost of statistical bias.
An additional problem occurs when light must pass through a very narrow aperture to illuminate the scene (consider a darkened room, with a door slightly ajar leading to a brightly lit room), or a scene in which most points do not have direct line-of-sight to any light source (such as with ceiling-directed light fixtures or torchieres). In such cases, only a very small subset of paths will transport energy; Metropolis light transport is a method which begins with a random search of the path space, and when energetic paths are found, reuses this information by exploring the nearby space of rays.
To the right is an image showing a simple example of a path of rays recursively generated from the camera (or eye) to the light source using the above algorithm. A diffuse surface reflects light in all directions.
First, a ray is created at an eyepoint and traced through a pixel and into the scene, where it hits a diffuse surface. From that surface the algorithm recursively generates a reflection ray, which is traced through the scene, where it hits another diffuse surface. Finally, another reflection ray is generated and traced through the scene, where it hits the light source and is absorbed. The color of the pixel now depends on the colors of the first and second diffuse surface and the color of the light emitted from the light source. For example if the light source emitted white light and the two diffuse surfaces were blue, then the resulting color of the pixel is blue.
As a demonstration of the principles involved in raytracing, let us consider how one would find the intersection between a ray and a sphere. In vector notation, the equation of a sphere with center c and radius r is

||x − c||^2 = r^2.

Any point on a ray starting from point s with direction d (here d is a unit vector) can be written as

x = s + t·d,

where t is the distance between x and s. In our problem, we know c, r, s (e.g. the position of a light source) and d, and we need to find t. Therefore, we substitute for x:

||s + t·d − c||^2 = r^2.

Let v = s − c for simplicity; then

v·v + t^2 (d·d) + 2t (v·d) = r^2.

Knowing that d is a unit vector allows us this minor simplification:

t^2 + 2t (v·d) + (v·v − r^2) = 0.

This quadratic equation has solutions

t = −(v·d) ± sqrt((v·d)^2 − (v·v − r^2)).

The two values of t found by solving this equation are the two such that s + t·d are the points where the ray intersects the sphere.

Any value of t which is negative does not lie on the ray, but rather in the opposite half-line (i.e. the one starting from s with opposite direction).

If the quantity under the square root (the discriminant) is negative, then the ray does not intersect the sphere.

Let us suppose now that there is at least a positive solution, and let t be the minimal one. In addition, let us suppose that the sphere is the nearest object in our scene intersecting our ray, and that it is made of a reflective material. We need to find in which direction the light ray is reflected. The laws of reflection state that the angle of reflection is equal and opposite to the angle of incidence between the incident ray and the normal to the sphere.

The normal to the sphere is simply

n = (y − c) / ||y − c||,

where y = s + t·d is the intersection point found before. The reflection direction can be found by a reflection of d with respect to n, that is

d′ = d − 2 (n·d) n.

Thus the reflected ray has equation

x = y + u·d′, with u > 0.
Now we only need to compute the intersection of the latter ray with our field of view, to get the pixel which our reflected light ray will hit. Lastly, this pixel is set to an appropriate color, taking into account how the color of the original light source and the one of the sphere are combined by the reflection.
This is merely the math behind the line–sphere intersection and the subsequent determination of the colour of the pixel being calculated. There is, of course, far more to the general process of raytracing, but this demonstrates an example of the algorithms used.
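A direct transcription of that algebra into code (our sketch, using the same symbols s, d, c, r as above):

```python
# Solve the quadratic for t; if an intersection exists, return the nearest
# hit point, the surface normal and the mirror-reflected direction.

import math

def ray_sphere(s, d, c, r):
    v = [si - ci for si, ci in zip(s, c)]              # v = s - c
    b = sum(vi * di for vi, di in zip(v, d))           # v . d
    disc = b * b - (sum(vi * vi for vi in v) - r * r)  # discriminant
    if disc < 0:
        return None                                    # ray misses the sphere
    t = -b - math.sqrt(disc)                           # nearest solution
    if t < 0:
        t = -b + math.sqrt(disc)                       # origin may be inside the sphere
    if t < 0:
        return None                                    # both hits lie behind the origin
    y = [si + t * di for si, di in zip(s, d)]          # intersection point
    n = [(yi - ci) / r for yi, ci in zip(y, c)]        # unit normal
    nd = sum(ni * di for ni, di in zip(n, d))
    refl = [di - 2 * nd * ni for di, ni in zip(d, n)]  # mirror reflection
    return t, y, n, refl

# Hits at t = 4, normal (0, 0, -1), reflection pointing straight back.
print(ray_sphere(s=(0, 0, 0), d=(0, 0, 1), c=(0, 0, 5), r=1))
```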
Adaptive depth control
This means that we stop generating reflected/transmitted rays when the computed intensity becomes less than a certain threshold. You must always set a certain maximum depth or else the program would generate an infinite number of rays. But it is not always necessary to go to the maximum depth if the surfaces are not highly reflective. To test for this the ray tracer must compute and keep the product of the global and reflection coefficients as the rays are traced.
Example: let Kr = 0.5 for a set of surfaces. Then from the first surface the maximum contribution is 0.5, for the reflection from the second: 0.5 * 0.5 = 0.25, the third: 0.25 * 0.5 = 0.125, the fourth: 0.125 * 0.5 = 0.0625, the fifth: 0.0625 * 0.5 = 0.03125, etc. In addition we might implement a distance attenuation factor such as 1/D2, which would also decrease the intensity contribution.
For a transmitted ray we could do something similar, but in that case the distance traveled through the object would cause an even faster intensity decrease. As an example of this, Hall & Greenberg found that even for a very reflective scene, using this with a maximum depth of 15 resulted in an average ray tree depth of 1.7.
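In code, the adaptive cutoff amounts to tracking the accumulated reflection coefficient and stopping once it drops below a threshold (a sketch of ours; the 0.05 cutoff and depth limit of 15 are illustrative):

```python
# With a uniform reflection coefficient Kr, the contribution at depth d is
# Kr**d; adaptive depth control stops the recursion once this falls below a
# cutoff instead of always descending to the maximum depth.

def effective_depth(kr, cutoff=0.05, max_depth=15):
    depth, contribution = 0, 1.0
    while depth < max_depth and contribution >= cutoff:
        depth += 1
        contribution *= kr
    return depth

print(effective_depth(0.5))    # 5 bounces: 0.5**5 = 0.03125 < 0.05
print(effective_depth(0.9))    # 15 bounces: still above the cutoff, depth-capped
```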
We enclose groups of objects in sets of hierarchical bounding volumes and first test for intersection with the bounding volume, and then only if there is an intersection, against the objects enclosed by the volume.
Bounding volumes should be easy to test for intersection, for example a sphere or box (slab). The best bounding volume will be determined by the shape of the underlying object or objects. For example, if the objects are long and thin then a sphere will enclose mainly empty space and a box is much better. Boxes are also easier for hierarchical bounding volumes.
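For example, the classic slab test for an axis-aligned box intersects the ray's parameter intervals between each pair of parallel planes (a sketch of ours):

```python
# A ray hits an axis-aligned box exactly when the parameter intervals in
# which it lies between each pair of parallel planes (slabs) overlap.

def ray_box(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:
                return False          # parallel to the slab and outside it
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far and t_far >= 0.0

print(ray_box((0, 0, 0), (1, 1, 0), (2, 1, -1), (4, 5, 1)))   # True
print(ray_box((0, 0, 0), (1, 0, 0), (2, 1, -1), (4, 5, 1)))   # False
```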
Note that using a hierarchical system like this (assuming it is done carefully) changes the intersection computational time from a linear dependence on the number of objects to something between linear and a logarithmic dependence. This is because, for a perfect case, each intersection test would divide the possibilities by two, and we would have a binary tree type structure. Spatial subdivision methods, discussed below, try to achieve this.
Kay & Kajiya give a list of desired properties for hierarchical bounding volumes:
- Subtrees should contain objects that are near each other and the further down the tree the closer should be the objects.
- The volume of each node should be minimal.
- The sum of the volumes of all bounding volumes should be minimal.
- Greater attention should be placed on the nodes near the root since pruning a branch near the root will remove more potential objects than one farther down the tree.
- The time spent constructing the hierarchy should be much less than the time saved by using it.
In real time
The first implementation of a "real-time" ray-tracer was credited at the 2005 SIGGRAPH computer graphics conference as the REMRT/RT tools developed in 1986 by Mike Muuss for the BRL-CAD solid modeling system. Initially published in 1987 at USENIX, the BRL-CAD ray-tracer is the first known implementation of a parallel network distributed ray-tracing system that achieved several frames per second in rendering performance. This performance was attained by means of the highly optimized yet platform independent LIBRT ray-tracing engine in BRL-CAD and by using solid implicit CSG geometry on several shared memory parallel machines over a commodity network. BRL-CAD's ray-tracer, including REMRT/RT tools, continue to be available and developed today as Open source software.
Since then, there have been considerable efforts and research towards implementing ray tracing in real time speeds for a variety of purposes on stand-alone desktop configurations. These purposes include interactive 3D graphics applications such as demoscene productions, computer and video games, and image rendering. Some real-time software 3D engines based on ray tracing have been developed by hobbyist demo programmers since the late 1990s.
The OpenRT project includes a highly optimized software core for ray tracing along with an OpenGL-like API in order to offer an alternative to the current rasterisation based approach for interactive 3D graphics. Ray tracing hardware, such as the experimental Ray Processing Unit developed at the Saarland University, has been designed to accelerate some of the computationally intensive operations of ray tracing. On March 16, 2007, the University of Saarland revealed an implementation of a high-performance ray tracing engine that allowed computer games to be rendered via ray tracing without intensive resource usage.
On June 12, 2008 Intel demonstrated a special version of Enemy Territory: Quake Wars, titled Quake Wars: Ray Traced, using ray tracing for rendering, running in basic HD (720p) resolution. ETQW operated at 14-29 frames per second. The demonstration ran on a 16-core (4 socket, 4 core) Xeon Tigerton system running at 2.93 GHz.
At SIGGRAPH 2009, Nvidia announced OptiX, a free API for real-time ray tracing on Nvidia GPUs. The API exposes seven programmable entry points within the ray tracing pipeline, allowing for custom cameras, ray-primitive intersections, shaders, shadowing, etc. This flexibility enables bidirectional path tracing, Metropolis light transport, and many other rendering algorithms that cannot be implemented with tail recursion. Nvidia has shipped over 350,000,000 OptiX capable GPUs as of April 2013. OptiX-based renderers are used in Adobe AfterEffects, Bunkspeed Shot, Autodesk Maya, 3ds max, and many other renderers.
Imagination Technologies offers a free API called OpenRL which accelerates tail recursive ray tracing-based rendering algorithms and, together with their proprietary ray tracing hardware, works with Autodesk Maya to provide what 3D World calls "real-time raytracing to the everyday artist".
- Beam tracing
- Cone tracing
- Distributed ray tracing
- Global illumination
- List of ray tracing software
- Parallel computing
- Specular reflection
- Appel A. (1968) Some techniques for shading machine rendering of solids. AFIPS Conference Proc. 32 pp.37-45
- Whitted T. (1979) An improved illumination model for shaded display. Proceedings of the 6th annual conference on Computer graphics and interactive techniques
- Tomas Nikodym (June 2010). "Ray Tracing Algorithm For Interactive Applications". Czech Technical University, FEE.
- A. Chalmers, T. Davis, and E. Reinhard. Practical parallel rendering, ISBN 1-56881-179-9. AK Peters, Ltd., 2002.
- GPU Gems 2, Chapter 38. High-Quality Global Illumination Rendering Using Rasterization, Addison-Wesley
- Eric P. Lafortune and Yves D. Willems (December 1993). "Bi-Directional Path Tracing". Proceedings of Compugraphics '93: 145–153.
- Péter Dornbach. "Implementation of bidirectional ray tracing algorithm". Retrieved 2008-06-11.
- Global Illumination using Photon Maps
- Photon Mapping - Zack Waters
- See Proceedings of 4th Computer Graphics Workshop, Cambridge, MA, USA, October 1987. Usenix Association, 1987. pp 86–98.
- "About BRL-CAD". Retrieved 2009-07-28.
- Piero Foscari. "The Realtime Raytracing Realm". ACM Transactions on Graphics. Retrieved 2007-09-17.
- Mark Ward (March 16, 2007). "Rays light up life-like graphics". BBC News. Retrieved 2007-09-17.
- Theo Valich (June 12, 2008). "Intel converts ET: Quake Wars to ray tracing". TG Daily. Retrieved 2008-06-16.
- Nvidia (October 18, 2009). "Nvidia OptiX". Nvidia. Retrieved 2009-11-06.
- "3DWorld: Hardware review: Caustic Series2 R2500 ray-tracing accelerator card". Retrieved 2013-04-23.3D World, April 2013
- What is ray tracing ?
- Ray Tracing and Gaming - Quake 4: Ray Traced Project
- Ray tracing and Gaming - One Year Later
- Interactive Ray Tracing: The replacement of rasterization?
- A series of tutorials on implementing a raytracer using C++
- Tutorial on implementing a raytracer in PHP
- The Compleat Angler (1978)
A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.
A decision tree is a flow-chart-like structure in which each internal node represents a test on an attribute, each branch represents an outcome of the test, and each leaf node represents a class label (the decision taken after computing all attributes). A path from root to leaf represents a classification rule.
In decision analysis a decision tree and the closely related influence diagram are used as visual and analytical decision support tools, where the expected values (or expected utility) of competing alternatives are calculated.
A decision tree consists of 3 types of nodes:
- Decision nodes - commonly represented by squares
- Chance nodes - represented by circles
- End nodes - represented by triangles
Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal. If in practice decisions have to be taken online with no recall under incomplete knowledge, a decision tree should be paralleled by a probability model as a best choice model or online selection model algorithm. Another use of decision trees is as a descriptive means for calculating conditional probabilities.
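The expected-value calculation behind this comparison of alternatives is simple enough to sketch in code. The following C++ fragment is only an illustration and is not from the original article; the alternatives, probabilities, and payoffs are invented. Each chance node is modelled as a list of (probability, payoff) outcomes, and a decision node simply picks the alternative with the highest expected value.

    #include <iostream>
    #include <vector>

    // Illustrative model: a chance node is a set of (probability, payoff)
    // outcomes; its expected value is the probability-weighted sum of payoffs.
    struct Outcome { double probability; double payoff; };

    double expectedValue(const std::vector<Outcome>& chanceNode) {
        double ev = 0.0;
        for (const Outcome& o : chanceNode) ev += o.probability * o.payoff;
        return ev;
    }

    int main() {
        // Two made-up alternatives, each modelled as a chance node.
        std::vector<Outcome> planA = { {0.4, 100000.0}, {0.6, -20000.0} };
        std::vector<Outcome> planB = { {0.9,  30000.0}, {0.1,  -5000.0} };

        double evA = expectedValue(planA);
        double evB = expectedValue(planB);
        std::cout << "EV(A) = " << evA << ", EV(B) = " << evB << '\n';
        std::cout << "Choose " << (evA > evB ? "A" : "B") << '\n';   // the decision node
        return 0;
    }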
Decision trees, influence diagrams, utility functions, and other decision analysis tools and methods are taught to undergraduate students in schools of business, health economics, and public health, and are examples of operations research or management science methods.
Decision tree building blocks
Decision tree elements
Drawn from left to right, a decision tree has only burst nodes (splitting paths) but no sink nodes (converging paths). Therefore, used manually, they can grow very big and are then often hard to draw fully by hand. Traditionally, decision trees have been created manually - as the aside example shows - although increasingly, specialized software is employed.
Decision tree using flow chart symbols
Commonly a decision tree is drawn using flow chart symbols as it is easier for many to read and understand.
The basic interpretation in this situation is that the company prefers B's risk and payoffs under realistic risk preference coefficients (greater than $400K—in that range of risk aversion, the company would need to model a third strategy, "Neither A nor B").
Decision trees can be used to optimize an investment portfolio. The following example shows a portfolio of 7 investment options (projects). The organization has $10,000,000 available for the total investment. Bold lines mark the best selection: projects 1, 3, 5, 6, and 7, which will cost $9,750,000 and create a payoff of $16,175,000. All other combinations would either exceed the budget or yield a lower payoff.
A decision tree can be represented more compactly as an influence diagram, focusing attention on the issues and relationships between events.
The squares represent decisions, the ovals represent action, and the diamond represents results.
Advantages and disadvantages
Amongst decision support tools, decision trees (and influence diagrams) have several advantages. Decision trees:
- Are simple to understand and interpret. People are able to understand decision tree models after a brief explanation.
- Have value even with little hard data. Important insights can be generated based on experts describing a situation (its alternatives, probabilities, and costs) and their preferences for outcomes.
- Possible scenarios can be added
- Worst, best and expected values can be determined for different scenarios
- Use a white box model: if a given result is provided by the model, the explanation for the result is easy to reproduce by following the tree's explicit branches.
- Can be combined with other decision techniques. The following example uses Net Present Value calculations, PERT 3-point estimations (decision #1) and a linear distribution of expected outcomes (decision #2):
Disadvantages of decision trees:
- For data including categorical variables with different numbers of levels, information gain in decision trees is biased in favor of those attributes with more levels.
- Calculations can get very complex particularly if many values are uncertain and/or if many outcomes are linked.
- Y. Yuan and M.J. Shaw, Induction of fuzzy decision trees. Fuzzy Sets and Systems 69 (1995), pp. 125–139
- Deng,H.; Runger, G.; Tuv, E. (2011). "Bias of importance measures for multi-valued attributes and solutions". Proceedings of the 21st International Conference on Artificial Neural Networks (ICANN).
- Cha, Sung-Hyuk; Tappert, Charles C (2009). "A Genetic Algorithm for Constructing Compact Binary Decision Trees". Journal of Pattern Recognition Research 4 (1): 1–13.
15 | Science Fair Project Encyclopedia
In computer science and mathematics, a sorting algorithm is an algorithm that puts elements of a list in a certain order. The most used orders are numerical order and lexicographical order. Efficient sorting is important to optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output.
Sorting algorithms used in computer science are often classified by:
- computational complexity (worst, average and best behaviour) in terms of the size of the list (n). Typically, good behaviour is O(n log n) and bad behaviour is Ω(n²). Ideal behaviour for a sort is O(n). Sort algorithms which only use an abstract key comparison operation always need at least Ω(n log n) comparisons on average;
- memory usage (and use of other computer resources)
- stability: stable sorting algorithms maintain the relative order of records with equal keys. That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, stability is not an issue. However, assume that the following pairs of numbers are to be sorted by their first coordinate:
(4, 1) (3, 1) (3, 7) (5, 6)
In this case, two different results are possible, one which maintains the relative order of records with equal keys, and one which does not:
(3, 1) (3, 7) (4, 1) (5, 6) (order maintained)
(3, 7) (3, 1) (4, 1) (5, 6) (order changed)
Unstable sorting algorithms may change the relative order of records with equal keys, stable sorting algorithms never do so. Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using the order of the entries in the original data order as a tie-breaker. Remembering this order, however, often involves an additional space penalty.
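The "artificially extend the key comparison" idea can be sketched in C++. This fragment is illustrative only and is not part of the original article: std::sort makes no stability guarantee, but using the original position as a tie-breaker forces records with equal keys to keep their input order (the standard library also provides std::stable_sort, which gives the same effect directly).

    #include <algorithm>
    #include <cstddef>
    #include <iostream>
    #include <utility>
    #include <vector>

    int main() {
        // The pairs from the text, to be sorted by their first coordinate only.
        std::vector<std::pair<int,int>> v = { {4,1}, {3,1}, {3,7}, {5,6} };

        // Sort an index array; equal keys fall back to the original position.
        std::vector<std::size_t> index(v.size());
        for (std::size_t i = 0; i < index.size(); ++i) index[i] = i;

        std::sort(index.begin(), index.end(), [&](std::size_t a, std::size_t b) {
            if (v[a].first != v[b].first) return v[a].first < v[b].first;
            return a < b;                    // tie-breaker preserves input order
        });

        for (std::size_t i : index)
            std::cout << "(" << v[i].first << ", " << v[i].second << ") ";
        std::cout << '\n';                   // (3, 1) (3, 7) (4, 1) (5, 6)
        return 0;
    }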
List of sorting algorithms
In this table, n is the number of records to be sorted, k is the number of distinct keys, and u is the number of unique records.
- Bubble sort — O(n²)
- Cocktail sort (bidirectional bubble sort) — O(n²)
- Insertion sort — O(n²)
- Bucket sort — O(n); requires O(k) extra memory
- Counting sort — O(n+k); requires O(n+k) extra memory
- Merge sort — O(n log n); requires O(n) extra memory
- In-place merge sort — O(n²)
- Binary tree sort — O(n log n); requires O(n) extra memory
- Pigeonhole sort — O(n+k); requires O(k) extra memory
- Radix sort — O(n·k); requires O(n) extra memory
- Gnome sort — O(n²)
- Selection sort — O(n²)
- Shell sort — O(n log n) if best current version used
- Comb sort — O(n log n)
- Heapsort — O(n log n)
- Smoothsort — O(n log n)
- Quicksort — O(n log n) expected time, O(n²) worst case
- Introsort — O(n log n)
- Patience sorting — O(n log log n + k) worst case time, requires additional O(n + k) space, also finds the longest increasing subsequences
Impractical sort algorithms
- Bogosort — O(n × n!) expected time, unbounded worst case.
- Stupid sort — O(n³); recursive version requires O(n²) extra memory
- Bead Sort — O(n) or O(√n), but requires specialized hardware
- Pancake sorting — O(n), but requires specialized hardware
Summaries of the popular sorting algorithms
Bubble sort is the most straightforward and simplistic method of sorting data that could actually be considered for real-world use. The algorithm starts at the beginning of the data set. It compares the first two elements, and if the first is greater than the second, it swaps them; it does this for each pair of adjacent elements until the end of the data set is reached, then starts over from the beginning, repeating until no swaps have occurred on the last pass. This algorithm, however, is vastly inefficient, and is rarely used except in education (i.e., beginning programming classes). A slightly better variant is generally called cocktail sort, and works by inverting the ordering criteria and the pass direction on alternating passes.
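As a rough illustration (this code is not part of the original article), the algorithm just described might look like this in C++:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Bubble sort: keep sweeping the data, swapping adjacent out-of-order
    // elements, until a complete pass makes no swaps.
    void bubbleSort(std::vector<int>& a) {
        bool swapped = true;
        while (swapped) {
            swapped = false;
            for (std::size_t i = 1; i < a.size(); ++i) {
                if (a[i - 1] > a[i]) {
                    std::swap(a[i - 1], a[i]);
                    swapped = true;
                }
            }
        }
    }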
Insertion sort is similar to bubble sort, but is more efficient as it reduces element comparisons somewhat with each pass. An element is compared to all the prior elements until a lesser element is found. In other words, if an element contains a value less than all the previous elements, it compares the element to all the previous elements before going on to the next comparison. Although this algorithm is more efficient than bubble sort, it is still inefficient compared to many other sorting algorithms since it, like bubble sort, moves elements only one position at a time. However, insertion sort is a good choice for small lists (about 30 elements or fewer), and for nearly-sorted lists. These observations can be combined to create a variant of insertion sort which works efficiently for larger lists. This variant is called shell sort (see below).
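A minimal C++ sketch of the idea (again, illustrative only):

    #include <cstddef>
    #include <vector>

    // Insertion sort: each new element is moved left past the larger elements
    // already sorted, exactly like sorting a hand of cards.
    void insertionSort(std::vector<int>& a) {
        for (std::size_t i = 1; i < a.size(); ++i) {
            int key = a[i];
            std::size_t j = i;
            while (j > 0 && a[j - 1] > key) {   // shift larger elements right
                a[j] = a[j - 1];
                --j;
            }
            a[j] = key;
        }
    }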
Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out of order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array (in reality, the array is an appropriately indexed one dimensional array) and then sorting the columns of the array using the Insertion sort method. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with less than 1000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
See in-place algorithm for a list of sorting algorithms that can be written to work in-place.
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e. 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on; until at last two lists are merged into the final sorted list.
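A top-down C++ sketch of this idea is shown below; it is illustrative only, and uses std::merge together with O(n) temporary storage for the merge step. A call such as mergeSort(v, 0, v.size()) sorts the whole vector.

    #include <algorithm>
    #include <cstddef>
    #include <iterator>
    #include <vector>

    // Merge sort: split the range in half, sort each half recursively,
    // then merge the two sorted halves.
    void mergeSort(std::vector<int>& a, std::size_t lo, std::size_t hi) {
        if (hi - lo < 2) return;                  // 0 or 1 element: already sorted
        std::size_t mid = lo + (hi - lo) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);

        std::vector<int> merged;
        merged.reserve(hi - lo);
        std::merge(a.begin() + lo, a.begin() + mid,
                   a.begin() + mid, a.begin() + hi,
                   std::back_inserter(merged));
        std::copy(merged.begin(), merged.end(), a.begin() + lo);
    }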
Heapsort is a member of the family of selection sorts. This family of algorithms works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list. Straight selection sort runs in O(n²) time, but Heapsort accomplishes its task efficiently by using a data structure called a heap, which is a binary tree where each parent is larger than either of its children. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. It is removed and placed at the end of the list, then the remaining list is "heapified" again.
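For illustration, the standard library's heap operations make the plan very short to express in C++ (this is not the hand-rolled heap described above, but it follows the same steps: build a max-heap, then repeatedly move the largest remaining element to the end):

    #include <algorithm>
    #include <vector>

    // Heapsort: make_heap builds a max-heap; each pop_heap moves the current
    // maximum to the end of the still-unsorted region and re-heapifies the rest.
    void heapSort(std::vector<int>& a) {
        std::make_heap(a.begin(), a.end());
        for (auto end = a.end(); end != a.begin(); --end)
            std::pop_heap(a.begin(), end);
    }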
Some radix sort algorithms are counterintuitive, but they can be surprisingly efficient. If we take the list to be sorted as a list of binary strings, we can sort them on the least significant bit, preserving their relative order. This "bitwise" sort must be stable, otherwise the algorithm will not work. Then we sort them on the next bit, and so on from right to left, and the list will end up sorted. This is most efficient on a binary computer (which nearly all computers are). If we had a ternary computer, we would regard the list as a list of base 3 strings and proceed in the same way. Most often, the bucket sort algorithm is used to accomplish the bitwise sorting.
Radix sort can also be accomplished from left to right, but this makes the algorithm recursive. On a binary (radix 2) computer, we would have to sort on the leftmost bit, and then sort the sublist with 0 as the leftmost bit, and then sort the sublist with 1 as the leftmost bit, and so on.
Microsoft's "Quick" programming languages (such as QuickBASIC and QuickPascal ) have a file named "sortdemo" (with extension BAS and PAS for QB and QP, respectively) in the examples folder that provides a graphical representation of several of the various sort procedures described here, as well as performance ratings of each.
Also, a program by Robb Cutler called The Sorter for classic Mac OS performs a similar function. It illustrates Quick Sort, Merge Sort, Heap Sort, Shell Sort, Insertion Sort, Bubble Sort, Shaker Sort, Bin Sort, and Selection Sort.
External links and references
- D. E. Knuth, The Art of Computer Programming, Volume 3: Sorting and Searching.
- Introduction to Algorithms, Second Edition, by Cormen, Leiserson, Rivest, and Stein.
- Ricardo Baeza-Yates' sorting algorithms on the Web
- 'Dictionary of Algorithms, Data Structures, and Problems'
- For some slides and PDFs see Manchester university's course notes
- For a repository of algorithms with source code and lectures, see The Stony Brook Algorithm Repository
- Graphical Java illustrations of the Bubble sort, Insertion sort, Quicksort, and Selection sort
- xSortLab - An interactive Java demonstration of Bubble, Insertion, Quick, Select and Merge sorts, which displays the data as a bar graph with commentary on the workings of the algorithm printed below the graph.
- Sorting Algorithms Demo - Java applets that chart the progress of several common sorting algorithms while sorting an array of data using in-place algorithms.
- An applet visually demonstrating a contest between a number of different sorting algorithms
- sortchk - a sort algorithm test suite released under the terms of the BSD License (original)
The contents of this article is licensed from www.wikipedia.org under the GNU Free Documentation License. Click here to see the transparent copy and copyright details | http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Sort_algorithm | 13 |
92 | An algorithm is a clearly described procedure that tells you how to
solve a well defined problem.
- Your knowledge will be tested in CSci 202, and all dependent CSci classes.
- Solving problems is a bankable skill and algorithms help you solve problems.
- Job interviews test your knowledge of algorithms.
- You can do things better if you know a better algorithm.
- Inventing and analyzing algorithms is a pleasant hobby.
- Publishing a novel algorithm can make you famous. Several Computer Scientists
started their career with a new algorithm.
A clearly described procedure is made of simple, clearly defined steps. It often includes
making decisions and repeating earlier steps. The procedure can be complex, but the steps
must be simple and unambiguous. An algorithm describes how to solve a
problem in steps that a child could follow without understanding the purpose of the procedure.
Algorithms solve problems. Without a well defined problem an algorithm is not much use.
For example, if you are given a sock and the problem is to find a matching sock in a pile of similar socks,
then an algorithm is to
take every sock in turn and see if it matches the given sock, and if so stop. This
algorithm is called a linear search. If the problem is that you are given a pile of
socks and you have to sort them all into pairs then a different algorithm is needed.
Exercise: get two packs of cards and shuffle each one. Take one card from the first pack
and search through the other pack to find the matching card. Repeat this until
you get a feel for how it performs.
If the problem is putting 14 socks away in matching pairs then one
algorithm is to
- clear a place to layout the unmatched socks,
- for each sock in turn,
- compare it to each unmatched sock and if they match put
them both away as a pair. But, if the new sock matches none of the unmatched
socks, add it to the unmatched socks.
I call this the "SockSort" algorithm.
Exercise. Get a pack of cards and extract the Ace, 2, 3, ..., 7 of the hearts and
clubs. Shuffle them together. Now run the SockSort algorithm to pair them
up by rank: Ace-Ace, 2-2, ... .
If you had an array of 7 numbered boxes you could sort the cards faster than
sorting the socks. Perhaps I should put numbered tags on my socks?
Note: some times a real problem is best avoided rather than solved.
So, changing what we are given, changes the problem, and so changes the
best algorithm to solve it.
Here is another example. Problem: to find a person's name in a telephone
directory. Algorithm: use your finger to
split the phone book in half. Pick the half that
contains the name. Repeatedly, split the piece of the phone book
that contains the name in half, ... This is a binary search algorithm.
Here is another example problem using the same data as the previous one: Find my
neighbor's phone number. Algorithm: look up my address using binary search,
then read the whole directory looking for the address that is closest to mine.
This is a linear search.
If we change the given data, the problem of my neighbor's phone number has a
much faster algorithm. All we need is copy of the census listing of people by
street and it is easy to find my neighbor's name and so phone number.
A problem is best expressed in terms of:
- Givens: What is there before the algorithm?
- Goals: What is needed?
- Operations: What operations are permitted?
To describe an algorithm, we either use a structured form of English with numbered steps or a
pseudocode that is based on a programming language.
The word "algorithm" is a corruption of the name of a mathematician born in Khwarizm in
Uzbekistan in the 820s (CE). Al-Khwarizmi (as he was known) was one of the
first inducted into the "House of Wisdom" in Baghdad where he worked on algebra,
arithmetic, astronomy, and solving equations. His work had a strongly
computational flavor. Later, in his honor, medieval mathematicians in Europe
called similar methods "algorismic" and then, later, "algorithmic".
[pages 113-119, "The Rainbow of Mathematics", Ivor Grattan-Guiness, W W Norton
& Co, NY, 1998]
If you are using C++ and understand the ideas used in the C++ <algorithm>
library you can save a lot of time. Otherwise you will have to reinvent the
wheel. As a rule, the standard algorithm will be faster than anything you could
quickly code to do the same job. However, you still need to understand the
theory of algorithms and data structures to be able to use them well.
- Know the definition of an algorithm above.
- Know how to express a problem.
- Recognize well known problems.
- Recognize well known algorithms.
- Match algorithms to problems.
- Describe algorithms informally and in structured English/pseudocode
- Walk through an algorithm, step by step, for a particular problem by hand.
- Code an algorithm expressed in structured English.
- Know when to use an algorithm and how algorithms relate to objects and classes.
Bjarne Stroustrup, the developer of C++, has written a very comprehensive and deep introduction to
the standard C++ library as part of his classic C++ book. This book is in the CSUSB library.
As a general rule: a practical programmer starts searching the library
of their language for known algorithms before "reinventing the wheel".
No... unless the algorithm is simple or the author very good at writing.
It helps if you know many algorithms. The same ideas turn up in many different algorithms.
I find that a deck of playing cards is very helpful for sorting and searching algorithms.
A pencil and paper is useful for doing a dry run of an algorithm. With a group,
chalk and chalk board is very helpful.
Programming a computer just to test an algorithm tends to waste time unless you use the program
to gather statistics on the performance of the algorithm.
When there are loops it is well worth looking for things that the body
of the loop does not change. They are called invariants. If an invariant
is true before the loop, then it will be true afterward as well. You can
often figure out precisely what an algorithm does by noting what it does not change.
Probably the simplest algorithm worth knowing solves the problem of swapping the
values of two locations or variables. Here you are given two variables or locations
p and q that hold the same type of data, and you need to swap them. This looks trivial
but in fact we need to use an extra temporary variable t:
Algorithm to Swap p and q:
- SET t = p
- SET p = q
- SET q = t
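In C++ the three steps translate directly; the sketch below is for int values (the standard library's std::swap does the same job generically):

    // Direct translation of the swap algorithm above.
    void swapInts(int& p, int& q) {
        int t = p;   // SET t = p
        p = q;       // SET p = q
        q = t;       // SET q = t
    }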
The two classic problems are called searching and sorting. In searching, we are
given a collection of objects and need to find one that passes some test. For
example: finding the student whose student Id number ends "1234". In sorting,
we are given a collection and need to reorganize it according to some rule. An
example: placing a grade roster in order of the last four digits of student Id numbers.
Finding the root of an equation: is this a search or a sort?
You can't find things in your library so you decide to place the books in a
particular order: is this a search or a sort?
Optimization problems are another class of problems that have attracted a lot of
attention. Here we have a large number of possibilities and want to pick the
best. For example: what dress is the most attractive in a given price range?
What is the shortest route home from the campus? What is the smallest amount of
work needed to get a B in this class?
- (linear): a linear algorithm does something, once, to every object in turn in a
collection. Examples: looking for words in the dictionary that fit into a
crossword. Adding up all the numbers in an array.
- (divide_and_conquer): the algorithms divide the problem's data into pieces and
work on the pieces. For example conquering Gaul by dividing it into three
pieces and then beating rebellious pieces into submission. Another example is
the Stroud-Warnock algorithm for displaying 3D scenes: the view is divided into four
quadrants, if a quadrant is simple, it is displayed by a simple process, but if
complex, the same Stroud-Warnock algorithm is reapplied to it. Divide-and-conquer algorithms
work best when there is a fast way to divide the problem into sub-problems that
are of about the same difficulty. Typically we aim to divide the data into
equal sized pieces. The closer that we get to this ideal the better the algorithm.
As an example, merge-sort splits an array into nearly-equal halves, sorts each of them and then
merges the two into a single one. On the other hand, Tony Hoare's Quicksort
and Treesort algorithms make a rough split into two parts that can be sorted and rapidly
joined together. On average each split is into equal halves and the algorithm
performs well. But in the worst case, QuickSort splits the data into a single
element vs all the rest, and so performs slowly. So, divide_and_conquer
algorithms are faster with precise
divisions, but can perform very badly on some data if you cannot guarantee a 50-50 split.
- (binary): These are a special divide_and_conquer
algorithm where we divide the data into two equal halves.
The classic is binary search for hunting lions: divide the
area into two and pick a piece that contains a lion.... repeat. This leads to
an elegant way to find roots of equations.
ALGORITHM to find the integer low that is just below the square root
of an integer n (√n).
Note: this algorithm needs checking out! (A C++ version, with a small correction, is sketched after the exercise at the end of this list of algorithm classes.)
- SET low = 0 and high = n, (now low<= √ n < high).
- SET mid = (low + high )/ 2, (integer division)
- IF mid * mid > n THEN SET high = mid
- ELSE SET low = mid.
- END IF (again low<=√ n < high)
- IF low < high -1 THEN REPEAT from step 2 above.
- (Greedy algorithms): try to solve problems by selecting the best piece first and
then working on the other pieces later. For example, to pack a knapsack, try
putting in the biggest objects first and add the smaller one later. Or to find
the shortest path through a maze, always take the shortest next step that you
can see. Greedy algorithms don't always produce optimal solutions, but often
give acceptable approximations to the optimal solutions.
- (Iterative algorithms): start with a value and repeatedly change it in the
direction of the solution. We get a series of approximations to the answer. The
algorithm stops when two successive values get close enough. For example:
algorithm for calculating approximate square roots.
ALGORITHM Given a positive number a and error epsilon calculate the square root of a:
- SET oldv=a
- SET newv=(1+a)/2
- WHILE | oldv - newv | > epsilon
- SET oldv =newv
- SET newv =(a+oldv * oldv)/(2*oldv)
- END WHILE
END ALGORITHM (newv is now within epsilon of the square root of a)
Exercise. Code & test the above.
NO! Take CSCI546 to see what problems are unsolvable and why this is so.
As a quick example: no algorithm can exist for finding the bugs in any given program.
Optimization problems often prove difficult to solve: consider finding the
highest point in Seattle without a map in dense fog...
First they were expressed in natural language: Arabic, Latin, English, etc.
In the 1950s we used flow charts. These where reborn as Activity Diagrams in
the Unified Modeling Language in the 2000s.
Boehm and Jacopini proved in the 1960's that all algorithms can be constructed
using three structures
From 1964 onward we used "Structured English" or Pseudo-code. Structured
English is English
with "Algol 60" structures. "Algol" is the name of the "Algorithmic Language"
of that decade. I have a page
[ algorithms.html ]
of algorithms written in a C++based Pseudo-code.
- Sequence -- one step after another
- Selection -- if-then-else, switch-case, ...
- Iteration -- while, do-while, ...
Here is a sample of structured English:
clear space for unmatched socks
FOR each sock in the pile,
    FOR each unmatched sock UNTIL end or a match is found
        IF the unmatched sock matches the sock in your hand THEN
            form a pair and put it in the sock drawer
    IF sock in hand is unmatched THEN
        put it down with the unmatched socks
Algorithms often appear inside the methods in a class. However some
algorithms are best expressed in terms of interacting objects. So,
a method may be defined in terms of a traditional algorithm or as
a system of collaborating objects.
You need an algorithm any time there is no simple sequence of steps
to code the solution to a problem inside a function.
It is wise to either write out an algorithm or use the UML to sketch out
the messages passing between the objects.
Algorithms can be encapsulated inside objects. If you
create an inheritance hierarchy, you can organize a set of
objects each knowing a different algorithm (method). You can
then let the program choose its algorithm at run time.
Algorithms are used in all upper division computer science classes.
I write my algorithm, in my program, as a sequence of comments.
I then add code that carries out each step -- under the comment
that describes the step.
First there is no algorithm for writing algorithms! So here are some hints.
- The more algorithms you know the easier it is to pick one that fits, and the
more ideas you have to invent a new one.
Take CSci classes and read books.
- Look on the WWW or in the Library
- Try doing several simple cases by hand, and observing what you do.
- Work out a 90% correct solution, and add IF-THEN-ELSEs to fix up the special cases.
- Go to see a CSCI faculty member: this is free when you are a student.
Once in the real world you have to hire us as consultants.
- Often a good algorithm may need a special device or data structure to work.
For example, in my office I have multi-pocket folder with slots labeled with
dates. I put all the papers for a day in its slot, and when the day comes I
have all the paperwork to hand. CSCI330 teaches a lot of useful data
structures and the C++ library has a dozen or so.
- Think! This is hard work but rewarding when you succeed.
First, try doing it by hand. Second, discuss it with a colleague.
Third, try coding and running it in a program.
Fourth, go back and prove that your algorithm must work.
You should use known algorithms whenever you can and always state where they came from.
Put this citation as a comment in your code. First this is honest. Second this
makes it easier to understand what the code is all about.
Check out the text books in the Data Structures and Algorithms classes
in the upper division of our degree programs.
John Mongan and Noah Suojanen
have written "Programming Interviews exposed:
Secrets to landing your next job" (Wiley 2000, ISBN 0-471-38356-2). Most
chapters are about problem solving and the well known algorithms involved.
's multi-volume "Art of Computer Programming" founded the study of
algorithms and how to code and analyze them. It remains an incredible resource
for computer professionals. All three volumes are in our library.
's two books of "Programming Pearls" are lighter than Knuth's work
but have lots of good examples and advice for programmers and good discussion of
algorithms. They are in the library.
G H Gonnet and R. Baeza-Yates
wrote a very comprehensive "Handbook of
Algorithms and Data Structures in Pascal and C" (Addison-Wesley 1991)
which is still my favorite resource for detailed analysis of known
algorithms. There is a copy in my office.... along with other books on algorithms.
The Association for Computing Machinery (ACM)
started publishing algorithms in a special
supplement (CALGO) in the 1960s. These are algorithms involving numbers. So
they tend to be written in FORTRAN.
Yes -- lots. The Wikipedia, alone, has dozens of articles on
particular algorithms, on the theory of algorithms, and on classes of algorithms.
Step by step you translate each line of the algorithm into your target language.
Ideally each line in the algorithm becomes two or three lines of code. I like
to leave the steps in my algorithm visible in my code as comments.
Do this in pencil or in an editor. You will need to make changes in it!
You know that movies are rated by using stars or "thumbs-up". Rating an
algorithm is more complex, more scientific, and more mathematical.
In fact, we label algorithms with a formula, like this, for example:
The linear search algorithm is order big_O of n
or in short
Linear search is O(n)
This means: for very large values of n the search can not take more than
a linear time (plus some small ignored terms) to run.
On the other hand we can show:
A Binary search is O(log n)
The above formulas tell us that if we make n large enough then binary
search will be faster than linear search.
As another example, a linear search is in O(n) and the Sock-Sort (above)
is in O(n squared). This means that the linear search is better than
the Sock-Sort. But in what way? This takes a little explanation.
Originally (in the 40's through to the mid-60's)
we worked out the average number of steps to solve a problem. This
tended to be correlated with the time. However we found (Circa 1970) two
problems with this measure.
First, it is often hard to calculate the averages. Knuth spends pages deriving the
average performance of Euclid's algorithm -- which is only 5 lines long!
Second, the average depends on how the data is
distributed. For example, a linear search is fast if we are likely to find the
target at the start of the search rather than at the end. Worse,
certain sorting algorithms work quickly on average but sometimes are very slow. For
example, the beginner's Bubble Sort is fast when most of the data
is in sequence. But Quick Sort can be horribly slow on the same data.... but
for other re-orderings of the same data quick sort is much better than
bubble sort. So before you can work out an average you have to know
the frequencies with which different data appear. Typically we
don't know this distribution.
These days a computer scientist judges an algorithm by its worst case behavior.
We are pessimistic, with good reason. Many of us have implemented an algorithm
with a good average performance on random data and had users complain because
their data is not random but in the worst case possible. This happened, for
example, to Jim Bentley of AT&T [Programming Pearls above]. In other words we choose
the algorithm that gives the best guarantee of speed, not the best average
performance. It is also easier to figure out the worst case
performance than the average -- the math is easier.
The second complication for comparing algorithms is how much data to consider.
Nearly all algorithms have different times depending on the amount of data.
Typically the performance on small amounts of data is more erratic than for large
amounts of data. Further, small amounts of data don't make a big delays
that the users will notice. So, it is best to consider large amounts of data.
Luckily, mathematicians have a tool kit for working with larger numbers.
It is called asymptotic analysis.
This is the calculus of what functions look like for large
values of their arguments. It simplifies the calculations
because we only need to look at the higher order terms in the formula.
Finally, to get a standard "measure" of an algorithm, independent
of its hardware, we need to eliminate the speed of the processor from our comparison.
We do this by removing all constant factors out of our formula to
give the algorithm its rating.
We talk about the order of an algorithm and use a big letter O to symbolize it.
To summarize: To work out the order of an algorithm, calculate
- the number of simple steps
- in the worst case
- as a formula including the amount of data
- for only large amounts of data
- including only the most important term
- and ignoring constants
For example, when I do a sock-search, with n socks, in the worst case I would
have to layout n/2 unmatched socks before I found my first match, and finding
the match I would have to look at all n/2 socks. Then I'd have to look
at the remaining n/2 -1 socks to match the next one, and then n/2-2, ...
So the total number of "looks" would be
1 + 2 + 3 + ... + n/2
= (1 + n/2) * (n/4)
= n/4 + n^2/8
Simplifying by ignoring the n/4 term and the constant (1/8) we get O(n^2).
Here is a listing of typical ratings from best to worst:
Logarithmic   O(log n)       Good search algorithms
Linear        O(n)           Bad search algorithms
n log n       O(n * log n)   Good sort algorithms
n squared     O(n^2)         Simple sort algorithms
Cubic         O(n^3)         Normal matrix multiplication
Polynomial    O(n^p)         Good solutions to bad problems
Exponential   O(2^n)         Not a good solution to a bad problem
Factorial     O(n!)          Checking all permutations of n objects.
Note: I wrote n^2, n^3, n^p, 2^n, etc. to indicate powers/superscripts.
p is some power > 1.
Here is a picture graphing some typical Os:
Notice how the worst functions may start out less than the better
ones, but that they always end up being bigger.
Most algorithms for simple problems have a worst case times that are a power of n
times a power of log n. The lower the powers, the better the algorithm is.
There exist thousands of problems where the best solutions we have found,
however, are exponential -- O(2^n)! Why we have failed to improve on this is one of the
big puzzles of Computer Science.
Please read and study this page:
[ 000957.html ]
A formula like O(n^2.7) names a large family of similar functions that are
smaller than the formula for very large n (if we ignore the scale). n and
n*n are both in O(n^2.7). n^3 and exp(n) are not in O(n^2.7). Now, a clever
divide and conquer matrix multiplication algorithm is O(n^2.7) and so better
than the simple O(n^3) one for large matrices.
Big_O (asymptotic) formulas are simpler and easier to work with than the exact timing formulas
because we can ignore so much: instead of 2^n + n^2.7 + 123.4n we write
2^n or exponential.
Timing formula are expressed in terms of the size n of the data. To simplify
the formulas we remove all constant factors: 123*n*n is replaced by n*n.
We also ignore the lower order terms: n*n + n + 5 becomes n*n. So
123*n^2 +200*n+5 is in O(n^2).
To be very precise and formal, here is the classic text book definition:
f(n) is in O( g(n) ) iff for some constant c>0, some number n0, and all n > n0 ( f(n) <= c * g(n) ).
This means that to show that f is in O(g) then you have to find a constant
multiplier c and an number n0 so that for n bigger than n0, f(n) is less than or equal to c*g(n).
For example 123n is in O(n^2) because for all n> 123, 123n <= n^2.
So, by choosing
n0=123 and c=1 we have 123n <= 1 * n^2.
We say f and g are asymptotically equivalent
if and only if both (1) f(n) is in O(g(n)) and (2) g(n) is in O(f(n)).
So n^2-3 is asymptotically equivalent to 1+2n+123n^2.
There is another way to look at the ordering of these functions. We can look at
the limit L of the ratio f(n)/g(n) for large values of n:
- If L=0 then f(n) in O(g(n)).
- If L=∞ then g(n) in O(f(n)).
- If L is a finite non-zero constant then f(n) is asymptotically equivalent to g(n).
In CSCI202 I expect you to take these facts on trust. Proofs will follow in
the upper division courses.
Exercise: Reduce each formula to one the classic "big_O"s listed.
- log(n) + 3n
- 200 n + n * log(n).
- n log(n)
Classes like CSCI431 an MATH372. Or hit the stacks and Wikipedia.
There are everyday problems that force you to find needles in haystacks.
Problems like this force a computer to do a lot of work to solve them. You
need to learn to spot these, and try to avoid them if possible.
We normally consider polynomial algorithms as efficient and non-polynomial
ones as hard. Computer scientists have discovered a large class of problems
that don't seem to have polynomial solutions, even though we can check the
correctness of the answer efficiently. A common example is to find the
shortest route that visits every city in a country in the shortest time.
This is the famous Traveling Salesperson's Problem. Warning: you can waste
a lot of time trying to find an efficient solution to this problem.
Each discipline (Mathematics, Physics, . . . ) has its own "Classic" algorithms. In
computer science the most famous algorithms fall into to two types: sorting
and searching. Other classes of algorithm include these involved in
graphs, optimization, graphics, compiling, operating systems, etc.
Here is my personal selection and classification of Searching and Sorting
We give searching algorithms a collection of data and a key. Their goal is to find an item in the
collection that matches the key. The algorithms that work best depend on the structure of the given data.
- (Direct Access): The algorithm calculates the address of the data from a given data value. Example:
arrays. Example: getting direct access data from a disk. This avoids the need to look for the data! O(1).
- (Linear Search): The algorithm tries each item in turn until the key matches it. Finds unique items
or can create a set of matching items. O(n).
- (Binary Search): The collection of data must be sorted. Look at the middle one: if it is too big, try the
first half, but if it is too small, try the other half. Repeat. O(log n). (A C++ sketch appears after this list.)
- (Hashing): The algorithm calculates a value from the key that gives the address of a data structure that
has many items of data in it. Then it searches the data structure. Works well (O(1) ) when you have at
least twice as much storage as the data and the rate of increase is small. For very large n, O(log n).
- (Indexes): An Index is a special additional data structure that records where each key value can be
found in the main data structure. It takes space and time to maintain but lets you retrieve the data faster.
Indexes may be direct or need searching. If the index is optimal, the time is O(√ n).
- (Trees): These are special data structures that can speed up searches. A tree has nodes. Each node
has an item of data. Some nodes are leaves, and the rest have branches leading to another tree. The
algorithm chooses one branch to follow to find the data. If the tree is balanced (all branches have nearly
the same length) this gives O(log n) time. If the tree is not balanced, the worst case is O(n). There exist
special forms that use O(log n) to maintain O(log n) search.
- (Data Bases): These are large, complex data structures stored on disk. They have a complex and rigid
structure that takes careful planning. They use all the searching and sorting algorithms and data
structures to help users store and retrieve the data they want. Take CSCI350 and CSCI580? to learn more.
- Combinations: you can design data so that two or three different searches
are used to find data. For example: A hash code gives you the starting
point for a linear search. Direct access to an index gives you the address
that directs you to the block of disk storage that contains the data, in
this block is another index that has the number of the data record, and the
actual byte is then calculated directly.
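As promised under (Binary Search) above, here is a minimal C++ sketch of binary search over a sorted vector. It is illustrative only.

    #include <cstddef>
    #include <vector>

    // Returns the index of an element equal to key, or -1 if key is absent.
    // The vector must already be sorted in increasing order.
    int binarySearch(const std::vector<int>& sorted, int key) {
        int lo = 0;
        int hi = static_cast<int>(sorted.size()) - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (sorted[mid] == key) return mid;
            if (sorted[mid] < key)  lo = mid + 1;   // key can only be in the top half
            else                    hi = mid - 1;   // key can only be in the bottom half
        }
        return -1;                                  // not found
    }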
We give a sorting algorithm a collection of items. It changes the collection so that the data items are (in
some sense or other) increasing. The algorithm needs a comparison operation that compares two items
and tells whether one in smaller than the other. An example for numbers is "a < b" but in general we
might have to create a Boolean function "compare" to do the comparison.
- (Slow Sort): Throw all the data in the air . . . If it comes down in the right order, stop, else repeat.
O(n!). Example of a very bad algorithm.
- (Bubble Sort): Scan the data from first to last, if two adjacent items are out of order, swap them. Repeat
until they make no swaps. The beginner's favorite sort. Works OK if the data is almost in the right order.
There were many clever optimizations that often slowed this algorithm down! O(n*n)
- (Cocktail shaker Sort): a variation on bubble sort.
- (Insertion Sort): Like a Bridge player sorts a hand of cards. Mentally split the hand into a sorted and
unsorted part. Take each unsorted item and search the sorted items to see where it fits. O(n^2). Is
simple enough that for small amounts of data (n: 1..10) it is fast enough.
- (Selection Sort): Find the maximum item and swap it with the last item. Repeat with all but the last item.
Gives a fixed number of swaps. Easy to understand. Always executes n swaps. Time O(n^2).
- (Shell Sort): Mr. Shell improved on bubble sort by swapping items that are
not adjacent. Complex and clever. Basic idea: For a sequence of
decreasing ps, take every p'th item starting with the first and sort
them, next every p'th item starting with the second, repeat with
3rd..(p-1)th. Then decrease p and do it again. Different speeds for
different sequences of p's. Average speed O(n^p) where 1<p<=2. The
divide-and-conquer algorithms (below) are better.
- (Quick Sort): Tony Hoare's clever idea. Partition the data in two so that every item in one part is
less than any item in the other part. Sort each part (recursively) . . . Good performance on random
data: O(n log n) but has a bad O(n^2) worst case performance.
- (Merge Sort): Divide data into two equally sized halves. Sort each half. Merge to two halves. Good
performance: worst case is O(n log n). However, needs clever programming and extra storage to handle
the merge. For small amounts of data, this tends to be slower than other algorithms because it copies
data into spare storage and then merges it back where it belongs.
- (Heap Sort): Uses Robert Floyd's clever data structure called a heap. A
heap is a binary tree structure (each node has two children) that has the
big items on top of the small ones (parents > children), and is always
balanced (all branches have nearly equal lengths). It is stored without
pointers. Instead the data is in an array and node a is the parent of
2*a and 2*a+1. Floyd worked out a clever way to insert new data in the
array so that it remains a heap in O(log n). He also found a way to remove
the biggest items and heapify the rest in O(log n). First insert all
n items in a heap (O(n log n) ) and then extract them (O(n log n) ),
top down, we get a worst and average case O(n log n) algorithm. However, on random data, Quick sort
will often be faster.
- (Radix Sort): The data needs to be expressed as a decimal number or character string. First sort with
respect to the first digit/character. Then sort each part by the second digit/character. This is a neat
algorithm for short keys. But because size of key is O( log n) the time is O(n * log n).
- Combinations: We often combine different algorithms to suit a particular
circumstance. For example, when I have to manually sort 20 or more pieces
of work (2 or more times a week...), I don't have room to handle the recursion
needed for Merge or Quick sort, or table space for a heap. So I take
each 10 pieces of work and sort using insertion sort, and then merge
the resulting sorted stacks. But sometimes I use a manual sort
based on techniques developed for sorting magnetic tape data, and now
obsolete. Here you take the items and place them into sorted
runs by adding them on the top or bottom of sorted piles... and then
merge the result. You might call this
because I use a Double-Ended Queue to hold the runs. This is just
a curiosity and not a famous piece of computer science.
. . . . . . . . . ( end of section What are the important algorithms of Computer Science) <<Contents | End>>
Go back to the start of this document and look at the list of questions ...
try to write down, from memory, a short, personal, answer to each one?
- Define what an algorithm is.
- What is a searching algorithm?
- Name two algorithms often used to search for things.
If you have a choice, which is the faster of your two searching algorithms.
- What is a sorting algorithm?
- Name four algorithms often used for sorting.
If you have a large number of random items which of these
sorting algorithms is likely to be fastest?
- Find the Big_O of 2000+300*n+2*n^2.
- What is the Big_O worst time with n items for a linear search, binary search,
bubble sort, and merge sort.
- Name a sorting algorithm that has a good average behaviour on random data
but a slow behavior on some data.
. . . . . . . . . ( end of section Review Questions) <<Contents | End>>
. . . . . . . . . ( end of section Algorithms) <<Contents | End>>
Dr. Botting wants to acknowledge the excellent
help and advice given by Dr. Zemoudeh
on this document. Whatever errors remain are Dr. Botting's mistakes.
accessor::="A Function that accesses information in an object without changing the object in any visible way".
In C++ this is called a "const function".
In the UML it is called a query.
Algorithm::=A precise description of a series of steps to attain a goal,
[ Algorithm ]
class::="A description of a set of similar objects that have similar data plus the functions needed to manipulate the data".
constructor::="A Function in a class that creates new objects in the class".
Data_Structure::=A small data base.
destructor::="A Function that is called when an object is destroyed".
Function::programming=A self-contained and named piece of program that knows how to do something.
Gnu::="Gnu's Not Unix", a long running open source project that supplies a
very popular and free C++ compiler.
mutator::="A Function that changes an object".
object::="A little bit of knowledge -- some data and some know how". An
object is an instance of a class.
objects::=plural of object.
Current paradigm for programming.
Semantics::=Rules determining the meaning of correct statements in a language.
a previous paradigm for programming.
STL::="The standard C++ library of classes and functions" -- also called the
"Standard Template Library" because many of the classes and functions will work
with any kind of data.
Syntax::=The rules determining the correctness and structure of statements in a language, grammar.
Q::software="A program I wrote to make software easier to develop",
TBA::="To Be Announced", something I should do.
TBD::="To Be Done", something you have to do.
UML::="Unified Modeling Language".
void::C++Keyword="Indicates a function that has no return". | http://www.csci.csusb.edu/dick/cs202/alg.html | 13 |
15 | In the following sections, we'll examine the standard library operations used to create and manipulate strings.
The simplest form of declaration for a string simply names a new variable, or names a variable along with the initial value for the string. This form was used extensively in the example graph program given in Section 9.3.2. A copy constructor also permits a string to be declared that takes its value from a previously defined string.
string s1; string s2 ("a string"); string s3 = "initial value"; string s4 (s3);
In these simple cases the capacity is initially exactly the same as the number of characters being stored. Alternative constructors let you explicitly set the initial capacity. Yet another form allows you to set the capacity and initialize the string with repeated copies of a single character value.
string s6 ("small value", 100);// holds 11 values, can hold 100 string s7 (10, '\n'); // holds ten newline charactersInitializing from Iterators
Finally, like all the container classes in the standard library, a string can be initialized using a pair of iterators. The sequence being denoted by the iterators must have the appropriate type of elements.
string s8 (aList.begin(), aList.end());
As with the vector data type, the current size of a string is yielded by the size() member function, while the current capacity is returned by capacity(). The latter can be changed by a call on the reserve() member function, which (if necessary) adjusts the capacity so that the string can hold at least as many elements as specified by the argument. The member function max_size() returns the maximum string size that can be allocated. Usually this value is limited only by the amount of available memory.
cout << s6.size() << endl;
cout << s6.capacity() << endl;
s6.reserve(200); // change capacity to 200
cout << s6.capacity() << endl;
cout << s6.max_size() << endl;
The member function length() is simply a synonym for size(). The member function resize() changes the size of a string, either truncating characters from the end or inserting new characters. The optional second argument for resize() can be used to specify the character inserted into the newly created character positions.
s7.resize(15, '\t'); // add tab characters at end
cout << s7.length() << endl; // size should now be 15
The member function empty() returns true if the string contains no characters, and is generally faster than testing the length against a zero constant.
if (s7.empty()) cout << "string is empty" << endl;
A string variable can be assigned the value of either another string, a literal C-style character array, or an individual character.
s1 = s2; s2 = "a new value"; s3 = 'x';
The operator += can also be used with any of these three forms of argument, and specifies that the value on the right hand side should be appended to the end of the current string value.
s3 += "yz"; // s3 is now xyz
The more general assign() and append() member functions let you specify a subset of the right hand side to be assigned to or appended to the receiver. Two arguments, pos and n, indicate that the n values following position pos should be assigned/appended.
s4.assign (s2, 0, 3); // assign first three characters
s4.append (s5, 2, 3); // append characters 2, 3 and 4
The addition operator + is used to form the catenation of two strings. The + operator creates a copy of the left argument, then appends the right argument to this value.
cout << (s2 + s3) << endl; // output catenation of s2 and s3
As with all the containers in the standard library, the contents of two strings can be exchanged using the swap() member function.
s5.swap (s4); // exchange s4 and s5
An individual character from a string can be accessed or assigned using the subscript operator. The member function at() is almost a synonym for this operation except an out_of_range exception will be thrown if the requested location is greater than or equal to size().
cout << s4[2] << endl; // output position 2 of s4
s4[2] = 'x'; // change position 2
cout << s4.at(2) << endl; // output updated value
The member function c_str() returns a pointer to a null terminated character array, whose elements are the same as those contained in the string. This lets you use strings with functions that require a pointer to a conventional C-style character array. The resulting pointer is declared as constant, which means that you cannot use c_str() to modify the string. In addition, the value returned by c_str() might not be valid after any operation that may cause reallocation (such as append() or insert()). The member function data() returns a pointer to the underlying character buffer.
char d[20]; // array assumed large enough to hold s4 and its null terminator
strcpy(d, s4.c_str()); // copy s4 into array d
The member functions begin() and end() return beginning and ending random-access iterators for the string. The values denoted by the iterators will be individual string elements. The functions rbegin() and rend() return backwards iterators.
Invalidating Iterators
The string member functions insert() and erase() are similar to the vector functions insert() and erase(). Like the vector versions, they can take iterators as arguments, and specify the insertion or removal of the ranges specified by the arguments. The function replace() is a combination of erase and insert, in effect replacing the specified range with new values.
s2.insert(s2.begin()+2, aList.begin(), aList.end());
s2.erase(s2.begin()+3, s2.begin()+5);
s2.replace(s2.begin()+3, s2.begin()+6, s3.begin(), s3.end());
In addition, the functions also have non-iterator implementations. The insert() member function takes as argument a position and a string, and inserts the string into the given position. The erase function takes two integer arguments, a position and a length, and removes the characters specified. And the replace function takes two similar integer arguments as well as a string and an optional length, and replaces the indicated range with the string (or an initial portion of a string, if the length has been explicitly specified).
s3.insert (3, "abc"); // insert abc after position 3 s3.erase (4, 2); // remove positions 4 and 5 s3.replace (4, 2, "pqr"); // replace positions 4 and 5 with pqr
The member function copy() generates a substring then assigns this substring to the char* target given as the first argument. The range of values for the substring is specified either by an initial position, or a position and a length.
s3.copy (s4, 2); // assign to s4 positions 2 to end of s3
s5.copy (s4, 2, 3); // assign to s4 positions 2 to 4 of s5
The member function substr() returns a string that represents a portion of the current string. The range is specified by either an initial position, or a position and a length.
cout << s4.substr(3) << endl; // output 3 to end
cout << s4.substr(3, 2) << endl; // output positions 3 and 4
The member function compare() is used to perform a lexical comparison between the receiver and an argument string. Optional arguments permit the specification of a different starting position or a starting position and length of the argument string. See Section 13.6.5 for a description of lexical ordering. The function returns a negative value if the receiver is lexicographically smaller than the argument, a zero value if they are equal and a positive value if the receiver is larger than the argument.
The relational and equality operators (<, <=, ==, !=, >= and >) are all defined using the comparison member function. Comparisons can be made either between two strings, or between strings and ordinary C-style character literals.
The member function find() determines the first occurrence of the argument string in the current string. An optional integer argument lets you specify the starting position for the search. (Remember that string index positions begin at zero.) If the function can locate such a match, it returns the starting index of the match in the current string. Otherwise, it returns a value out of the range of the set of legal subscripts for the string. The function rfind() is similar, but scans the string from the end, moving backwards.
s1 = "mississippi"; cout << s1.find("ss") << endl; // returns 2 cout << s1.find("ss", 3) << endl; // returns 5 cout << s1.rfind("ss") << endl; // returns 5 cout << s1.rfind("ss", 4) << endl; // returns 2
The functions find_first_of(), find_last_of(), find_first_not_of(), and find_last_not_of() treat the argument string as a set of characters. As with many of the other functions, one or two optional integer arguments can be used to specify a subset of the current string. These functions find the first (or last) character that is either present (or absent) from the argument set. The position of the given character, if located, is returned. If no such character exists then a value out of the range of any legal subscript is returned.
i = s2.find_first_of ("aeiou");         // find first vowel
j = s2.find_first_not_of ("aeiou", i);  // next non-vowel
©Copyright 1996, Rogue Wave Software, Inc. | http://www.math.hkbu.edu.hk/parallel/pgi/doc/pgC++_lib/stdlibug/str_7474.htm | 13 |
15 | Prolog is a programming language particularly well suited to logic and artificial intelligence programming. "Prolog" in fact stands for "Programming in Logic." In this brief introduction we will try to give you a little taste of Prolog without bogging you down with a great deal of technical jargon. By the end of this section you should be able to use Prolog and write some little programs that give you a feel for the language. Feel free to consult the books and Web sites on Prolog mentioned later should you want to go further.
A person uses a computer programming language to direct a computer to perform desired tasks. You might be familiar already with the names of some common programming languages, such as C, Basic, Pascal, or Cobol.
Not all programming languages work the same way. In languages you might be familiar with already, such as C, the programmer commonly tells the computer that some letters or words are to be variables, for example that the letter "X" is to be considered a variable that will stand for an integer number (such as 1, 2, etc.). The C program might then involve a loop mechanism (a repeating set of commands) in which the computer assigns different integer values to X every time it goes through the loop. In fact a large part of the program could consist of variable declarations, "iterative" loops, calculations of values, and assignments of values to the variables. The programmer tells the computer not only what to do, but how to do it.
A Prolog program is likely to look a little different than a typical C program. The programmer tells the computer less the "how" than the "what." For example, in Prolog the programmer might begin by telling the computer a bunch of "facts." The facts can be about characteristics of people or things in the world ("Spot is frisky"), about relations of such things ("John is the father of Susan"), and about "rules" pertaining to such facts ("Scott is the grandfather of Susan" is true if "Scott is the father of John" is true and "John is the father of Susan" is true).
The Prolog program could then be used to ask the computer about the facts already given and the computer would be able to provide answers. Given the facts in the previous paragraph, if the computer is asked "Is John the father of Susan?" it would reply "Yes." If asked, "Is John the father of James" it would reply "No" because it was not given that fact or other facts from which it could be deduced.
Of course, Prolog, like any other common programming language, will not be able to process ordinary English sentences such as those given above; rather it requires the programmer to write statements in a particular way . The programmer must know how to phrase Prolog sentences properly. In a moment we will show you how to write some simple Prolog statements, so that you can directly use the Prolog facility (PT-Prolog) within PT-Thinker.
But let's point out that you don't have to know Prolog in order to use its logical abilities - that is where PT-Thinker can help you. PT-Thinker is a Prolog program that can take ordinary English sentences and translate them into a form suitable for processing by Prolog. So you can see the kind of logical inferences PT-Thinker can make by telling PT-Thinker facts and then seeing how it deduces conclusions from those facts. As already mentioned, should you decide to do so you can also use the Prolog facility within PT-Thinker to give the computer such facts and questions phrased in the Prolog language.
How to Write Prolog Statements Let's start off with some examples of Prolog statements. Note that Prolog is case sensitive, that is, capital letters are considered different than lower-case letters, so don't just substitute an "A" for an "a" and assume it will mean the same thing.
Let's tell the computer some facts about a family we know. We'll put the Prolog statement after the English statement.
|John is the father of Susan.||father(john,susan).|
|John is the husband of Martha.||husband(john,martha).|
|John eats pizza.||eats(john,pizza).|
|Susan eats pizza.||eats(susan,pizza).|
|Martha eats pizza.||eats(martha,pizza).|
|Susan eats oranges.||eats(susan,oranges).|
|John bought pizza for Martha.||bought(john,pizza,martha).|
|Susan is tired.||tired(susan).|
Now let's talk about these statements. We can define a fact in the Prolog language by placing the verb or adjective first and the relevant related nouns in parentheses. The period defines the end of the statement. (Notice that we aren't using capital letters to start the names; capital letters, and terms starting with them, are reserved for variables.)
We would type the above facts and then load them into Prolog. The exact command here may vary with the specific Prolog program/version with which one is working. (In PT-Thinker Prolog, one first types "assert" to enter the mode for fact loading, types in the statements, and then enters F10 to load the facts.) Think of this loading process as teaching Prolog the above facts. Prolog then responds with a "prompt," (such as "?-"), that is, the opportunity for the user to ask questions.
Let's continue with examples, but this time with questions you can ask Prolog. Below are sample English questions, their Prolog equivalent, and the answer from Prolog. Assume we have loaded the above facts into Prolog already.
|English question||Prolog (at prompt ?-)||Prolog responds|
|Is John the father of Susan?||father(john,susan).||yes (or true)|
|Is Susan tired?||tired(susan).||yes|
|Is Martha tired?||tired(martha).||no (or false)|
|Who is tired?||tired(X).||X = susan|
|Who is the husband of Martha?||husband(X,martha).||X = john|
|Who eats pizza?||eats(X,pizza).||X = john|
|Who else eats pizza?||merely hit spacebar||X = martha|
|Who else eats pizza?||hit spacebar again||X = susan|
|Who else eats pizza?||hit spacebar yet again||no|
When we ask Prolog above about John being the father of Susan, we do not have to change the form of the Prolog statement from that which we used to tell Prolog that John is the father of Susan. This same statement may be used to tell the computer the fact initially or inquire about it later. The computer knows it is an inquiry in the second circumstance because we have typed it in response to the Prolog prompt, rather than loading it in as a fact.
Note that we use the variable letter "X" to ask the computer who is tired ("tired(X).") and who Martha's husband is ("husband(X,martha)."). Another way to translate these Prolog statements is as "Give me all those who are tired." and "Give me all those who are a husband of Martha." We use the same strategy when asking about who eats pizza ("eats(X,pizza)."). In such cases, Prolog's response ends by asking us in effect "Do you want me to look for another?" In PT-Prolog, we signal yes by hitting our space bar, and Prolog attempts to find another. When there are no more to find, it responds "no." (In other Prolog facilities, one may press the "return" key rather than the spacebar, and a semi-colon may end the line rather than a question asking us if we want to find another.)
At this point you can go to the PT-Prolog facility within PT-Thinker, enter some facts, and ask some questions. For a start you might put in facts about countries and capitals ("capital(london,england).") and ask Prolog to give them back to you ("capital(X,france).") or tell you the truth of statements ("capital(london,germany)."). Or type in some facts about particular people and the foods they like or the colors they prefer.
When we ask the computer a question, it looks back through the facts it has been given and tries to find a match. Prolog will search the list of facts you gave it (entered in an earlier step) from the top with the goal of "satisfying" the question. If a question such as "eats(john,pizza)" is satisfied by finding it as a fact, Prolog will report "yes" or "true" (depending on the particular Prolog system). If a sentence such as "eats(X,pizza)." is satisfied by finding a fact about someone who eats pizza, Prolog responds with the name of the X who satisfies it. Note that the computer only knows what you've told it (or loaded into it). If you ask it whether Cincinnati is a city in Ohio, it looks to see if it has been given that information or other information from which it can deduce this. If it cannot find any such information, it reports "no." It will report this even though Cincinnati really is a city in Ohio. The computer has no way of knowing Cincinnati is an Ohio city apart from having been given that fact earlier. So "no" here from the computer should be interpreted more as "was not able to verify" or "did not find a match for this claim" rather than "this statement is false in the real world."

Consider the following:
|English||Prolog (at prompt ?-)||Prolog response|
|Who eats pizza and oranges?||eats(X,pizza), eats (X,oranges).||susan|
This statement, in Prolog, asks Prolog to find an X such that it is true both that X eats pizza and that X eats oranges. The comma acts as an "and" here. Prolog starts at the top of the list of facts it has been given and tries to satisfy the first part, and it finds that John eats pizza. It then tries to satisfy the second part with john for X (X is instantiated to john), starting at the top of the list, since this is a new goal. But it fails to find that John eats oranges. (Recall that we told it that John eats pizza but not that John eats oranges.) So it backtracks to where it previously found that John eats pizza and tries to resatisfy the statement with a different X. It finds that Susan eats pizza (X is instantiated to susan now), and so it then tries to satisfy "susan eats oranges" by starting at the top of the list. It finds this as a fact, so it then gives the response of "susan."
Due to the way Prolog satisfies goals, sometimes Prolog will engage in a search that starts at the top of the list (if it takes it to be a new goal) and other times backtrack to where it last satisfied the goal (if it is trying to resatisfy a goal). This needs to be taken into account when designing a complex program; sometimes Prolog can engage in too much backtracking in attempting to resatisfy goals, and in this case a "cut" ("!") can be introduced into the statement to prevent backtracking beyond the point of the cut. To find out more about backtracking and the cut please consult a Prolog text or tutorial.
Now that we understand the structure of simple Prolog statements and queries, we can investigate some more complicated statements. We can give Prolog some facts about conditionals, that is, what happens if something else is the case.
|Martha is the wife of John, if John is the husband of Martha.||wife (martha, john) :- husband (john, martha).|
|John converses with anyone, if that person fancies baseball and is smart||converses (john, X) :- fancies (X, baseball), smart (X).|
A good way to look at Prolog's handling of such statements is the following. To Prolog, the first statement is interpreted as a rule to the effect that "If "husband (john, martha)" succeeds, then "wife (martha, john)" succeeds. The second statement is "converses (john, X)" succeeds if "fancies (X, baseball)" succeeds and "smart (X)" succeeds. The ":-" symbols tell Prolog how to read the rule.
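As an illustration (the facts about mary below are ours, not part of the original example), loading a rule together with a couple of facts lets Prolog derive a conclusion that was never stated directly:

fancies(mary, baseball).
smart(mary).
converses(john, X) :- fancies(X, baseball), smart(X).

Asking "converses(john, X)." at the prompt should then produce "X = mary", while "converses(john, bill)." should produce "no", since nothing has been asserted about bill.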
As we've said, in Prolog we give the computer facts and rules about facts and then we can ask it questions. All these facts and rules we give it are stored in an internal database, and we can save this database in the form of a file that we can use in a later session. If we don't save it, the computer will not "remember" it if we turn off the computer and run Prolog later.
We've already talked about variables, which are letters or phrases that can assume any value. Conventions vary, but traditionally variables are capital letters, words starting with capital letters, words starting with an underscore (_), or an underscore itself (called the "anonymous" variable).
Prolog recognizes ordinary words (beginning with a lower case letter, with no spaces), or words surrounded by single quotation marks (these can include spaces and start with capital letters). All these are taken as constants, not variables.
Prolog can recognize integers (such as -244, 0, 3, etc.), and some versions can recognize floating point numbers (such as 3.45, -22.005, etc.).
Another type of data recognized by Prolog is structures, such as "state('Illinois')" and "shirt(brand(izod))." We've already seen these. And Prolog can handle lists. You give Prolog a list of elements by enclosing it in brackets, as in [a,b,c].
Prolog can handle a list by treating it as having a head and a tail. The head is the first element in the list, and the tail is everything after the head. So in the above example of the list [a,b,c], the head is "a" and the tail is the remaining list "[b,c]." The division between head and tail can be indicated by using "|" as in the case of a list with a head of "a" and a tail of the rest of the list: [a|[b,c,d,e]]. Prolog also allows one to signify the head and the tail of a list using variables and the symbol "|" as in "[X|Y]." Here the X denotes the head of the list and the Y denotes the tail of the list. Or consider the use of "[a|T]" to indicate all of the lists with "a" as the head and anything as the tail; "[H|b]" indicates all of the lists with anything as the head and "b" as the tail.
In this way we can ask the computer to search for lists that match a particular pattern, and it will find lists that instantiate that pattern. Here are some examples:
|List using "|"||Lists that instantiate it (match its pattern)|
|[X|Y]||[a,b,c,d] X = a Y = [b,c,d]|
|[X,Y|Z]||[a,b,c,d], X = a, Y = b, Z = [c,d]|
|[X,Y,Z|A]||[dogs,cats,birds,fish,lizards], X = dogs, Y = cats, Z = birds, A = [fish,lizards]|
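A classic use of these head-and-tail patterns is a membership test. The two clauses below are a standard sketch (many Prolog systems, though not necessarily PT-Prolog, already provide member as a built-in predicate):

member(X, [X|_]).
member(X, [_|T]) :- member(X, T).

Asking "member(b, [a,b,c])." should produce "yes", and asking "member(X, [a,b,c])." should produce X = a, then X = b and X = c if you keep requesting more answers.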
The predefined predicate "append" can be used in Prolog to join lists together. For example, "append([1,2,3],[4,5,6],X)." will cause Prolog to reply "X = [1,2,3,4,5,6]".
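In standard Prolog implementations append is relational, so it can also be asked to split a list (whether PT-Prolog supports this use of append depends on its implementation). Asking "append(X, Y, [1,2])." yields each split in turn as you request more answers:

X = [] Y = [1,2]
X = [1] Y = [2]
X = [1,2] Y = []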
Prolog can handle arithmetic in a pretty normal fashion. Note the following Prolog symbols:
|\=||indicates "is not equal"|
|<||indicates "is less than"|
|>||indicates "is greater than"|
|=<||indicates "is less than or equal to"|
|>=||indicates "is greater than or equal to"|
|/||indicates "divided by"|
|*||indicates "multiplied by"|
To do arithmetic, Prolog accepts notation in a variety of orders, the easiest to follow being the traditional "infix" manner such as appears in the following:
5*6 10/2 2+3 7-5
If you leave out clarifying parentheses in complex expressions, Prolog solves them the way we learned in elementary school, with multiplication evaluated before addition, etc. Note that to get Prolog to actually evaluate each of the above expressions to an answer, you have to use "is."
A is 5*6 B is 10/2 C is 2+3 D is 7-5
Prolog then gives you the value of the variable A, for example. You might try some arithmetic with Prolog - it's really easy.
What is 5*651+4-77+34*46+333*22-5642?
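One plausible way to pose this at the prompt is shown below (ours; the exact response format varies between Prolog versions). Prolog evaluates the multiplications first and then works left to right:

X is 5*651+4-77+34*46+333*22-5642.
X is 6430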
To help you get a better feel for Prolog, below are some additional examples of Prolog interpretations of English sentences.
|John eats a pepperoni pizza.||eats (john, pizza (pepperoni)).|
|X and Y are parents of Z if X is the mother of Z and Y is the father of Z||parent(X,Y,Z) :- mother(X,Z) , father(Y,Z).|
|Sparky barks at anyone who walks quickly.||barks(sparky,X) :- walks(X,quickly).|
|A equals B||A = B.|
|A does not equal B||A \= B.|
We can briefly mention some other Prolog matters. One can specify that an output appear on the computer screen by using the predefined predicate "write." For example, suppose that the variable X has been instantiated to "socrates." To get the computer to show "socrates" on the screen, type "write(X)." "nl" takes you to the next line. "tab(14)" moves the cursor 14 spaces to the right; substitute any desired integer for 14 to move that many spaces. The predefined predicate "read" will read in any term that you type (follow the term with a period and a space). "read(X)" reads in the next term and instantiates it to the variable X.
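Here is a small sketch combining these built-ins (assuming they behave as just described; the predicate name greet is ours):

greet :- write('What is your name? '), read(Name), nl, write('Hello, '), write(Name), nl.

Typing "greet." at the prompt asks for a name (enter a term followed by a period and a space, such as "alice. ") and then prints a greeting on a new line.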
Deductive logic concerns itself with valid arguments in the sense of one or more statements (premises) from which another statement (the conclusion) follows necessarily. To capture the logic of the kinds of statements used in arguments, logicians use propositional logic and predicate logic. Propositional logic involves the way in which statements are connected into more complex compound statements using "truth-functional" connectives such as "and," "or," "not," and "if...then." Some types of arguments are valid for reasons other than the operation of these connectives, and so logicians rely on predicate logic to get inside of each statement and symbolize the subjects and predicates with logical symbols. For a fuller discussion of logic see the logic section on these Web pages.
Prolog is ideally suited to dealing with logical arguments. However, the syntax that must be used in Prolog to handle many types of logical argument is called "clausal" form. Translating most propositional logic and first-order predicate logic statements and arguments into clausal form is too complicated for this introduction to Prolog; you are urged to consult a book on Prolog for more information.
We can still get a feel for how Prolog can handle logic by seeing how it deals with some simple arguments whose validity is truth-functional (propositional logic) and some simple syllogisms whose predicate logic translation is simple enough to avoid the clausal form translation issue.
Some of the inferences in the above discussion have shown you simple truth-functional arguments already. For example, Modus Ponens is the form of the following argument: if the assignment is due tomorrow, then we are in deep yogurt; the assignment is due tomorrow; therefore, we are in deep yogurt.
The premises and conclusion can be put into a format Prolog can process by the following:
deepyogurt(we) :- duetomorrow(assignment).
duetomorrow(assignment).
deepyogurt(we).
Thus Prolog can be given the premises as facts, and if you ask it about the conclusion, it will respond "yes" or "true" (depending on the version of Prolog used). If you like you can make these predicate names shorter or longer, of course.
Syllogisms with affirmative, universal statements are also easily translated and tested for validity in Prolog. Consider the following syllogism: all cats are mammals; all mammals are animals; therefore, all cats are animals.
It's easy to see that this is a valid argument. How can we put it into Prolog? The way Prolog is constructed allows it to interpret variables as universally quantified already. That is, the variable "X" will stand for all X's automatically. So the above argument in a form Prolog can understand will be:
mammal(X) :- cat(X).
animal(X) :- mammal(X).
animal(X) :- cat(X).
If the premises are given to Prolog as facts, and you ask it about the conclusion, it will reply in the affirmative, thus showing the argument's validity.
In a fashion similar to the above, you may wish to construct your own simple arguments and test their validity using the PT-Prolog facility within PT-Thinker.
Now that you have an idea of how Prolog programming works, try out some exercises to get a better feel for Prolog. The exercises below are more like little games, with the answers available on another Web page.
If John feels hungry, then he eats quickly. If he eats quickly, he gets heartburn. If he gets heartburn, he takes medicine. John feels hungry.
Conclusion: Given the above facts, what can we conclude about John taking medicine? Does he or doesn't he?
The above argument is a sorites argument, or it can be interpreted as a series of modus ponens arguments. This one is easy! Of course John takes medicine. Now can you get Prolog to tell you this?

Answer: give Prolog the following facts:
eats(john,quickly) :- feels(john,hungry).
gets(john,heartburn) :- eats(john,quickly).
takes(john,medicine) :- gets(john,heartburn).
feels(john,hungry).

Ask Prolog "takes(john,medicine)." The answer should be "yes".
John is dating Nancy, but right now he is wondering if he must leave the country. Now John will take a vacation in either of two circumstances: if the IRS is after him, or if he is doomed. He is also dating Susan. If he is dating both of them, then Nancy knows this. Now John is in fact doomed if both Nancy and Susan know he is dating both of them. If John will take a vacation, then he must leave the country. So must John leave the country?

Conclusion: What do you think? This one is a little more difficult.

Answer: give Prolog the following facts:
dates(john,nancy).
takes(john,vacation) :- (after(irs,john) ; is(john,doomed)).
dates(john,susan).
knows(nancy, (dates(john,nancy) , dates(john,susan))) :- (dates(john,nancy) , dates(john,susan)).
is(john,doomed) :- (knows(nancy, (dates(john,nancy) , dates(john,susan))) , knows(susan, (dates(john,nancy) , dates(john,susan)))).
leaves(john,country) :- takes(john,vacation).
ask Prolog "leaves(john,country)." the answer should be "no" because the computer hasn't been told that Susan knows that John is dating both Nancy and Susan, and so it interprets this statement as false (unproved) and so the inference doesn't go through.
Type:
X is 4 + 5 * 10.
Prolog responds:
X is 54
Type:
X is (4 + 5) * 10.
Prolog responds:
X is 90
Many Prolog textbooks use family relationships to illustrate Prolog. Can you construct a family?

Given:

parent(pete,mike).
parent(pete,julie).
parent(mary,mike).
parent(mary,julie).
parent(pete,amanda).
parent(mary,amanda).
female(mary).
female(julie).
female(amanda).
male(mike).
male(pete).
sibling(mike,julie).
sibling(mike,amanda).
sibling(amanda,mike).
sibling(julie,mike).
sibling(julie,amanda).
sibling(amanda,julie).
father(X,Y) :- male(X) , parent(X,Y).
mother(X,Y) :- female(X) , parent(X,Y).
brother(X,Y) :- male(X) , sibling(X,Y) , X \= Y.
sister(X,Y) :- female(X) , sibling(X,Y) , X \= Y.

(We could go on with aunts, uncles, grandparents, etc. but you get the idea.)

Ask Prolog:
sister(X,mike). father(X,julie). sister(amanda,julie). sister(mike,X). | http://www.mind.ilstu.edu/curriculum/protothinker/prolog_intro.php | 13 |
15 | Deduction is taught through the study of formal logic. Logic (both inductive and deductive logic) is the science of good reasoning. It is called formal because its main concern is with creating forms that serve as models to demonstrate both correct and incorrect reasoning. The difference is that, unlike induction, where an inference is drawn from an accumulation of evidence, deduction is a process that reasons about relationships between classes, characteristics and individuals. Deductive arguments start with one or more premises and then reason to the conclusions that must necessarily follow from them.
In order to understand logic, it is crucial to grasp and analyze key terms that are linked with it and explain its basics. First of all, an argument appears both in inductive and deductive reasoning. Deductive arguments involve premises that lead to a conclusion, whereas inductive ones establish premises based on experience and general evidence. Reasoning is another term linked with logic, and it describes the process of drawing conclusions, judgments or inferences from facts or premises.
Logic arranges deductive arguments in standardized forms that make the structure of the argument clearly visible for study and review. These forms are called syllogisms.
Syllogisms are useful for testing the reliability of a deduction according to the rules of logic. A syllogism usually contains two premises and a conclusion. The first one is called major and the second is called minor. They are claims made in an argument that provide the reasons for believing in the conclusion. A syllogism presents claims concerning a relationship between the terms given in the premises and those in the conclusion. Their purpose is to clarify the claims of the premises, to discover and expose any hidden premises and to find out if one thought follows logically from the previous one. In inductive thinking, if the premises are true, the conclusion is... [continues]
| http://www.studymode.com/essays/Short-Essay-On-Deductive-Reasoning-268034.html | 13 |
17 | Psychology, Sixth Edition
Cognition and Language
Cognitive psychology is the study of the mental processes people use to modify, make meaningful,
store, retrieve, use, and communicate to others the information they receive from the environment. (see introductory section)
- An information-processing system receives information, represents information through symbols, and manipulates
those symbols. (see The Circle of Thought)
REMEMBER: Psychologists consider people similar to information-processing systems
in the way they take in information, pass it through several stages, and
finally act on it.
Thinking can be described as part of an information-processing system in which mental representations are manipulated
in order to form new information. (see The Circle of Thought)
- A reaction time is the amount of elapsed time between the presentation of a physical stimulus and an overt reaction to that stimulus. (see Mental Chronometry)
Example: Susan and several of her friends are standing in her office looking for
her keys. Suddenly, Dave calls out, "Hey!" and throws her the keys. The reaction time is the time it takes Susan to look up and get ready to catch the keys after hearing Dave call out.
Evoked brain potentials are small temporary changes in voltage that occur in the brain in response
to stimuli. Psychologists can study information processing and can look for abnormal functioning in the brain
by examining evoked potentials. (see Evoked Brain Potentials)
REMEMBER:Evoke means to "cause" or "produce." Stimuli evoke, or produce, small changes in the brain. Psychologists have instruments that allow them to record these changes for study.
Example: About 300 milliseconds after a stimulus is presented, a large positive peak--the P300--occurs. The timing can be affected by how long sensory processing and perception take.
Concepts are basic units of thought or categories with common properties. Artificial
and natural concepts are examples. (see Concepts)
Formal concepts are concepts that are clearly defined by a set of rules or properties. Each member of the concept meets all the rules or
has all the defining properties, and no nonmember does. (see Concepts)
Example: A square is a formal concept. All members of the concept are shapes with
four equal sides and four right-angle corners. Nothing that is not a square shares these properties.
Natural concepts are defined by a general set of features, not all of which must be present for an object to be considered
a member of the concept. (see Concepts)
Example: The concept of vegetable is a natural concept. There are no rules or lists
of features that describe every single vegetable. Many vegetables are difficult
to recognize as such because this concept is so "fuzzy." Tomatoes are not vegetables, but most people think they are. Rhubarb is a vegetable, but most
people think it is not.
- A prototype is the best example of a natural concept. (see Concepts)
Example: Try this trick on your friends. Have them sit down with a pencil and paper. Tell them to write down all the numbers that
you will say and the answers to three questions that you will ask. Recite
about fifteen numbers of at least three digits each, and then ask your friends
to write down the name of a tool, a color, and a flower. About 60 to 80 percent of them will write down "hammer," "red," and "rose" because these are common prototypes of the concepts tool, color, and flower.
Prototypes come to mind most easily when people try to think of a concept.
Propositions are the smallest units of knowledge that can stand as separate assertions.
Propositions are relationships between concepts or between a concept and
a property of the concept. Propositions can be true or false. (see Propositions)
Example:Carla (concept) likes to buy flowers (concept) is a proposition that shows a relationship between two concepts. Dogs bark is a proposition that shows a relationship between a concept (dog) and a
property of that concept (bark).
Schemas are generalizations about categories of objects, events, and people. (see
Schemas, Scripts, and Mental Models)
Example: Dana's schema for books is that they are a bound stack of paper with stories or
other information written on each page. When her fifth-grade teacher suggests that each
student read a book on the computer, Dana is confused until she sees that
the same information could be presented on a computer screen. Dana has now
revised her schema for books to include those presented through electronic media.
Scripts are mental representations of familiar sequences, usually involving activity.
(see Schemas, Scripts, and Mental Models)
Example: As a college student, you have a script of how events should transpire in the classroom: students enter the
classroom, sit in seats facing the professor, and take out their notebooks.
The professor lectures while students take notes, until the bell rings and
they all leave.
Mental models are clusters of propositions that represent people's understanding of how things work. (see Schemas, Scripts, and Mental Models)
Example: There is a toy that is a board with different types of latches, fasteners,
and buttons on it. As children play with it, they form a mental model of how these
things work. Then, when they see a button, perhaps a doorbell, they will
have an understanding of how it works.
Images are visual pictures represented in thought. Cognitive maps are one example. (see Images and Cognitive Maps)
Cognitive maps are mental representations of familiar parts of your world. (see Images
and Cognitive Maps)
Example: Lashon's friend asks, "How do you get to the mall from here?" To answer the question, Lashon pictures the roads and crossroads between
their location and the mall and is able to describe the route for his friend
Reasoning is the process whereby people make evaluations, generate arguments, and reach conclusions.
(see Thinking Strategies)
Formal reasoning (also called logical reasoning) is the collection of mental procedures that
yield valid conclusions. An example is the use of an algorithm. (see Thinking Strategies)
Algorithms are systematic procedures that always produce solutions to problems. In
an algorithm, a specific sequence of thought or set of rules is followed to solve the problem. Algorithms can be very time-consuming. (see Thinking Strategies)
Example: To solve the math problem 3,999,999 × 1,111,111 using an algorithm, you would multiply the numbers out: 3,999,999 × 1,111,111 = 4,444,442,888,889.
This computation takes a long time. You could, however, use a heuristic to
solve the problem: round the numbers to 4,000,000 × 1,000,000, multiply 4 × 1, and add the appropriate number of zeros (000,000,000,000). Although simpler
and faster, this heuristic approach yields a less accurate solution than
that produced by the algorithmic approach.
Rules of logic are sets of statements that provide a formula for drawing valid conclusions
about the world. (see Thinking Strategies)
Syllogisms, components of the reasoning process, are arguments made up of two propositions, called premises, and conclusions based on those premises.
Syllogisms may be correct or incorrect. (see Thinking Strategies)
Example: Here is an incorrect syllogism: All cats are mammals (premise), and all
people are mammals (premise). Therefore, all cats are people (conclusion).
Informal reasoning is used to assess the credibility of a conclusion based on the evidence available
to support it. There are no foolproof methods in informal reasoning. (see Informal Reasoning)
Heuristics are mental shortcuts or rules of thumb used to solve problems. (see Informal Reasoning)
Example: You are trying to think of a four-letter word for "labor" to fill in a crossword puzzle. Instead of thinking of all possible four-letter combinations (an algorithmic
approach), you think first of synonyms for labor--job, work, chore--and choose the one with four letters.
- The anchoring heuristic is a biased method of estimating an event's probability by adjusting a preliminary estimate in light of new information
rather than by starting again from scratch. Thus, the preliminary value biases
the final estimate. (see Informal Reasoning)
Example: Jean is getting ready to move to the city. Her parents lived there ten years
ago and were familiar with the area that she wants to move into now. Ten
years ago it was an exceedingly dangerous neighborhood. Since that time,
however, many changes have taken place, and the area now has one of the lowest crime rates in the
city. Jean's parents think that the crime rate may have improved a little, but, despite
the lower crime rate, they just cannot believe that the area is all that safe.
- The representativeness heuristic involves judging that an example belongs to a certain class of items by
first focusing on the similarities between the example and the class and
then determining whether the particular example has essential features of the class. However, many times people do not consider the
frequency of occurrence of the class (the base-rate frequency), focusing
instead on what is representative or typical of the available evidence. (see Informal Reasoning)
Example: After examining a patient, Dr. White recognizes symptoms characteristic of
a disease that has a base-rate frequency of 1 in 22 million people. Instead
of looking for a more frequently occurring explanation of the symptoms, the
doctor decides that the patient has this very rare disease. She makes this decision based on the similarity
of this set of symptoms (example) to those of the rare disease (a larger
class of events or items).
- The availability heuristic involves judging the probability of an event by how easily examples of the event come to
mind. This leads to biased judgments when the probability of the mentally
available events does not equal the actual probability of their occurrence.
(see Informal Reasoning)
Example: A friend of yours has just moved to New York City. You cannot understand
why he has moved there since the crime rate is so high. You hear from a mutual
acquaintance that your friend is in the hospital. You assume that he was
probably mugged because this is the most available information in your mind about New York City.
Mental sets occur when knowing the solution to an old problem interferes with recognizing
a solution to a new problem. (see Obstacles to Problem Solving)
Example: The last time his CD player door wouldn't open, Del tapped the front of it and it popped open. This time when it won't open, Del does the same thing--not noticing that the power isn't even on!
Functional fixedness occurs when a person fails to use a familiar object in a novel way in order
to solve a problem. (see Obstacles to Problem Solving)
Example: Lisa is very creative in her use of the objects in her environment. One
day she dropped a fork down the drain of the kitchen sink. She took a small refrigerator magnet and tied it to a chopstick.
She then put the chopstick down the drain, let the fork attach itself to
the magnet, and carefully pulled the fork out of the drain. If Lisa had viewed the magnet as being capable only of holding material against the refrigerator, and the chopstick
as being useful only for eating Chinese food, she would have experienced functional fixedness.
Confirmation bias is a form of the anchoring heuristic. It involves a reluctance to abandon an initial hypothesis and a tendency
to ignore information inconsistent with that hypothesis. (see Obstacles to Problem Solving)
Artificial intelligence (AI) is the study of how to make computers "think" like humans, including how to program a computer to use heuristics in problem
solving. (see Problem Solving by Computer)
Example: Lydia plays chess against a computer that has been programmed with rules,
strategies, and outcome probabilities.
- The utility of an attribute is its subjective, personal value. (see Evaluating Options)
Example: Juan prefers large classes because he likes the stimulation of hearing many
opposing viewpoints. In choosing courses, Juan decides whether the positive utility of the preferred class
size is greater than the negative utility of the inconvenient meeting time.
Expected value is the likely benefit a person will gain if he or she makes a particular decision several times. (see Evaluating Options)
Example: Sima doesn't have enough money for this month's rent. She knows that going on a shopping spree would be a wonderful stress-reliever
in the short run, but the increase in her amount of debt would outweigh the
enjoyment in the long run.
Language is composed of two elements: symbols, such as words, and a grammar. (see
The Elements of Language)
Example: The German and English languages use the same symbols (Roman characters),
but each has a different set of rules for combining those symbols. The Russian language has different symbols (Cyrillic
characters) as well as different rules of grammar.
Grammar is the set of rules for combining symbols, or words, into sentences in a
language. (see The Elements of Language)
Phonemes are the smallest units of sound that affect the meaning of speech. (see
From Sounds to Sentences)
Example: Phonemes are sounds that make a difference in the meaning of a word. By changing the beginning phoneme, the meanings of the following words are changed:
bin, thin, win.
REMEMBER:Phono means "sound." Phonemes are sounds that change the meaning of a word.
Morphemes are the smallest units of language that have meaning. (see From Sounds to Sentences)
Example: Any prefix or suffix has meaning. The suffix s means "plural," as in the words bats or flowers. The prefix un means "not," as in unhappy or unrest. S and un are morphemes for the words bat, flower, happy, and rest.
Words are made up of one or more morphemes. (see From Sounds to Sentences)
Example: The word unwise is made up of two morphemes: un and wise.
Syntax is the set of rules that dictates how words are combined to make phrases and sentences.
(see From Sounds to Sentences)
REMEMBER:Syn means "together" (as in synchronized). Syntax is the set of rules that determines the order
of words when they are put together.
Semantics is the set of rules that governs the meaning of words and sentences. (see
From Sounds to Sentences)
Example: The sentence, "Wild lamps fiddle with precision" has syntax, but incorrect semantics.
Surface structures of sentences are the order in which the words are arranged. (see Surface
Structure and Deep Structure)
- The deep structure of a sentence is an abstract representation of the relationships expressed in a sentence, or, in other words, its various meanings. (see
Surface Structure and Deep Structure)
Example:The eating of the animal was grotesque. The surface structure of this sentence is the order of the words. The deep
structure contains at least two meanings: The way the animal is eating could be grotesque,
and the way people are eating an animal could be grotesque.
Babblings are the first sounds infants make that resemble speech. Babbling begins at about four months of age. (see The Development of Language)
Example: While Patrick plays, he says, "ba-ba-ba."
- The one-word stage of speech is that period when children use one word to cover a number of
objects and frequently make up new words. This stage lasts about six months. (see The Development of Language)
Example: Laura says "ba" to stand for bottle, ball, or anything else that starts with a b. Amy always asks for milk, even if she wants something else to drink, such as water or juice. | http://college.cengage.com/psychology/bernstein/psychology/6e/students/key_terms/ch08.html | 13 |
18 | ENCRYPTION AND DECRYPTION ALGORITHM
Encryption is the process of coding information, which could be either a file or a mail message, into cipher text, a form unreadable without a decoding key, in order to prevent anyone except the intended recipient from reading that data. Decryption is the reverse process of converting encoded data to its original un-encoded form, plaintext.
A key in cryptography is a long sequence of bits used by encryption / decryption algorithms. For example, the following represents a hypothetical 40-bit key:
00001010 01101001 10011110 00011100 01010101
A given encryption algorithm takes the original message, and a key, and alters the original message mathematically based on the key's bits to create a new encrypted message. Likewise, a decryption algorithm takes an encrypted message and restores it to its original form using one or more keys. An Article by your Guide Bradley Mitchell
When a user encodes a file, another user cannot decode and read the file without the decryption key. Adding a digital signature, a form of personal authentication, ensures the integrity of the original message
“To encode plaintext, an encryption key is used to impose an encryption algorithm onto the data. To decode cipher, a user must possess the appropriate decryption key. A decryption key consists of a random string of numbers, from 40 through 2,000 bits in length. The key imposes a decryption algorithm onto the data. This decryption algorithm reverses the encryption algorithm, returning the data to plaintext. The longer the encryption key is, the more difficult it is to decode. For a 40-bit encryption key, over one trillion possible decryption keys exist.
There are two primary approaches to encryption: symmetric and public-key. Symmetric encryption is the most common type of encryption and uses the same key for encoding and decoding data. This key is known as a session key. Public-key encryption uses two different keys, a public key and a private key. One key encodes the message and the other decodes it. The public key is widely distributed while the private key is secret.
Aside from key length and encryption approach, other factors and variables impact the success of a cryptographic system. For example, different cipher modes, in coordination with initialization vectors and salt values, can be used to modify the encryption method. Cipher modes define the method in which data is encrypted. The stream cipher mode encodes data one bit at a time. The block cipher mode encodes data one block at a time. Although block cipher tends to execute more slowly than stream cipher, block”
Platform Builder for Microsoft Windows CE 5.0
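The quoted passage's claim that a 40-bit key admits over one trillion possible decryption keys is easy to check; the one-line Python script below (ours, purely illustrative) counts them:

print(2 ** 40)   # 1099511627776, i.e. about 1.1 trillion possible 40-bit keys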
BACKGROUND OF ENCRYPTION AND DECRYPTION ALGORITHM
CRYPTOGRAPHY is an algorithmic process of converting a plain text or clear text message to a cipher text or cipher message based on an algorithm that both the sender and receiver know, so that the cipher text message can be returned to its original, plain text form. In its cipher form, a message cannot be read by anyone but the intended receiver. The act of converting a plain text message to its cipher text form is called enciphering. Reversing that act (i.e., cipher text form to plain text message) is deciphering. Enciphering and deciphering are more commonly referred to as encryption and decryption, respectively.
There are a number of algorithms for performing encryption and decryption, but comparatively few such algorithms have stood the test of time. The most successful algorithms use a key. A key is simply a parameter to the algorithm that allows the encryption and decryption process to occur. There are many modern key-based cryptographic techniques . These are divided into two classes: symmetric and asymmetric (also called public/private) key cryptography. In symmetric key cryptography, the same key is used for both encryption and decryption. In asymmetric key cryptography, one key is used for encryption and another, mathematically related key, is used for decryption.
TYPES OF CRYPTOGRAPHIC ALGORITHMS
There are several ways of classifying cryptographic algorithms. For purposes of this report they will be categorized based on the number of keys that are employed for encryption and decryption, and further defined by their application and use. The following are the three types of algorithm that are discussed:
· Secret Key Cryptography (SKC): Uses a single key for both encryption and decryption
· Public Key Cryptography (PKC): Uses one key for encryption and another for decryption
· Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information
FIGURE 1: Three types of cryptography: secret-key, public key, and hash function
Symmetric Key Cryptography
The most widely used symmetric key cryptographic method is the Data Encryption Standard (DES), published in 1977 by the National Bureau of Standards. It is still the most widely used symmetric-key approach. It uses a fixed-length, 56-bit key and an efficient algorithm to quickly encrypt and decrypt messages. It can be easily implemented in hardware, making the encryption and decryption process even faster. In general, increasing the key size makes the system more secure. A variation of DES, called Triple-DES or DES-EDE (encrypt-decrypt-encrypt), uses three applications of DES and two independent DES keys to produce an effective key length of 168 bits [ANSI 85].
The International Data Encryption Algorithm (IDEA) was invented by James Massey and Xuejia Lai of ETH Zurich, Switzerland in 1991. IDEA uses a fixed-length, 128-bit key (larger than DES but smaller than Triple-DES). It is also faster than Triple-DES. In the early 1990s, Ron Rivest of RSA Data Security, Inc., invented the algorithms RC2 and RC4. These use variable-length keys and are claimed to be even faster than IDEA. However, implementations may be exported from the U.S. only if they use key lengths of 40 bits or fewer.
Despite the efficiency of symmetric key cryptography, it has a fundamental weak spot: key management. Since the same key is used for encryption and decryption, it must be kept secure. If an adversary knows the key, then the message can be decrypted. At the same time, the key must be available to the sender and the receiver, and these two parties may be physically separated. Symmetric key cryptography transforms the problem of transmitting messages securely into that of transmitting keys securely. This is an improvement, because keys are much smaller than messages, and the keys can be generated beforehand. Nevertheless, ensuring that the sender and receiver are using the same key and that potential adversaries do not know this key remains a major stumbling block. This is referred to as the key management problem.
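The following toy sketch (ours, in Python, not from any source cited here) illustrates the defining symmetric-key property that the very same key both encrypts and decrypts; real symmetric ciphers such as DES, Triple-DES and IDEA are far more elaborate than this repeating-key XOR:

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte of the data with the key, repeating the key as needed
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

ciphertext = xor_cipher(b"attack at dawn", b"session key")
print(ciphertext)                              # unreadable without the key
print(xor_cipher(ciphertext, b"session key"))  # b'attack at dawn': the same key decrypts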
Public/Private Key Cryptography
Asymmetric key cryptography overcomes the key management problem by using different encryption and decryption key pairs. Having knowledge of one key, say the encryption key, is not sufficient enough to determine the other key - the decryption key. Therefore, the encryption key can be made public, provided the decryption key is held only by the party wishing to receive encrypted messages (hence the name public/private key cryptography). Anyone can use the public key to encrypt a message, but only the recipient can decrypt it.
RSA, a widely used public/private key algorithm, is named after the initials of its inventors, Ronald L. Rivest, Adi Shamir, and Leonard M. Adleman [RSA 91]. It depends on the difficulty of factoring the product of two very large prime numbers. Although used for encrypting whole messages, RSA is much less efficient than symmetric key algorithms such as DES. ElGamal is another public/private key algorithm [El Gamal 85]. This uses a different arithmetic algorithm than RSA, called the discrete logarithm problem.
The mathematical relationship between the public/private key pair permits a general rule: any message encrypted with one key of the pair can be successfully decrypted only with that key's counterpart. To encrypt with the public key means you can decrypt only with the private key. The converse is also true - to encrypt with the private key means you can decrypt only with the public key.
A hash function is a type of one-way function; these are fundamental for much of cryptography. A one-way function is a function that is easy to calculate but hard to invert: it is difficult to calculate the input to the function given its output. The precise meanings of "easy" and "hard" can be specified mathematically. With rare exceptions, almost the entire field of public key cryptography rests on the existence of one-way functions.
In this application, functions are characterized and evaluated in terms of their ability to withstand attack by an adversary. More specifically, given a message x, if it is computationally infeasible to find a message y not equal to x such that H(x) = H(y) then H is said to be a weakly collision-free hash function. A strongly collision-free hash function H is one for which it is computationally infeasible to find any two messages x and y such that H(x) = H(y).
The requirements for a good cryptographic hash function are stronger than those in many other applications (error correction and audio identification not included). For this reason, cryptographic hash functions make good stock hash functions--even functions whose cryptographic security is compromised, such as MD5 and SHA-1. The SHA-2 algorithm, however, has no known compromises.
A hash function can also be referred to as a function with certain additional security properties, making it suitable for use as a primitive in various information security applications, such as authentication and message integrity. It takes a long string (or message) of any length as input and produces a fixed-length string as output, sometimes termed a message digest or a digital fingerprint.
A hash function at work
In various standards and applications, the two most-commonly used hash functions are MD5 and SHA-1; however, as of 2005, security flaws have been identified in both algorithms.
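A short illustration (ours, in Python) of the fixed-length property described above, using SHA-256 from the SHA-2 family via the standard hashlib module:

import hashlib

for message in (b"x", b"a much longer message " * 1000):
    digest = hashlib.sha256(message).hexdigest()
    print(len(digest), digest[:16] + "...")   # always 64 hex characters, whatever the input length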
Fig 1 illustrates the proper and intended use of public/private key cryptography for sending confidential messages. In the illustration, a user, Bob, has a public/private key pair. The public portion of that key pair is placed in the public domain (for example, in a Web server). The private portion is guarded in a private domain, for example, on a digital key card or in a password-protected file.
Figure 1: Proper Use of Public Key Cryptography
For Alice to send a secret message to Bob, the following process needs to be followed:

1. Alice obtains a copy of Bob's public key.
2. Alice encrypts her message using Bob's public key.
3. Alice sends the resulting cipher text to Bob.
4. Bob decrypts the cipher text using his private key.
Bob can be assured that Alice's encrypted secret message was not seen by anyone else since only his private key is capable of decrypting the message
Both Are Used Together
Secret key and public key systems are often used together, such as the AES secret key and the RSA public key. The secret key method provides the fastest decryption, and the public key method provides a convenient way to transmit the secret key. This is called a "digital envelope." For example, the PGP e-mail encryption program uses one of several public key methods to send the secret key along with the message that has been encrypted with that secret key (see PGP).
Get Faster - Get Stronger
It has been said that any encryption code can be broken given enough time to compute all permutations. However, if it takes months to break a code, the war could already be lost, or the thief could have long absconded with the money from the forged financial transaction. As computers get faster, to stay ahead of the game, encryption algorithms have to become stronger by using longer keys and more clever techniques.
See XOR, AES, DES, RSA, plaintext, digital signature, digital certificate, steganography and chaff and winnow.
| http://homepages.uel.ac.uk/u0430614/Encryption%20index.htm | 18 |
18 | Why We Can't Wait finds Martin Luther King, Jr. confident, poised and
prepared to combat segregation in Birmingham, AL. In this account, MLK details
the brutality of public safety commissioner Bull Connor, infamous for turning water hoses on unarmed
protestors, and the bravery of ordinary citizens who were undeterred in their
commitment to justice. This volume contains "Letter from Birmingham Jail,"
one of MLK's most famous declarations about racial inequality. MLK also notes
the wisest decision he made during the Birmingham struggle, that of involving
young people who invigorated the protests and reminded everyone about the importance
of involving youth in working for social change. Drawing on the importance of
youth enables teachers to make visible the lineage between advocating for racial
and social injustice from 1963 to today, and the power-and importance-of young
people to assume that mantle.
Why We Can't Wait is useful for all curriculum units, discussions, and
investigations that grapple with the issues of justice and injustice, and this
text encourages students to think deeply about what it means to pursue nonviolence
in words and in action. Though written in the 1960s, it is impossible to read
Why We Can't Wait and not draw parallels to today. It is relevant for
today's students, as they find their way and seek to add their own voices to
the world. Why We Can't Wait provides a compelling rationale for helping
students think through how to effect substantive change.
How to Use This Guide
Why We Can't Wait is appropriate for grades 9-12, and for the English
and History classrooms. This guide is divided into four parts: pre-reading activities;
summaries of the chapters and teaching suggestions; post-reading activities;
and resources. Pre-reading activities are intended to build students' prior
knowledge and provide points of entry prior to reading the text. Summaries and
teaching suggestions include what happens in the chapter as well as various
activities that teachers can use to engage students in critical thinking about
the chapter. Post-reading activities are designed to help students synthesize
their reading and make connections to other aspects of their learning. Finally,
resources are included for extended study about the text. Teachers can break
up the reading based on their allocated time periods. The chapters can be broken
up to be adapted to classroom instructional time.
Predictions: Visit the Birmingham, AL Civil Rights Institute online (http://www.bcri.org/index.html)
and select 10-12 images from the resource gallery. In pairs, students will
look at the images and predict connections between the images and the text.
Writing Prompt: Ask students to write what they know about Martin Luther
King, Jr. Teachers can ask additional questions: How did they learn about
MLK? Do they think he is still important today? Allow students to share their
responses in small groups and/or whole class discussions.
Analysis: Teachers will give students a copy of a freedom song (see Bernice
Johnson Reagon's reflection about freedom songs on the PBS website Eyes on
the Prize: http://www.pbs.org/wgbh/amex/eyesontheprize/reflect/r03_music.html)
and ask students to analyze the song for meaning. Additional questions teachers
might ask: why would a group of people sing this song? If you were singing
this song, how would it make you feel, particularly if you were singing it
with a group of your friends? Would it make you feel brave? Afraid? Teachers
might also choose to play the song for students to accompany their reading
of the song.
Setting the Scene: Teachers and students will read Birmingham segregation
laws (available at: http://www.crmvet.org/info/seglaws.htm).
Teachers will ask students to either discuss or write their reactions to the
laws and discuss their responses.
Concept Map Activity: In groups of four, students discuss the relationships
among these words: justice, nonviolence, boycott, racism, segregation, freedom,
and resistance. What connections do these words have with one another? Students
will create a visual to show how these words interact.
Technology Incorporation: Using either Wordle (http://www.wordle.net/)
or Tagxedo (http://www.tagxedo.com/),
teachers may copy and paste parts of the text into a program. Once the words
are displayed, teachers will lead students in a discussion and exploration
about which words are displayed the most and what that might suggest about the text.
The text contains words that teachers might use to assist students in their
vocabulary development. A list of words, including page numbers, is included
in the appendix of this guide. Some ways teachers might incorporate these vocabulary
words include the following:
Selecting words and grouping them in "families" encourages students
to learn the words on a continuum. Teachers are encouraged to help students
understand the meaning and relationship of the words in connection with each other.
Decide which words are crucial for students' understanding of the text and
pre-teach those words.
Relate the new words to ones students might already know. Teachers might
press students to explain the connection between the words.
Teachers might want students to learn what the word means as well as what
it does not mean.
Teachers might also encourage students to actively use the words they are
learning (i.e. in writing assignments, during discussions, etc.) to increase
their comfort and familiarity with the words.
Teachers might encourage students to use a vocabulary journal for the new
words they learn. Potential journal entries could include word, part of speech,
usage, synonyms, antonyms, sentences, etc. Teachers should encourage students
to draw on their vocabulary journal regularly.
Summaries and Teaching Suggestions
Throughout the reading of the text, it is important to help students keep track
of the names and locations mentioned. The following activities
can be used to help students deepen their understanding as they read.
Timeline: Instruct students to keep a timeline of events as they read. Teachers
might wish to provide students with a graphic organizer that allows them to
keep track of times and dates, or else students can be instructed to keep
track independently. The teacher might also want to create a bulletin board
where students can add events, pictures, newspaper clippings, etc. that helps
them make connections between the text and the present.
History/Story Maps: Teachers may advise students to keep history maps as
they read to improve their understanding of events. These maps ask students
to identify key historical events, what caused the event, the important people
involved, and how the event was resolved.
Problem-Solution Charts: MLK discusses several reasons he chooses nonviolent
action and civil disobedience. Teachers may assign students to create a chart
(two-column notes) where they list the problems on one side and solutions
on the other side to provide students with a visual representation of consequences,
causes and solutions.
Summarizing: Teachers might encourage frequent comprehension checks for
students as they read the text. Some summarizing strategies include think-write-pair-shares,
quick writing, turning passages from the text into summaries that are concise
and accurate, etc.
Double Entry Journals: Students use a notebook to record textual impressions
of what they read. Using two columns, the student records a quotation in the
left hand column and responds to the quotation in the right-hand column.
Summary: Dorothy Cotton authors the introduction to the text. Cotton,
who worked closely with King, was the Education Director for the Southern Christian
Leadership Conference (SCLC) and explains being present when King decided to
proceed with a protest that would land him in prison. Cotton notes, "Martin's
decision to go to jail was a crucial turning point for the civil rights struggle."
Yet, as King himself explains, the decision to be incarcerated allowed him to
demonstrate his belief in the importance of freedom and justice. Cotton explains
how Freedom Songs bolstered the hope of her and other supporters, and concludes
with the assertion that the messages from Why We Can't Wait are relevant
and as urgent today as they were in Birmingham in 1963.
Cotton is a woman who was involved in the Civil Rights Movement (CRM). What
does it suggest about her role, and her importance, that she authors this
introduction rather than one of the male voices of the CRM?
Think about the relationship that Cotton describes between freedom songs
and the decision of MLK to press forward with the boycott and go to jail.
How did she, and others, draw strength from these songs?
Critical Thinking Activities
Research Dorothy Cotton. What was her connection to MLK? Why is she appropriate
to write the introduction?
Create a glossary of people who are mentioned in each chapter and research
their importance. Contrast what you learn in your research about them to King's
comments and interactions with them.
Watch the Spike Lee documentary Four Little Girls (1997). Write a
review of the documentary that also expands on the ideas from Why We Can't Wait.
Cotton discusses how freedom songs sustained nonviolent efforts. Listen
to a selection of freedom songs (http://www.folkways.si.edu/albumdetails.aspx?itemid=2269).
What do you notice? What words are repeated frequently? What do you notice
about rhythm? How do you feel listening to the songs? How might those who
sang these songs regularly have felt, particularly as they prepared for upcoming demonstrations?
Introduction by Martin Luther King, Jr. (1-4)
Summary: MLK sets the stage for Birmingham, AL, in 1963. He describes
the racial disparities besetting African Americans in the United States and
argues, "Equality had never arrived. Equality was a hundred years late."
Why does MLK evoke an image of a boy and a girl when he talks about racial
disparities? What effect does this have on the reader? Why doesn't he talk
about men and women instead?
Compare the problems MLK outlines in 1963 to our current world. What similarities
and differences do you notice, particularly in your own community?
Chapter I: The Negro Revolution: Why 1963? (5-19)
Summary: MLK makes the case for why African Americans are ready to demand
equality. King provides important historical context for the upcoming nonviolent
action through detailing the failure of Brown v. Board of Education to end segregated
schools because of the Pupil Placement Law. He concludes that despite empty
promises, African Americans had been denied equality for too long, and the only
way to create that equality was to demand it and use nonviolent resistance
as the tool.
What is the mood that MLK creates in the beginning of this chapter when
he talks about the summer? How does he contrast the pleasantness of the summer
to the Negro Revolution?
How does MLK build suspense and tension in this chapter? What words do you
find particularly powerful? Why?
Why were most White Americans unprepared for a Negro uprising?
Consider how MLK contrasts personal life-threatening injury to national
violence. Are his comparisons more powerful because they are more personal?
Why or why not?
Describe how states were able to use the Pupil Placement Law to avoid integration.
What did states and the federal government do to avert African American
equality? Explain the different excuses these entities gave and MLK's responses
to their hedging.
Why was nonviolent resistance the best action to use, according to MLK?
Describe MLK's outlook at the end of this chapter. Is he reluctant? Hesitant?
What does his outlook suggest about the struggle ahead?
Critical Thinking Activities
Research Brown vs. Board of Education. What were the major points of the
case? What was supposed to occur as a result of the case? Why was the phrase
"all deliberate speed" difficult to enact?
Then and Now: MLK describes the conditions of African Americans in 1963
that demanded attention. What are the current conditions of African Americans
in the United States? Create a two-column chart that depicts the differences
and similarities between the two time periods. What can you determine based
on what has changed and what remains the same?
Venn Diagram: MLK also discusses the lack of economic opportunities that
beset African Americans. Create a venn diagram that records the challenges
to African Americans in 1963 to those challenging African Americans today.
What similarities and differences can you extract based on your diagram? How
much has changed? What accounts for changes? What accounts for factors that
remain unchanged? As an extension, what similarities do you notice about current
economic conditions for working-class adults, African American men, older
adults, and adolescents in comparison to 1963?
Begin to create a timeline of events. The first events should include historical
events that MLK mentions prior to 1963. Update this timeline as you read.
Chapter II: The Sword That Heals (21-45)
Summary: MLK begins with a description of ways African Americans were thwarted
in their attempts for parity, and then moves to an examination of different
leaders who attempted to uplift Blacks. Despite efforts by Black Muslims to
use violence, MLK contends that nonviolent resistance, accompanied by legal
action, was the most appropriate method of assuring change. He concludes that
because Birmingham remained one of the most segregated cities in the country,
it was the best place to stage a nonviolent resistance movement.
List the different ways some attempted to "keep the Negro in his place"
historically. What do you think would be some of the psychological effects
from these attempts on African Americans?
Explain how police brutality functioned as a threat to African Americans.
How did African American youth subvert police intentions? What was the message
these young people delivered with their actions?
What is "soul force"? How did soul force differ from physical
force? Why did MLK contend that soul force was more important to have than physical force?
What is tokenism? Why did MLK find it unacceptable?
MLK describes three men who combated racial inequality: Booker T. Washington,
W.E.B. DuBois, and Marcus Garvey. What does MLK think are the flaws of
each man's goals?
The National Association for the Advancement of Colored People (NAACP) assumed
prominence in this chapter. Describe the significance of this organization,
particularly as related to the challenges as described by MLK.
MLK explains his rationale for nonviolent resistance (NVR). What are the
pros and cons of nonviolence? Why was NVR difficult for some to accept?
How does MLK tie NVR to military protest? How does he link it to Christianity?
What do these associations do for his argument? Who is his audience? What
do such associations also suggest about his skill as an orator and as a leader
of a movement?
What is the difference between a violent army and a nonviolent army? Why
does MLK say a nonviolent army is more powerful? Do you agree?
For what reasons did NVR fail to achieve national acceptance? What does
this failure suggest about the movement, the leadership, and the message? Do
you think that the movement was unsuccessful? Support your reasons with evidence.
What lessons did MLK learn from previous NVR efforts in Albany? How did
those successes and challenges from Albany influence his actions in Birmingham?
MLK writes, "The united power of southern segregation was the hammer.
Birmingham was the anvil" (p. 45). Why was Birmingham an appropriate
location for NVR?
Critical Thinking Activities
Research Booker T. Washington, W.E.B. DuBois and Marcus Garvey. Create a
graphic organizer that compares and contrasts each man's accomplishments.
Research the Black Muslims. Produce a short presentation (PowerPoint, Prezi,
etc.) that informs an audience about this group and includes why this group
was opposed by MLK. Think, too, about MLK's indictment of militancy. For what
reasons did he consider militancy an unsuccessful tactic?
Debate: Define violent action and moral force. Which is more powerful: violent
action or moral force? Are there situations in which one would be more powerful than the other?
Extension: Read (or reread) or watch To Kill a Mockingbird by Harper Lee,
paying particular attention to the character of Atticus Finch. Why is he an
example of moral force? Write a short response that demonstrates your answer
to this question.
Examine the lyrics to Phil Ochs's song, "Talking Birmingham Jam."
What is the vision of Birmingham that Ochs portrays in his song? What similarities
do you notice between Ochs's song and MLK's words? Evaluate the song as an effective
medium for delivering a political message.
Chapter III: Bull Connor's Birmingham (47-61)
Summary: MLK describes the setting of Birmingham, under the rule of segregationist
Commissioner of Public Safety Bull Connor. Connor wanted to limit national exposure that would inform
others about the injustices that occurred, but MLK and Fred Shuttlesworth moved
forward with integration efforts. MLK notes, "This city had been the country's
chief symbol of racial intolerance" (p. 56). The chapter concludes with
a thorough description of the strategies MLK and supporters incorporated in
preparation for ending segregation in the city.
As MLK details the inequalities in Birmingham, he focuses on the plight
of children. Why do you think he makes children the focus of his appeal?
Discuss the role of fear. What effect did it have on action (i.e. what did
fear do to influence how White and Black citizens advocated for change in
Birmingham?) and what effect did fear have on shaping citizens' response to the movement?
Analyze the role of Fred Shuttlesworth in agitating for social change. How
did he help citizens mobilize against Connor's efforts to stop integration?
Chart the steps Shuttlesworth and MLK took before deciding to move to a direct-action campaign.
Why was Birmingham such an appropriate location for NVR?
What was "Project C"? What were the goals of this initiative?
How did MLK and his supporters learn from their mistakes in Albany? What
changes did they make in Birmingham as a result of what they'd learned?
The initial focus of NVR was on the business community, specifically stores
with lunch counters. Why were these places selected first?
Evaluate the Project C strategy. What were the strengths and challenges?
How were boycott plans changed by the elections? Why did the plans have to change?
Discuss how MLK and the SCLC garnered national support. Why was Harry Belafonte important to these efforts?
What did MLK do in the time between the election and the run-off?
Critical Thinking Activities
In the first part of this chapter, MLK recounts what it was like for a
child to accompany his/her parents on segregated outings in Birmingham. Select
one of these "outings" for a journal entry. Be sure to write from
your five senses and accurately and thoroughly describe your outing, including
interactions with White residents of Birmingham.
Research Bull Connor. Who was he? What was his background? How did he become
Commissioner of Public Safety in Birmingham? Create a web page that includes information, video footage,
and a critique of his leadership.
Select one of the many people mentioned in this chapter for further research.
Present your findings in a Power Point, short speech, or podcast about their
importance in bringing integration to Birmingham.
Wyatt Walker was another integral member of the movement in Birmingham.
Make a chart that lists his responsibilities. Then, determine how important
his role was to the overall efforts.
Harry Belafonte was an integral supporter for MLK. Find out more about his
Civil Rights contributions and his artistic performances, and present your findings in a short presentation.
The run-off election is scheduled for April 2. MLK and Shuttlesworth have
sent out word about a meeting for all volunteers to prepare for launching
a direct-action campaign. Write a journal entry from the perspective of MLK,
Shuttlesworth, one of the people who attended the meeting or one of the people
who received word but did not attend the meeting.
Chapter IV: New Day in Birmingham (63-84)
Summary: Despite an unclear resolution about who was going to be the new mayor,
MLK galvanizes volunteers and begins a direct-action campaign to end segregation
in Birmingham. Through extensive training and bolstered by Freedom Songs, MLK
and his supporters prepared themselves for arrests and opposition. In a planned
act of civil disobedience, MLK and Fred Shuttlesworth are arrested, and MLK is
placed in solitary confinement. After intervention from President Kennedy and
Harry Belafonte, MLK is freed, and realizes the strength of his faith.
How did Birmingham's two rival city governments threaten MLK's efforts?
Discuss the strategies and actions of the direct-action campaign. Focus
on the amount of planning required for these actions.
Who were the people who organized and sustained the effort? What were their roles?
As you read through the people involved, what omissions do you notice? Where
are the women, for example? What do you think of these omissions?
Why was a nonviolent army different from a traditional army?
What did training sessions entail? Why was this training necessary?
How did MLK involve everyone in NVR, even those not able to demonstrate?
What does this ability to find roles for everyone suggest about his leadership
and the importance of involving many people within the movement?
Read the Commitment Card (p. 69). Which of the Ten Commandments do you think
would be easiest for you to follow? Would you have been able to sign the card?
Why did MLK face opposition from other African Americans in Birmingham?
How did this opposition impact MLK and SCLC efforts to organize? How does
MLK regard the Black leaders who opposed him? How does MLK detail the opposition,
and what explanations does he offer? How did they rebuild/regain support?
What is an outsider? Explore MLK's contention on page 74. Do you agree or
disagree with his contention?
The lunch-counter sit-ins were only the first forms of NVR. What were some
others? How did these multiple forms of resistance strengthen MLK's efforts?
How did Bull Connor attempt to respond to the arrests of Black protesters?
Why was this response out of character for him?
Discuss how the injunction was an attempt to end NVR. How did MLK subvert
the one issued by Bull Connor?
What is civil disobedience? Describe MLK's use of civil disobedience, his
decision to employ it, and the impact of his use of it.
How does MLK convince himself that, despite having no money for bail, he
must go to jail?
What happened when Coretta King intervened? What does this suggest about
how MLK was regarded nationally?
How is MLK reassured that he is really not in solitary confinement?
Critical Thinking Activities
Conduct research on "freedom songs." What was their importance?
Why was singing freedom songs such an important part of the movement? Present
your findings and include songs that were sung.
You are 21-years old and have a sister who is a year younger. You have been
selected to demonstrate, but your sister has not. She is saddened, making
you question if you should accept the offer to begin the training. What do
you do? What advice do you give to your sister?
Write a dialogue that might have happened between MLK attempting to convince
one of the Birmingham business and professional people to support him.
MLK's faith is tested in this chapter and he recounts feeling "alone
in that crowded room" (p. 80) when faced with the possibility of not
having enough money for bail. Write an interior monologue that captures MLK's thoughts at that moment.
Find the lyrics of "We Shall Overcome." Write an analysis that
draws parallels between the lyrics and the use of the song by MLK and his supporters.
MLK describes the march that landed him in jail on page 81. Write a "You
Are There" account as a participant in the march MLK describes.
As an extension, read excerpts from Thoreau's essay "On Civil Disobedience"
and compare to MLK's reasons for invoking civil disobedience. What similarities
and differences do you notice?
Chapter V: Letter from Birmingham Jail (85-109)
Summary: "Letter from Birmingham Jail" is one of MLK's most
famous entreaties. Written on the margins of a newspaper while he awaited bail,
MLK addresses his fellow clergymen and details his reasons for nonviolent
action in Birmingham. He broadens his appeal to include the larger importance
of justice and injustice, and strengthens his claims and rationale by invoking
history. Additionally, MLK expresses his disappointment with White moderates
and the White church, noting the dearth of White allies before concluding with
a hope for solidarity and healing upon his release.
One of MLK's most famous lines is on page 87, when he declares, "Injustice
anywhere is a threat to justice everywhere." What does this line mean,
and why is it still relevant?
What rhetorical devices are at work in this letter? How does MLK garner
support for his cause?
MLK explains the four principles of a nonviolent campaign on page 87. How
was each step enacted in Birmingham?
How does MLK explain the need for direct action?
What is "constructive, nonviolent tension"? Why does MLK say it is necessary?
Read the paragraph on pages 91-92 aloud. What effect does reading it aloud
have? Look closely at the punctuation. What is the impact of using semicolons
rather than periods?
What is required to break an unjust law? Why is breaking an unjust law "expressing
the highest respect for the law"?
How does MLK strengthen his claims about civil disobedience by invoking
history? Do you think these invocations strengthen or weaken his argument?
Why is MLK disappointed with White moderates?
Think about how MLK describes time on page 98. How is time used as a metaphor?
Do you find this use of time effective?
How does MLK reconcile himself to being labeled an extremist? What other
extremists does he name? On page 101, he writes, "So the question is
not whether we will be extremists, but what kind of extremists we will be."
Reflect on this statement.
Why does MLK refuse to commend the Birmingham police, despite the clergymen's praise for them?
MLK draws on the weak and innocent (children, Black women) to evoke sympathy.
What are the reasons he does this?
Critical Thinking Activities
Socratic Seminar: What is the difference between a just and an unjust law?
Are there examples of just and unjust laws today? Explain.
Select 3-5 quotations from this chapter and illustrate them. Find examples
that relate to Birmingham and today.
Write an essay that contrasts conditions in MLK's letter to an issue and
its conditions within your world. If you were to create a form of nonviolent
resistance to address those conditions, what would you do?
Chapter VI: Black and White Together (111-128)
Summary: Released from incarceration, MLK ratchets up the intensity of civil
disobedience and NVR by involving young people, citing it as "one of the
wisest moves we made." He details "D" Day, May 2, when more than
1,000 young people demonstrated and went to jail in the Children's March. Violence
escalates in this chapter, and MLK details how protestors marched against vicious
dogs, fire hoses and police opposition, placing Birmingham firmly in the national
spotlight. With a tentative agreement to end segregation finally forged, White
opposition returns before being defeated, leaving MLK to remark, "Once
on a summer day, a dream came true. The city of Birmingham discovered a conscience."
Did MLK want to come out of prison? What ultimately compelled him to leave?
Explain MLK's decision to involve young people.
How were students recruited?
Think about the young people who participated in the movement. What words
can you come up with to describe their importance? What character qualities
do you think they had?
Describe the various ways young people got involved. Be sure to note the
ages of young people.
What role did humor play in the youth's nonviolent resistance? Why was humor
important for MLK to note?
Bull Connor was losing support from White citizens. Why was this loss of
support so amazing? What were the reasons? What did White neutrality indicate?
Describe civil contempt. Why did MLK define it as "figuratively hold[ing]
the jailhouse keys in the palm of your hand"?
Why was Burke Marshall's involvement surprising and significant?
How did the song "We Shall Overcome" calm MLK and others in the
aftermath of the bombing? What does this ability suggest about the power of
song? Of this particular song?
Critical Thinking Questions
This chapter addresses the reality of what happens when young people are
involved in effecting social change. What other contexts can you find in which
young people were integral in organizing for change? Present your findings.
Recreate the conversation between MLK and his brother on page 125. Be sure
to incorporate the emotion described by MLK.
Chapter VII: The Summer of Our Discontent (129-148)
Summary: MLK broadens his reflection about the freedom struggle of African
Americans in Birmingham. Opposition and resistance to integration continued,
with increasing violence. He notes the achievements of the movement while reminding
that numerous challenges remained, leading up to the March on Washington near
the end of the summer.
MLK begins the chapter with a graphic description of a young Black man killed
by poison gas. Why begin with such a haunting image? How does the image set
the tone for the chapter?
MLK writes on page 131, "In the summer of 1963, the Negroes of America
wrote an emancipation proclamation to themselves." What does he mean?
What was the response to the settlement by White citizens? What did these
responses indicate about the settlement? Who does MLK implicate in his condemnation
of the response by White citizens?
MLK often locates events in Birmingham within a larger historical context,
often of resistance. What is the effect of drawing on history in his argument?
Does it strengthen or weaken his argument? What is the relationship between
the historical events he recounts and Birmingham?
Is MLK optimistic near the end of the first section (pg. 135)? What does
this paragraph suggest about him as a leader? As a part of the movement?
What is the difference between a social movement and a revolution?
MLK describes the remaining challenges on page 138. Have the goals been
adjusted? Describe his outlook regarding the remaining work to be accomplished.
Why were moderates another form of resistance?
MLK uses the word "our" on page 143. How does the use of this
word extend the cause to others?
Why was the March on Washington successful?
Why was the participation of the White church significant?
Critical Thinking Activities
Research the Civil Rights Bill of 1964. What were the primary aims of the bill?
List the results of the movement on page 139. Which results do you find
most effective? Do any of these results remain today? Why or why not?
Explore the March on Washington, noting the debate held by MLK and his supporters
in section 4 of this chapter. Visit the NPR website (http://www.npr.org/news/specials/march40th/)
for coverage of the 40th anniversary of the March as well as related background information.
Who was A. Philip Randolph? Research and present information about this
important leader. Visit the A. Philip Randolph Institute (http://www.apri.org/ht/d/Home/pid/212)
for information, pictures and videos.
Chapter VIII: The Days to Come (149-182)
Summary: MLK addresses the scars of racism that remain after Birmingham. He
stresses urgency and reminds readers that African Americans cannot be denied
equal rights based on what they learned through participation in nonviolent
resistance. Refusing to allow efforts to wane and to allow African Americans
to regress in their efforts for civil rights, MLK rejects compromise and resolves
that all will continue fighting for what is owed Blacks in the United States.
He continues that America must atone for injustices suffered by Blacks, including
economic opportunities. He proposes a Disadvantaged Bill of Rights as well as
details alliances necessary to assure progress. Finally, he articulates the
implications of the Civil Rights struggle on a national and international scale
and the legacy of nonviolent resistance.
What parallels does MLK draw between slaves who purchased their freedom and African Americans in 1963?
MLK describes the difficult economic conditions of African American workers
in 1963. How do these conditions compare to African American workers today?
MLK provides an answer to the question, "What more does the Negro want?"
How does he answer this query?
What is a compromise? Why does MLK call compromising
"profane and pernicious" (p. 155)?
Why did some try to create division among Blacks? Why did MLK say these
efforts would be unsuccessful?
Reflect on the statement "Someone once wrote: 'When you are right,
you cannot be too radical; when you are wrong, you cannot be too conservative.'"
What is atonement? What would atoning for injustices suffered by Blacks involve?
What examples of programs for the deprived does MLK provide? How did these
programs benefit particular groups?
What is the Bill of Rights for the Disadvantaged? Who would benefit?
What is the condition of the White poor? Why would they also benefit from
a Bill of Rights for the Disadvantaged?
Why is Southern acceptance of racial equality slow to come?
What will the freedom movement need to do to continue its progress?
What similarities does organized labor have with Black civil rights? Why
does MLK say Blacks need to make alliances?
What is the role of the federal government?
How does MLK implicate everyone in JFK's death? Why? Do you think MLK is right to do so?
Why does MLK want Blacks to form political alliances?
Critical Thinking Questions
Design a response that answers this question, "How then can he [the
Negro] be absorbed into the mainstream of American life if we do not do something
special for him now, in order to balance the equation and equip him to compete
on a just and equal basis?" (p. 159).
Research the debate about reparations. What are they? Who wants them? Who
opposes them? How are reparations similar to what MLK suggests for African
Americans? Stage a debate about whether or not reparations are needed, and,
if so, what those reparations would entail.
Research the Wagner Act to understand why MLK wanted a similar plan.
Compare the legislation of Eisenhower to John F. Kennedy.
MLK notes a number of unsolved crimes against civil rights leaders. Who are these
people? Conduct research and create a presentation that details the people
and updates your audience on the status of solving their murders.
Culminating Essay Topics
Young people were an important catalyst in the efforts for equality in Birmingham.
Evaluate the importance of their role and decide if you think involving young
people was, indeed, the wisest decision MLK made.
What are the requisite skills needed to create social change? Are they inherent,
or can they be cultivated? What are the most important skills one must have?
What problems could be addressed by nonviolent action? Select one issue
and propose how nonviolent action could be used to solve it. Be sure to incorporate
technology and other modes of communication you use frequently, as well as
a rationale for how and why you would use each aspect of nonviolent resistance.
In addition, propose an appropriate audience to present your proposal. If
possible, work with your teacher and peers to present your proposal to that
audience for feedback and to demonstrate what you've learned.
What are the larger contributions of the Civil Rights struggle? What evidence
suggests that the Civil Rights struggle was a success? Was it a success? What
must be done to continue the Civil Rights struggle today? What are the issues
that need attention in your world?
Resources
Civil Rights Digital Library contains digital video archives and a virtual
library, including political cartoons: http://crdl.usg.edu/
Citizen King Website by PBS details MLK's life after Montgomery. The site
includes an interactive map of Civil Rights hotspots as well as more information
on his philosophical influences: http://www.pbs.org/wgbh/amex/mlk/index.html
Teaching Tolerance has an array of teacher resources and teaching ideas
appropriate for continuing to integrate principles of nonviolence and social
justice into curriculum: http://www.tolerance.org
About the Author of This Guide:
Kimberly N. Parker, Ph.D. currently teaches English at Newton North High School
in Newton, MA. She holds a Ph.D. in Curriculum and Instruction from the University
of Illinois at Urbana-Champaign and has expertise in literacy and African American
young men. Dr. Parker has taught in urban public schools in Boston and has published
articles and given professional development about the literacy practices of
young people of Color.
The process of the synthesis of RNA in cells is called transcription. Transcription is a process in which the information in DNA is copied into RNA, using complementarity similar to that used in making double-stranded DNA. Mechanistically, transcription is similar to DNA replication, particularly in the use of nucleoside triphosphate substrates and the template-directed growth of nucleic acid chains in a 5' to 3' direction. The first nucleotide of the RNA chain retains the 5'-triphosphate group, but all subsequent nucleotides added to the growing chain retain only the alpha phosphate in the phosphodiester linkage. In a differentiated eukaryotic cell, very little of the total DNA is transcribed. Even in single-celled organisms, in which virtually all of the DNA sequences can be transcribed, far fewer than half of all genes may be transcribed at any time.
The mechanisms used to select particular genes and template strands for transcription operate largely at the levels of initiation and termination of transcription, through the actions of proteins that contact DNA in a highly site-specific manner. RNAs are modified ('processed') after synthesis in many cases, particularly in eukaryotic organisms.
Three major types of RNA are found in cells: ribosomal RNA (rRNA), transfer RNA (tRNA), and messenger RNA (mRNA). The major RNA types function in ribosome structure/function, translating the genetic code, and carrying the message to be translated, respectively. mRNA is a small percentage of the bulk of total cellular RNA (1% to 3% in bacteria). A fourth type of RNA called snRNA is present in eukaryotic cells. It functions to aid in the splicing of RNAs (see below).
Type of RNA | Abbreviation | Function | Size in nucleotides
Transfer RNA | tRNA | Carries activated amino acids | about 75
Ribosomal RNA | rRNA (5S, 16S, 23S) | Structural and functional components of the ribosome | varies among the three species
Messenger RNA | mRNA | Codes for proteins | variable
Bacterial genes are organized in clusters under common regulation. These clusters are called operons and are controlled by binding of proteins to specific sequences in the DNA called regulatory elements.
RNA polymerase is an enzyme that makes RNA, using DNA as a template. RNA polymerase uses the nucleoside triphosphates, ATP, GTP, CTP, and UTP (uridine triphosphate) to make RNA. The nucleoside bases adenine, guanine, cytosine and uracil pair with the bases thymine, cytosine, guanine, and adenine, respectively, in DNA to make RNA. Like DNA polymerase, RNA polymerases catalyze polymerization of nucleotides only in the 5' to 3' direction. Unlike DNA polymerases, however, RNA polymerases do not require a primer to initiate synthesis.
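The base-pairing rules just listed lend themselves to a short illustration. The sketch below (Python; the template sequence is an invented example, and the script models only the pairing rules, not the enzyme itself) builds an RNA transcript 5' to 3' from a DNA template strand read 3' to 5'.

# Template-directed RNA synthesis: each template DNA base specifies the
# complementary ribonucleotide (DNA A -> RNA U, T -> A, G -> C, C -> G).
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5):
    """Return the RNA transcript (5'->3') for a DNA template strand read 3'->5'."""
    return "".join(DNA_TO_RNA[base] for base in template_3_to_5.upper())

# Hypothetical template strand, written 3'->5', the direction in which
# RNA polymerase reads it.
template = "TACGGCATTA"
print(transcribe(template))  # AUGCCGUAAU, synthesized 5'->3'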
Like DNA replication and (as we shall see) protein synthesis, transcription occurs in three distinct phases: initiation, elongation, and termination. Initiation and termination signals in the DNA sequence punctuate the genetic message by directing RNA polymerase to specific genes and by specifying where transcription will start, where it will stop, and which strand will be transcribed. The signals involve instructions encoded in DNA base sequences mediated by interactions between DNA and proteins. In prokaryotes, RNA polymerase finds DNA control sequences called promoters, binds to them, unwinds a short stretch of DNA and begins polymerization of RNA without the need for a primer to start the process. It also detects (by itself or with assistance of other proteins) the termination sequence to stop transcription. Proteins (called transcription factors) that bind at or near prokaryotic (and eukaryotic) promoters interact with RNA polymerase. Some transcription factors act positively (activators) to assist RNA polymerase to begin RNA synthesis at a particular site, whereas others act negatively (repressors) to inhibit the ability of RNA polymerase to act at that site.
A single RNA polymerase catalyzes the synthesis of all three E. coli RNA classes: mRNA, rRNA, and tRNA. This was shown in experiments with rifampicin, an antibiotic that inhibits prokaryotic RNA polymerase in vitro and blocks the synthesis of mRNA, rRNA, and tRNA in vivo. Another antibiotic that inhibits transcription is actinomycin D, which acts by binding specifically to double-stranded DNA and preventing it from acting as a template for transcription. Eukaryotes contain three distinct RNA polymerases, one each for the synthesis of the three larger rRNAs, mRNA, and small RNAs (tRNA plus the 5S species of rRNA). These are called RNA polymerases I, II, and III, respectively. The eukaryotic RNA polymerases differ in their sensitivity to inhibition by alpha-amanitin (α-amanitin), a toxin from the poisonous Amanita mushroom. RNA polymerase II is inhibited at low concentrations, RNA polymerase III is inhibited at high concentrations, and RNA polymerase I is quite resistant.
The maximum rate of polymerization by the DNA polymerase III holoenzyme (about 500 to 1000 nucleotides per second) is much higher than the chain growth rate for bacterial transcription by RNA polymerase (about 50 nucleotides per second), which is perhaps not surprising. Although there are only about 10 molecules of DNA polymerase III per E. coli cell, there are some 3000 molecules of RNA polymerase, of which half might be involved in transcription at any one time.
Replicative DNA chain growth is rapid but occurs at few sites, whereas transcription is much slower, but occurs at many sites. The result is that far more RNA accumulates in the cell than DNA. Like the DNA polymerase III holoenzyme, the action of RNA polymerase is highly processive. Once transcription of a gene has been initiated, RNA polymerase rarely, if ever, dissociates from the template until the specific signal to terminate has been reached.
Another important difference between DNA and RNA polymerases is the accuracy with which a template is copied. With an error rate of about 10^-5, RNA polymerase is far less accurate than replicative DNA polymerase holoenzymes, although RNA polymerase is much more accurate than would be predicted from Watson-Crick base pairing alone. Recent observations suggest the existence of error-correction mechanisms. In E. coli, two proteins, called GreA and GreB, catalyze the hydrolytic cleavage of nucleotides at the 3' ends of nascent RNA molecules. These processes may be akin to 3' exonucleolytic proofreading by DNA polymerases.
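To get a feel for these numbers, the back-of-the-envelope calculation below (Python; the 3,000-nucleotide gene length is an invented example) combines the chain growth rate of roughly 50 nucleotides per second with the error rate of about 10^-5 quoted above.

# Approximate values taken from the preceding paragraphs.
RNA_POL_RATE = 50           # nucleotides added per second (bacterial transcription)
RNA_POL_ERROR_RATE = 1e-5   # misincorporations per nucleotide

gene_length = 3000          # hypothetical gene, in nucleotides

seconds_to_transcribe = gene_length / RNA_POL_RATE
expected_errors = gene_length * RNA_POL_ERROR_RATE

print(f"about {seconds_to_transcribe:.0f} s to transcribe {gene_length} nt")  # about 60 s
print(f"about {expected_errors:.2f} errors expected per transcript")          # about 0.03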
E. coli RNA Polymerase Subunits
E. coli RNA polymerase is a multi-subunit protein. Two copies of the alpha (α) subunit are present, along with one each of β, β', and σ, giving an Mr of about 450,000 for the holoenzyme. The σ subunit helps the polymerase to identify the promoter sequence in the DNA in initiation of RNA synthesis and then dissociates. The core enzyme consists of α2ββ'. The β subunit is the target for rifampicin inhibition and, together with β', forms the catalytic site for chain elongation. The catalytic site of RNA polymerase resembles that of DNA polymerase, having two metal ions.
Promoters and Promoter Selection
In E. coli, rates of transcription initiation vary enormously, from about one initiation every 10 seconds for some genes to as infrequently as once per generation (30 to 60 minutes) for others. Because all genes in bacteria are transcribed by the same core enzyme, variations in promoter structure must be largely responsible for the great variation in the frequency of initiation. Variations in promoter structure represent a simple way for the cell to vary rates of transcription from different genes.
By analyzing DNA sequences ahead of genes, it is possible to identify common sequence features of promoter regions. For instance, near position -10 (position +1 is the start site of transcription), a common sequence motif is present in E. coli that is close to (or exactly) the sequence TATAAT on the sense strand (nontranscribed DNA strand). Another region of conserved nucleotide sequence is centered at nucleotide -35, with a consensus sequence of TTGACA. In general, the more closely these regions in a promoter resemble the consensus sequences, the more efficient that promoter is in initiating transcription.
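One crude way to see this "closeness to consensus" idea is to count matches between a candidate promoter's hexamers and the -35 and -10 consensus sequences. The sketch below (Python; the candidate hexamers are invented examples) does only naive position-by-position matching and ignores the spacing between the elements and other real determinants of promoter strength.

MINUS_35_CONSENSUS = "TTGACA"
MINUS_10_CONSENSUS = "TATAAT"

def consensus_matches(candidate, consensus):
    # Count positions at which the candidate hexamer matches the consensus.
    return sum(c == k for c, k in zip(candidate.upper(), consensus))

def score_promoter(minus_35, minus_10):
    # Crude promoter score: total matches out of the 12 consensus positions.
    return (consensus_matches(minus_35, MINUS_35_CONSENSUS)
            + consensus_matches(minus_10, MINUS_10_CONSENSUS))

# Hypothetical -35 and -10 hexamers from two promoters.
print(score_promoter("TTGACA", "TATAAT"))  # 12 of 12: perfect match, expected to be efficient
print(score_promoter("TTTACA", "TATGTT"))  # 9 of 12: weaker match, expected to be less efficient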
The σ subunit plays an important role in directing E. coli's RNA polymerase to bind to template at the proper site for initiation (the promoter site) and to select the correct strand for transcription. The addition of σ to core polymerase reduces the affinity of the enzyme for nonpromoter sites by about 10^4, thereby increasing the enzyme's specificity for binding to promoters. In at least some cases, gene expression is regulated by having core polymerase interact with different forms of σ, which would in turn direct the holoenzyme to different promoters. For example, the σ subunit with a mass of 70 kilodaltons (σ70) recognizes general promoter sequences (called consensus sequences), but when the temperature rises, a 32 kilodalton σ subunit (σ32) is synthesized. It directs the RNA polymerase to promoters of genes needed to deal with the heat shock. The promoters in both cases are in the region of DNA about 10 base pairs ahead of the starting point of polymerization by the RNA polymerase (called the transcriptional start point). These sequences are also called the -10 sequences. Other σ factors allow E. coli to respond to other environmental changes.
The first step in transcription is binding of RNA polymerase to DNA, followed by migration to the promoter.
1. RNA polymerase finds promoters by a search process, in which the holoenzyme binds nonspecifically to DNA, with low affinity, and then slides along the DNA, without dissociation from it, until it reaches a promoter sequence, to which it binds with much higher affinity. The σ factor is essential for this search, because the core enzyme does not bind to promoters more tightly than to nonpromoter sites. Binding to DNA and then moving along it reduce the complexity of the search for the promoter from three dimensions to one, just as finding a house becomes simpler once you find the street upon which that house is located.
2. The initial encounter between RNA polymerase holoenzyme and a promoter generates a closed-promoter complex. Whereas DNA strands unwind later in transcription, no unwinding is detectable in a closed-promoter complex. Footprinting studies show that polymerase contacts DNA from about nucleotide -55 to -5, where +1 represents the first DNA nucleotide to be transcribed.
3. RNA polymerase unwinds about 17 base pairs of DNA, giving an open-promoter complex, so-called because it binds DNA whose strands are open, or unwound. This highly temperature-dependent reaction occurs with half-times of about 15 seconds to 20 minutes, depending upon the structure of the promoter. A Mg2+-dependent isomerization next occurs, giving a modified form of the open-promoter complex with the unwound DNA region extending from -12 to +2. It is worth noting that negative supercoiling is consistent with unwinding of DNA. It should come as no surprise, therefore, that enzymes such as topoisomerase II (gyrase), which introduce negative supercoils into DNA, favor transcription of many genes. Interestingly, one of the genes INHIBITED by the action of topoisomerase II is the topoisomerase II gene itself. Thus, this serves as an autoregulatory system.
After RNA polymerase has bound to a promoter and formed an open-promoter complex, the enzyme is ready to initiate synthesis of an RNA chain. One nucleoside triphosphate binding site on RNA polymerase is used during elongation. It binds any of the four common ribonucleoside triphosphates (rNTPs). Another binding site is used for initiation. It binds ATP and GTP preferentially. Thus, most mRNAs have a purine at the 5' end.
1. Chain growth begins with binding of the template-specified rNTP at the initiation-specific site of RNA polymerase,
2. The next nucleotide binds at the elongation-specific site.
3. Nucleophilic attack by the 3' hydroxyl of the first nucleotide on the (inner) phosphorus of the second nucleotide generates the first phosphodiester bond and leaves an intact triphosphate moiety at the 5' position of the first nucleotide.
Most initiations are abortive, with release of oligonucleotides 2 to 9 residues long. It is not yet clear why this happens.
During transcription of the first 10 nucleotides, the σ subunit dissociates from the transcription complex, and the remainder of the transcription process is catalyzed by the core polymerase. Once σ has dissociated, the elongation complex becomes quite stable. Transcription, as studied in vitro, can no longer be inhibited by adding rifampicin after this point, and virtually all transcription events proceed to completion.
Unwinding and Rewinding of DNA
During elongation, the core enzyme moves along the duplex DNA template and simultaneously unwinds the DNA, exposing a single-strand template for base pairing with incoming nucleotides and with the nascent transcript (the most recently synthesized RNA). It also rewinds the template behind the 3' end of the growing RNA chain. About 18 base pairs of DNA are unwound to form a moving "transcription bubble." As one base pair becomes unwound in advance of the 3' end of the nascent RNA strand, one base pair becomes rewound near the trailing end of the RNA polymerase molecule. A structure within RNA polymerase separates the RNA from the DNA once about 8 base pairs at the 3' end of the nascent transcript have hybridized to the template DNA strand.
(This section is not described in your book) RNA polymerase often advances through DNA discontinuously, holding its position for several cycles of nucleotide addition and then jumping forward by several base pairs along the template. RNA polymerase "pauses" when it reaches DNA sequences that are difficult to transcribe in vitro, often sitting at the same site for several minutes before transcription is resumed. At such sites, RNA polymerase often translocates backward, and in the process the 3' end of the nascent transcript is displaced from the catalytic site of the enzyme. When this happens, a 3' "tail" is created, which may be several nucleotides long, is not base-paired to the template, and protrudes downstream of the enzyme. In order for transcription to resume, the 3' end of the RNA must be positioned in the active site of the RNA polymerase. This is evidently the main function of the RNA 3' cleavage reactions catalyzed by the GreA and GreB proteins, which have been shown to stimulate a transcript cleavage activity intrinsic to the polymerase. These observations suggest that RNA polymerase generally moves forward until one of these special sequences is reached, or perhaps until a transcription insertion error generates a DNA-RNA mispairing that weakens the hybrid and allows backtracking.
Overwinding in front of the transcription bubble (which puts in positive supercoils) is removed by the action of gyrase (also known as topoisomerase II, which puts in negative supercoils). Likewise, topoisomerase I (which relaxes negative supercoils) eliminates the underwinding (negative supercoils) behind the transcription bubble.
In bacteria, two distinct types of termination events have been identified: those that depend on the action of a protein termination factor called rho (ρ), and those that are factor-independent.
Factor-independent termination - Sequencing the 3' ends of genes that terminate in a factor-independent manner reveals the following two structural features shared by many such genes:
1. Two symmetrical GC-rich segments in the transcript have the potential to form a stem-loop structure.
2. A downstream run of four to eight A residues.
These features suggest the following as elements of the termination mechanism:
1. RNA Polymerase slows down, or pauses, when it reaches the first GC-rich segment, because the stability of G-C base pairs makes the template hard to unwind. In vitro, RNA polymerase does pause for several minutes at a GC-rich segment.
2. The pausing gives time for the complementary GC-rich parts of the nascent transcript to base-pair with one another. In the process, the downstream GC-rich segment of the transcript is displaced from its template. Hence, the complex of RNA polymerase, DNA template, and RNA is weakened. Further weakening, leading to dissociation, occurs when the A-rich segment is transcribed to give a series of AU bonds (which are very weak), linking transcript to template.
The actual mechanism of termination is more complex than just described, in part because DNA sequences both upstream and downstream of the sequence also influence termination efficiency. Moreover, not all pause sites are termination sites.
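The two structural features listed above (a GC-rich inverted repeat that can form a stem-loop, followed by a run of U residues at the 3' end of the transcript) can be turned into a rough screening heuristic. In the sketch below (Python), the stem length, the GC-content cutoff, the minimum U-run, and the test sequence are all arbitrary illustrative choices, so the function will miss or misclassify real terminators.

def reverse_complement_rna(seq):
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(seq))

def looks_like_intrinsic_terminator(rna_3prime, stem_len=5, min_u_run=4):
    # Rough check for the two features of factor-independent terminators:
    # a GC-rich, self-complementary segment followed by a run of U residues.
    seq = rna_3prime.upper().replace("T", "U")
    u_run = len(seq) - len(seq.rstrip("U"))          # feature 2: 3' U run
    if u_run < min_u_run:
        return False
    hairpin_region = seq[:-u_run]
    gc = hairpin_region.count("G") + hairpin_region.count("C")
    if gc / max(len(hairpin_region), 1) < 0.5:       # feature 1a: GC-rich
        return False
    for i in range(len(hairpin_region) - stem_len + 1):
        stem = hairpin_region[i:i + stem_len]        # feature 1b: inverted repeat
        if reverse_complement_rna(stem) in hairpin_region[i + stem_len:]:
            return True
    return False

# Invented example: GC-rich inverted repeat (CCCGG ... CCGGG) plus a 3' U run.
print(looks_like_intrinsic_terminator("AAACCCGGAUAACCGGGUUUUUUU"))  # True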
Factor-dependent termination - Factor-dependent termination sites are less frequent than factor-independent termination sites, and the mechanism of factor-dependent termination is complex. The ρ protein, a hexamer composed of identical subunits, has been characterized as an RNA-DNA helicase and contains a nucleoside triphosphatase activity that is activated by binding to polynucleotides. Apparently ρ acts by binding to the nascent transcript at a specific site near the 3' end, when RNA polymerase has paused. ρ then moves along the transcript toward the 3' end, with the helicase activity unwinding the 3' end of the transcript from the template (and/or the RNA polymerase molecule), thus causing it to be released.
Involvement of NusA protein - It is not clear what causes RNA polymerase to pause at ρ-dependent termination sites. However, the action of another protein, NusA, is somehow involved. The NusA protein evidently associates with RNA polymerase, and there is reason to believe that it binds at some point in transcription after the σ factor has dissociated, because the two purified proteins compete with each other for binding to core RNA polymerase.
Further insight into termination mechanisms has come from an extensively studied regulatory mechanism called attenuation (not described in your book). Attenuation occurs in some bacterial operons undergoing simultaneous transcription and translation. Pausing of the translational machinery in the process affects the stability of the RNA/DNA hybrid in the RNA polymerase and can terminate the synthesis of a nascent transcript before RNA polymerase has transcribed very far.
Processing of E. coli's tRNAs and rRNAs
Ribosomal RNAs (rRNAs) and transfer RNAs (tRNAs) are created by extensive post-transcriptional processing of larger primary transcripts (sometimes also called precursor RNA or pre-RNA) by exonucleolytic and endonucleolytic cleavages. Many of the original standard bases in the tRNAs are also changed into modified bases by post-transcriptional enzymatic action. Examples include ribothymidylate and pseudouridylate. Processing enzymes include Ribonuclease P (which generates the 5' terminus of all E. coli tRNAs) and Ribonuclease III (which releases the rRNA precursors of the 5S, 16S, and 23S rRNAs by cleaving at specific sites in the primary RNA). The sugars of some rRNA nucleotides are modified.
Amino acids are attached to the 3'-ends of tRNAs by the enzyme aminoacyl-tRNA synthetases to form aminoacyl-tRNAs (charged tRNAs). There are 86 tRNAs in E. coli. Most tRNAs are about 75 nucleotides long and have extensive secondary structure (base pairing interactions) as well as tertiary structure (not, in this case, supercoiling, but additional folding in three dimensional space). All tRNAs end with ...CCA at the 3'-end.
Eukaryotes and prokaryotes have similarities in both transcription and translational (protein synthesis) processes, but they have significant differences as well. For instance, transcription and translation are often occurring simultaneously for the same gene in E. coli, due to the lack of spatial separation of the processes. By contrast, in eukaryotes, transcription occurs exclusively in the nucleus and translation occurs in the cytosol. In addition, as noted above, eukaryotic RNAs differ from prokaryotic RNAs in being more heavily processed, as will be described below. This includes one major process (splicing) that does not occur in prokaryotes at all.
Eukaryotic RNA Synthesis
Unlike prokaryotes which have one RNA polymerase that makes all classes of RNA molecules, eukaryotic cells have three types of RNA polymerase (called RNA pol I, RNA pol II, and RNA pol III), and each type of RNA is made by its own polymerase:
RNA polymerase I makes ribosomal RNA (rRNA)
RNA polymerase II makes messenger RNA (mRNA)
RNA polymerase III makes transfer RNA (tRNA)
The three eukaryotic RNA polymerases vary in their sensitivity to the poison, α-amanitin, produced by the mushroom Amanita phalloides. RNA polymerase II is VERY sensitive to the toxin, whereas RNA polymerase III is less sensitive and RNA polymerase I is not very sensitive.
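Because the three polymerases respond so differently, α-amanitin sensitivity is a classic diagnostic for which polymerase transcribes a given gene. The toy function below (Python; the two boolean observations and the wording of the returned labels are invented, though the qualitative pattern follows the text) simply encodes that reasoning.

def polymerase_from_amanitin_response(inhibited_at_low_dose, inhibited_at_high_dose):
    # Infer the eukaryotic RNA polymerase from alpha-amanitin sensitivity,
    # following the qualitative pattern described above.
    if inhibited_at_low_dose:
        return "RNA polymerase II (very sensitive; inhibited at low concentrations)"
    if inhibited_at_high_dose:
        return "RNA polymerase III (inhibited only at high concentrations)"
    return "RNA polymerase I (not very sensitive)"

# Hypothetical observation: a transcript disappears only at high toxin doses.
print(polymerase_from_amanitin_response(inhibited_at_low_dose=False, inhibited_at_high_dose=True))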
RNA Polymerase I is a complex enzyme, containing 13 subunits totaling over 600,000 daltons. It is responsible for synthesizing the large 45S pre-rRNA transcript that is later processed into mature 28S, 18S, and 5.8S ribosomal RNAs (rRNAs). At least two transcription factors are known to be required, but there is no need for an elaborate transcriptional apparatus characteristic of pol II transcription, because only a single kind of gene is transcribed.
RNA Polymerase II - All of the protein-coding genes in eukaryotes are transcribed by RNA polymerase II (pol II). This enzyme also transcribes some of the small nuclear RNAs (snRNAs) involved in splicing. Like other RNA polymerases, pol II is a complex, multisubunit enzyme, but not even its numerous subunits are sufficient to allow pol II to initiate transcription on a eukaryotic promoter. To form a minimal complex capable of initiation, at least five additional protein factors are needed. The minimal unit involves the TATA-binding protein (TBP), but in vivo formation of the complex probably always uses TFIID, a multi-subunit structure incorporating both TBP and TBP-associated factors (TAFs). RNA polymerase II is partly regulated by phosphorylation of serine and threonine residues in the carboxy terminal domain.
RNA polymerase III (pol III) is the largest and most complex of the eukaryotic RNA polymerases. It involves 14 subunits, totaling 700,000 daltons. All of the genes it transcribes are small, they are not all translated into proteins, and their transcription is regulated by certain sequences that lie within the transcribed region. The major targets for pol III are the genes for all the tRNAs and for the 5S ribosomal RNA. Like the major ribosomal RNA genes, these small genes are present in multiple copies, but they are usually not grouped together in tandem arrays, nor are they localized in one region of the nucleus. Rather, they are scattered over the genome and throughout the nucleus.
Most of the modifications made on RNAs are performed on mRNAs (apart from some chemical modification of the bases in tRNAs), and these are the subject of the descriptions below.
We shall talk later in the term about control of gene expression, of which transcriptional regulation is one mechanism. Nevertheless, it is useful to understand some of the basic principles of transcriptional regulation here. As noted above, transcription starts at a sequence near a promoter. Each of the three eukaryotic RNA polymerases recognizes specific types of promoters. RNA polymerase I uses only a single type of promoter. RNA polymerase III is unusual in recognizing promoter sequences that are sometimes ahead of (in the 5' direction of) the transcriptional start site and in other genes are downstream of the transcriptional start site. RNA polymerase II promoters range from simple to complex. In each case, for all of the RNA polymerases, the promoters are on the same physical strand as the genes they control and are usually ahead of the genes they control. This property of the promoter sequence being on the same molecule as the gene it regulates is referred to as being a cis-acting element. By contrast, proteins (apart from the RNA polymerase) often bind promoter sequences and help RNA polymerase to start (or in some cases stop) transcription. These proteins are called transcription factors and are referred to as being trans-acting elements.
Like prokaryotic promoters, eukaryotic promoters are found 5' to the transcriptional start site and often contain a sequence rich in thymine and adenine called a TATA box. The TATA box is usually found between positions -30 and -100 relative to the transcriptional start site. Other sequence elements besides the TATA box are important for proper transcriptional control in eukaryotes. They include a so-called CAAT box and, in some cases, a GC box. These sequences vary somewhat in position, between -40 and -150 relative to the transcriptional start site. These variable positions in eukaryotes are possible because proteins bind these sequences and help control transcription. By contrast, E. coli has conserved sequence elements positioned fairly strictly at -10 and -35, due to the fact that these sequences are binding sites for RNA polymerase itself.
The TATA box is necessary for strong promoter activity. This sequence is bound by a protein called the TATA-box-binding protein (TBP), which is a small component of a much larger complex that helps regulate transcription. As noted above, transcription factors are proteins that bind sequences in the promoter and help control transcription. Transcription factors interacting with RNA polymerase II in eukaryotes have names like TFIIA, TFIIB, TFIID, TFIIE, and TFIIF. The 'TF' stands for 'transcription factor' and the 'II' refers to RNA polymerase II. Note that the 'transcription factors' can be complexes of many proteins. TBP is a saddle-shaped protein that recognizes the TATA box, causing the DNA to undergo a large conformational change, including unwinding. Once TBP has bound the TATA box, it helps recruit binding of other proteins/complexes. The order of binding of the transcription factors is TFIID, TFIIA, TFIIB, TFIIF, followed by binding of RNA polymerase II and then TFIIE. This complex of complexes is called the basal transcription apparatus. Notably, TFIIF is a helicase that helps separate strands to allow RNA polymerase II to bind. During the formation of the basal transcription complex, RNA polymerase is phosphorylated near its carboxyl end - a process required for initiation of transcription.
Other Transcription Factors
Initiation of transcription does not occur efficiently by the basal transcription apparatus alone. Other transcription factors bind to other sequence sites to allow mRNA synthesis to occur with high efficiency. For example, the transcription factor called Sp1 binds to promoters with GC boxes. The CAAT box is bound by the CCAAT-binding transcription factor called CTF or NF1.
Still other control over transcription of eukaryotic genes is exerted by enhancer sequences in the DNA that are bound by still other proteins. Enhancer sequences provide a considerable amount of 'fine tuning' of transcription, being active in some cell types but not others, or active at certain times but not others. Enhancer sequences differ from promoters in that 1) they cannot activate transcription by themselves, but 2) when active they increase transcription of the genes they control. Interestingly, enhancer sequences can be located up to thousands of base pairs away from the gene they regulate. They can be found on either strand relative to the gene they regulate, and they can be 5' to, 3' to, or in some cases, within the gene they regulate.
tRNA Processing (yeast)
tRNAs are modified after they are transcribed. This is necessary because the tRNA is made initially as an RNA that is longer than the final tRNA. Processing involves cleavage of the 5' leader sequence, removal of a middle sequence called an intron (see splicing description below), and replacement of the 3' UU sequence by CCA. In addition, several bases are chemically modified.
mRNAs- Prokaryotes vs. Eukaryotes
There are significant differences in the ways that messenger RNAs (mRNAs) for protein-coding genes are produced and processed in prokaryotic and eukaryotic cells.
Prokaryotes - Prokaryotic mRNAs are synthesized on the bacterial nucleoid in direct contact with the cytosol and are immediately available for translation. The Shine-Dalgarno sequence (we will talk about later in translation) near the 5' end of the mRNA binds to a site on the prokaryotic ribosomal RNA (rRNA), allowing attachment of the ribosome and initiation of translation, often even before transcription is completed.
Eukaryotes - In eukaryotes, the mRNA is produced in the nucleus and must be exported into the cytosol for translation. Furthermore, the initial product of transcription (pre-mRNA) may include sequences (called introns), which must be removed before translation can occur. There is no ribosomal attachment sequence like the Shine - Dalgarno sequence in prokaryotes. For all these reasons, eukaryotic mRNA requires extensive processing before it can be used as a protein template. This processing takes place while mRNA is still in the nucleus.
The descriptions below all refer to modifications to mRNAs in eukaryotes.
mRNA End Modifications
Capping - The first modification to mRNA occurs at the 5' end of the pre-mRNA (pre-mRNA is the term given to the raw mRNA before processing begins). A GTP residue is added in reverse orientation (that is, to form a 5' to 5' bond) and forms, together with the first two nucleotides of the chain, a structure known as a cap. The cap is "decorated" by the addition of methyl groups to the N-7 position of the guanine and to one or two sugar hydroxyl groups of the cap nucleotides. The cap structure serves to position the mRNA on the ribosome for translation. Capping occurs very early during the synthesis of eukaryotic mRNAs, even before mRNA molecules are finished being made by RNA polymerase II. Capped mRNAs are very efficiently translated by ribosomes to make proteins. Viruses, such as poliovirus, prevent capped cellular mRNAs from being translated into proteins. This enables poliovirus to take over the protein synthesizing machinery in the infected cell to make new viruses.
Polyadenylation - The 3'-ends of eukaryotic mRNAs are altered in a process called polyadenylation where about 250 A nucleotides are added to the 3' ends of mRNAs. The sequence 5'AAUAAA 3' acts as a signal to an endonuclease to cut the RNA and locate the polyA tail 11-30 nucleotides downstream. PolyA Polymerase is the enzyme responsible for adding the tail. The polyA tail appears to increase the efficiency with which translation of the message occurs and the length of the tail may be a factor in the stability of the mRNA outside of the nucleus.
RNA editing is a process affecting mitochondrial mRNAs of some unicellular eukaryotes and a few other genes, including some, such as Apolipoprotein B (apo B), which is involved in lipid transport in lipoprotein complexes in the blood. RNA editing involves changes in the sequence of the mRNA after it is made. It can involve insertion or deletion of residues into messages during processing steps or it can involve chemical modification to change one base in a sequence to a different one. Such is the case with apo B. The apo B protein is found in two forms - a 512 kd apoB-100 (in LDLs) and a 240 kD apoB-48. The larger form is made in the liver for LDL synthesis and the smaller form is made in the intestine for chylomicron synthesis. The same RNA is used to make both proteins - one edited (apoB-48) and one unedited (apoB-100). Editing occurs in intestinal cells, but not liver cells and involves chemical alteration of a single cytosine to make a uracil. The change to a uracil creates a STOP signal for translation in the middle of the gene, giving rise to the smaller apoB-48 protein. The unedited sequence does not contain the STOP signal and translation continues much further, creating the apoB100 protein.
In trypanosomes, nearly half of the uridines in some mitochondrial mRNAs are inserted into the sequence AFTER the RNA is made. Apparently, the insertions are made by a kind of reverse splicing mechanism (see below), and only at certain points. Small RNAs, called guide RNAs, are required for the process.
Eukaryotic genes often contain sequences that are not found in the final RNA (true for mRNAs, tRNAs, and rRNAs). The process of removing the intervening sequences is called splicing. The intervening sequences that are removed in splicing are called 'introns' and the sequences that remain after splicing are called 'exons.' Splicing provides cells with a simple mechanism of 'shuffling' functional domains of proteins (see alternative splicing below). Higher eukaryotes tend to have a larger percentage of their genes containing introns than lower eukaryotes, and the introns tend to be larger as well. The pattern of intron size and usage roughly follows the evolutionary tree, but this is only a general tendency. The human titin gene has the largest number of exons (178), the longest single exon (17,106 nucleotides) and the longest coding sequence (80,781 nucleotides = 26,927 amino acids). The longest primary transcript, however, is produced by the dystrophin gene (2.4 million nucleotides).
For short transcription units, RNA splicing usually follows cleavage and polyadenylation of the 3' end of the primary transcript. But for long transcription units containing multiple exons, splicing of exons in the nascent RNA sometimes begins before transcription of the gene is complete. The location of splice sites in a pre-mRNA can be determined by comparing the sequence of genomic DNA with that of the cDNA prepared from the corresponding mRNA. Sequences that are present in the genomic DNA but absent from the cDNA represent introns. Comparison of cDNA sequence to genomic DNA sequence of a large number of different mRNAs revealed short consensus sequences at intron-exon boundaries in eukaryotic pre-mRNA; in higher organisms, a pyrimidine-rich region just upstream of the 3' splice site also is common . The only completely conserved nucleotides are the (5')GU and AG(3') in the 5' and 3' ends of the intron, respectively, and the conserved branch point A.
Errors in splicing can cause problems, but do not occur often. More likely, problems arise from mutations in DNA that alter splice junction sites, as is the case for beta-thalassemia, which arises from mutation of a single base to create a new splice site where none existed previously. (Note that your book talks about splicing as if it only occurs on mRNAs, but that is not right. Splicing occurs on tRNAs and rRNAs too.)
The splicing process begins with a pre-mRNA becoming complexed with a number of small nuclear ribonucleoprotein particles (snRNPs), which are themselves complexes of small nuclear RNAs (snRNAs) and special splicing enzymes. The snRNP-pre-mRNA complex is called a spliceosome and it is in the spliceosome where splicing occurs. snRNAs recognize and bind intron-exon splice sites by means of complementary sequences. Excision of a single intron involves assembling and disassembling a spliceosome. The sequence of reactions can be summarized as follows:
After capping, poly(A) tailing, and splicing are complete, the newly formed mRNA is exported from the nucleus, almost certainly through the nuclear pores. It is then attached to ribosomes for translation.
snRNPs in Splicing - Six small U-rich RNAs are abundant in the nuclei of mammalian cells. Designated U1 through U6, these small nuclear RNAs (snRNAs) range in size from 107 to 210 nucleotides. The observation that the short consensus sequence at the 5' end of introns (CAG|GUAAGU) is complementary to a sequence near the 5' end of the snRNA called U1 led to the suggestion that snRNAs assisted in the splicing reaction. The snRNAs associate in the nucleus with six to ten proteins to form small nuclear ribonucleoprotein particles (snRNPs). Some of these proteins are common to all snRNPs, and some are specific for individual snRNPs.
It is estimated that at least one hundred proteins are involved in RNA splicing, making this process comparable in complexity to protein synthesis and initiation of transcription. Some of these splicing factors are associated with snRNPs, but others are not. Some proteins also exhibit sequence homologies to known RNA helicases. RNA helicases may be necessary for the base-pairing rearrangements that occur in snRNAs during spliceosomal splicing cycle, particularly the dissociation of U4 from U6 and of U1 from the 5' splice site.
Some gene transcripts may be spliced in different ways, in different tissues of an organism or at different developmental stages. Alternative splicing of the heavy chains of immunoglobulins results in proteins that may or may not carry a hydrophobic membrane-binding domain. The protein α-tropomyosin is used in different kinds of contractile systems in various cell types. A single gene is transcribed, but the specific splicing patterns in different tissues provide a variety of α-tropomyosins. There are three positions at which alternative choices can be made for which exon to splice in. The choice of splice site appears to be determined by a cell-specific protein that interacts with the spliceosome. The economy of alternative splicing, given the size of the genome, is significant.
Implications of Splicing
Evolutionary - Exons often coincide with protein "domains", or parts of the protein with a specific function. Splicing allows domains to be independent of each other and thus allows them to be exchanged readily over evolutionary time. This means that new types of proteins can be formed relatively easily compared to the situation in bacteria where domains are kept in a single intact unit (the polypeptide coding sequence).
Developmental Domain Switching - Splicing permits cells to "swap" exons during differential gene expression. For example, during development, some genes are spliced one way, and then spliced a different way later (see Tropomyosin expression above). Changing the way a mRNA is spliced changes the amino acid sequence in the protein made from it, so cells can, in this way, "modify" the sequence, and function, of a protein as the needs of a cell change. Splicing thus offers yet another opportunity for regulation of gene expression in eukaryotic cells besides that of transcriptional control.
A remarkable ability of some RNAs to catalyze reactions (like enzymes) was discovered in connection with the splicing and processing of tRNAs. Such catalytic RNAs are known as ribozymes, and the list of catalytic RNAs has grown to include the catalytic activity in ribosomes responsible for making peptide bonds. The ribozyme involved in tRNA processing is known as ribonuclease P and it removes nucleotides from the 5' end of the precursor molecule. Another interesting catalytic RNA activity is that involving a self-splicing intron of Tetrahymena (a ciliated protozoan). The process requires only an added guanosine residue. In the mechanism, G binds to the RNA and attacks the 5' splice site to form a phosphodiester bond with the 5' end of the intron, generating a 3' hydroxyl at the end of the upstream exon that attacks the 3' splice site. This second reaction joins the exons together and leads to release of the intron. The folding of the RNA in the intron is important for its function (not unlike the situation in enzymes, where the folding of the protein is essential for its function).
Self-splicing is now known to occur in introns of other species, including yeast, fungi, and Chlamydomonas. Group I self-splicing introns are mediated by a guanosine factor, whereas Group II self-splicing RNAs require the 2'-hydroxyl group of an adenylate in the intron. Self-splicing resembles splicing by spliceosomes in that the first step involves attack of the 5' splice site by a ribose hydroxyl. The newly formed 3'-hydroxyl then attacks the 3' splice junction to form a phosphodiester bond with the downstream exon. Group II splicing is more closely related to spliceosome splicing and may be an intermediate process between the two.
This section presents the mark-and-sweep garbage collection algorithm. The mark-and-sweep algorithm was the first garbage collection algorithm to be developed that is able to reclaim cyclic data structures. Variations of the mark-and-sweep algorithm continue to be among the most commonly used garbage collection techniques.
When using mark-and-sweep, unreferenced objects are not reclaimed immediately. Instead, garbage is allowed to accumulate until all available memory has been exhausted. When that happens, the execution of the program is suspended temporarily while the mark-and-sweep algorithm collects all the garbage. Once all unreferenced objects have been reclaimed, the normal execution of the program can resume.
The mark-and-sweep algorithm is called a tracing garbage collector because it traces out the entire collection of objects that are directly or indirectly accessible by the program. The objects that a program can access directly are those objects which are referenced by local variables on the processor stack as well as by any static variables that refer to objects. In the context of garbage collection, these variables are called the roots. An object is indirectly accessible if it is referenced by a field in some other (directly or indirectly) accessible object. An accessible object is said to be live. Conversely, an object which is not live is garbage.
The mark-and-sweep algorithm consists of two phases: In the first phase, it finds and marks all accessible objects. The first phase is called the mark phase. In the second phase, the garbage collection algorithm scans through the heap and reclaims all the unmarked objects. The second phase is called the sweep phase. The algorithm can be expressed as follows:
    for each root variable r
        mark (r);
    sweep ();
In order to distinguish the live objects from garbage, we record the state of an object in each object. That is, we add a special boolean field to each object called, say, marked. By default, all objects are unmarked when they are created. Thus, the marked field is initially false.
An object p and all the objects indirectly accessible from p can be marked by using the following recursive mark method:
    void mark (Object p)
    {
        if (!p.marked)
        {
            p.marked = true;
            for each Object q referenced by p
                mark (q);
        }
    }

Notice that this recursive mark algorithm does nothing when it encounters an object that has already been marked. Consequently, the algorithm is guaranteed to terminate. And it terminates only when all accessible objects have been marked.
In its second phase, the mark-and-sweep algorithm scans through all the objects in the heap, in order to locate all the unmarked objects. The storage allocated to the unmarked objects is reclaimed during the scan. At the same time, the marked field on every live object is set back to false in preparation for the next invocation of the mark-and-sweep garbage collection algorithm:
    void sweep ()
    {
        for each Object p in the heap
            if (p.marked)
                p.marked = false;
            else
                heap.release (p);
    }
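To make the two phases concrete, here is a small runnable Python simulation of the same idea on a toy object graph. It is an illustration added to these notes rather than part of the original text; the class name Obj and the fields refs and marked are invented for the example.

    class Obj:
        def __init__(self, name):
            self.name = name
            self.refs = []       # outgoing references to other objects
            self.marked = False  # mark bit, false by default

    def mark(obj):
        # Phase 1: recursively mark every object reachable from obj.
        if not obj.marked:
            obj.marked = True
            for ref in obj.refs:
                mark(ref)

    def sweep(heap):
        # Phase 2: keep marked objects (clearing the bit), reclaim the rest.
        live = []
        for obj in heap:
            if obj.marked:
                obj.marked = False
                live.append(obj)
        return live

    a, b, c, d = Obj("a"), Obj("b"), Obj("c"), Obj("d")
    a.refs = [b]
    c.refs = [d]
    d.refs = [c]                 # c and d form an unreachable cycle
    roots, heap = [a], [a, b, c, d]
    for r in roots:
        mark(r)
    heap = sweep(heap)
    print([o.name for o in heap])  # ['a', 'b'] -- the unreachable cycle is collected

Even though c and d reference each other, neither is reachable from a root, so both are reclaimed; this is exactly the case that defeats reference counting.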
Figure illustrates the operation of the mark-and-sweep garbage collection algorithm. Figure (a) shows the conditions before garbage collection begins. In this example, there is a single root variable. Figure (b) shows the effect of the mark phase of the algorithm. At this point, all live objects have been marked. Finally, Figure (c) shows the objects left after the sweep phase has been completed. Only live objects remain in memory and the marked fields have all been set to false again.
Figure: Mark-and-sweep garbage collection.
Because the mark-and-sweep garbage collection algorithm traces out the set of objects accessible from the roots, it is able to correctly identify and collect garbage even in the presence of reference cycles. This is the main advantage of mark-and-sweep over the reference counting technique presented in the preceding section. A secondary benefit of the mark-and-sweep approach is that the normal manipulations of reference variables incur no overhead.
The main disadvantage of the mark-and-sweep approach is the fact that normal program execution is suspended while the garbage collection algorithm runs. In particular, this can be a problem in a program that interacts with a human user or that must satisfy real-time execution constraints. For example, an interactive application that uses mark-and-sweep garbage collection becomes unresponsive periodically.
Part I - Introducing LSAT Logical Reasoning :: The Terrain
The Format Of Logical Reasoning
Logical Reasoning consists of two of the four scored sections on the LSAT. Each section will have approximately twenty-five questions.
Nature Of The Stimulus – Mostly Arguments
Most of the questions are based on "arguments". What is the definition of "argument" that is used by LSAT? According to LSAT arguments are: "… sets of statements that present evidence and draw a conclusion based on that evidence."
Now, a bit of vocabulary:
Premises: The set of statements that present evidence, which are offered as justification for the conclusion, are called "premises".
The language of "Premise Indicators":
Words like "because", "since", "for the reason that", etc., do indicate justification. For that reason they are often indicative of "premises".
On the LSAT, all "premises" are offered as being true. There is no attempt to justify or support them. Hence, when answering LSAT questions, you just assume that premises are true.
Conclusion: The main point – the position that the "premises" are in support of is called the "conclusion".
The language of "Conclusion Indicators":
Words like "hence", "therefore", "thus", "it follows that", etc., indicate a "conclusion". For this reason they are often "conclusion indicators".
(A word of caution: since arguments can have multiple conclusions, a "conclusion indicator" may not be an indicator of the main conclusion! Please read carefully.)
In contrast to "premises", "conclusions" are not assumed to be true. They are offered as being somehow justified by the "premises".
Getting A Bit Ahead Of Ourselves For The Moment – Two Additional Points About "Premises" and "Conclusions"
First – it is a mistake to think that the "conclusion" or "premises" appear in any particular location in an argument. For example, although the "conclusion" is often at the end, it doesn’t have to be.
Second – arguments may have more than one conclusion. The first conclusion may actually operate as a premise for a second conclusion.
The extent to which the "premises" justify the "conclusion", is NOT ASSUMED, but is open to challenge. It is the analysis of the relationship between the premises and the conclusion that is at the heart of most of these questions. LSAT calls this the process of evaluating:
"How The Argument Goes". Part II - What Are You Asked To Do With The Arguments? Determine "How The Argument Goes"
"Once you have identified the premises and the conclusion, the next is to get clear about exactly how the argument is meant to go; that is, how the grounds offered for the conclusion are actually supposed to bear on the conclusion. Understanding how the argument goes is a crucial step in answering many questions that appear on the LSAT. This includes questions that ask you to identify a reasoning technique used within an argument, questions that require you to match the patterning of reasoning used in two separate arguments and a variety of other question types.
Determining how the argument goes involves discerning how the premises are supposed to support the overall conclusion."
• page 16 "The Official LSAT SuperPrep."
You will notice that this is very non-technical language. That is deliberate. LSAT cannot use language that would require a specific academic background to understand.
How The Argument Goes – A Three Dimensional Analysis
Dimension 1: The Argument or Passage;
Dimension 2: The Questions;
Dimension 3: The Answer Choices
Every question involves analyzing the interplay among these three dimensions.
Dimension 1: LSAT Logical Reasoning Arguments – What Are They About?
Most of the questions are based on arguments. An argument always has a conclusion which is justified because of one or more premise(s). When you read the argument ask:
In summary: when you read the argument simply ask: What is being said and why?
Arguments or Passage – The Basic Skills
If it is an argument – you must be able to:
If the passage is NOT an argument you must be able to understand the primary purpose/main point of the passage.
Dimension 2: LSAT Logical Reasoning Questions – What Are They About? Question Types vs. Question Groups
For each question type you must understand what exactly you are being asked to do. (This does not mean that you must put the argument into a specific category.) In other words, what aspect of "how the argument goes" are you asked to respond to?
There are many different Logical Reasoning question types. It is always easier to think in terms of smaller groups than larger groups. The large number of questions types can be broken into a smaller number of question groups.
Although there is considerable overlap, the vast majority of the actual LSAT questions fall into one of five groups.
Group 1 – Objective Description
These questions will ask you to identify:
Group 1 questions ask you to identify objective aspects of the argument but do not require you to make a judgement about its persuasive value.
Group 2 – Effectiveness And Persuasiveness – Questions That Ask You To Identify What Would Make The Argument More Or Less Persuasive
These questions will ask you to assess:
Group 2 questions will require you to assess the persuasive value of the argument. They often ask which answer choice would "strengthen" or "weaken" the above argument. They are related to Group 4 questions, which often use the language of "assumptions".
Group 3 – Questions That Ask You To Determine What Follows From The Argument Or Passage
These questions will ask you to determine:
Group 3 questions may or not be based on arguments.
Group 4 – Questions That Ask You To Identify What Additional Information/Premise Plays A Role In Ensuring The Conclusion Must From The Premise(s)
These questions will ask you to:
Group 4 questions are related to Group 2 questions but use the language of "assumptions". Group 2 questions often use the language of "strengthen" or "weaken".
Group 5 – Questions That Ask About Principles
These questions will ask you to:
Dimension 3: LSAT Logical Reasoning Answer Choices – What Are They About?
This is where the action is. The LSAT is multiple-choice. Your job is to recognise or identify the answer. LSAT will not make this easy. In fact, the job of LSAT is to:
"Attract you to answer choices that are wrong, and repel you from answer choices that are right!"
To facilitate this goal, LSAT has developed a large number of disguises. Some of these disguises are based on content, some are based on application and some are based on format. What follows are some comments on each:
Content Based Disguises – Some Examples:
Format Based Disguises – Some Examples:
Application Based Disguises – Some Explanation:
In my experience these are by far the hardest for students to see. When reading an answer choice, you must read the language of the answer choice very carefully. But you must also go well beyond that careful reading.
Ask yourself: if the information in this answer choice were true, how would that affect the answer to the question being asked!
Our LSATtutoring.com Tutorial Series has been designed to explore many of these topics.
From Math Images
Euclid's Method to find the gcd
- This image shows Euclid's method to find the greatest common divisor (gcd) of two integers. The greatest common divisor of two numbers a and b is the largest integer that divides the numbers without a remainder.
- Here I use 52 and 36 as an example to show you how Euclid found the gcd, so you have a sense of the Euclidean algorithm in advance. As you have probably noticed already, Euclid uses lines, defined as multiples of a common unit length, to represent numbers. First, use the smaller integer of the two, 36, to divide the bigger one, 52. Use the remainder of this division, 16, to divide 36 and you get the remainder 4. Now divide the last divisor, 16, by 4 and you find that they divide exactly. Therefore, 4 is the greatest common divisor. For every two integers, you will get the gcd by repeating the same process until there is no remainder.
- You may have many questions so far: "What is going on here?" "Are you sure that 4 is the gcd of 52 and 36?" Don't worry. We will talk about them precisely later. This brief explanation is just to preheat your enthusiasm for Euclidean Algorithm! It is amazing to see that he explains and proves his algorithm relying on visual graphs, which is different from how we treat number theory now.
Basic Description
We all know that 1 divides every number and that no positive integer is smaller than 1, so 1 is the smallest common divisor for any two integers a and b. Then what about the greatest common divisor? Finding the gcd is not as easy as finding the smallest common divisor.
When asked to find the gcd of two integers, one possible approach is to prime factorize each integer and see which factors are common between the two, or we could simply try different numbers and see which one works. However, both approaches become very laborious and time-consuming as the two integers grow large.
About 2000 years ago, Euclid, one of the greatest mathematicians of Greece, devised a fairly simple and efficient algorithm to determine the gcd of two integers, which is now considered one of the most efficient and well-known early algorithms in the world. The Euclidean algorithm hasn't changed in 2000 years and has always been the basis of Euclid's number theory.
Euclidean algorithm (also known as Euclid’s algorithm) describes a procedure for finding the greatest common divisor of two positive integers. This method is recorded in Euclid’s Elements Book VII. This book contains the foundation of number theory for which Euclid is famous.
The Euclidean algorithm comes in handy with computers because large numbers are hard to factor but relatively easy to divide. It is used in many other places and we’ll talk about its applications later.
A More Mathematical Explanation
- Note: understanding of this explanation requires: *Number Theory, Algebra
The Description of Euclidean Algorithm
Mathematical definitions and their abbreviations
- a mod b is the remainder when a is divided by b (where mod = modulo).
- Example: 7 mod 4 = 3; 4 mod 2 = 0; 5 mod 9 = 5
- a | b means a divides b exactly, or b is divided by a without any remainder.
- Example: 3 | 6; 4 | 16
- gcd means the greatest common divisor, also called the greatest common factor (gcf), the highest common factor (hcf), and the greatest common measure (gcm).
- gcd(a, b) means the gcd of two positive integers a and b.
Keep those abbreviations in mind; you will see them a lot later.
The Euclidean Algorithm is based on the following theorem:
- Theorem: gcd(a, b) = gcd(b, r), where r = a mod b and a > b > 0.
- Proof: Since r = a mod b, a could be denoted as a = qb + r with 0 ≤ r < b. Then r = a - qb. Assume d is a common divisor of a and b, thus d | a and d | b, or we could write them as a = dm, b = dn. Because of r = a - qb = dm - q × dn = d(m - qn), we will get d | r. Therefore d is also a common divisor of b and r. Conversely, any common divisor of b and r also divides a = qb + r, so it is a common divisor of a and b. Hence, the pairs (a, b) and (b, r) have the same common divisors, and so they have the same greatest common divisor.
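This theorem translates directly into a short recursive routine. The following Python sketch is an illustration added here; the function name gcd and the sample call are not part of the original article.

    def gcd(a, b):
        # gcd(a, b) = gcd(b, a mod b), and gcd(a, 0) = a
        return a if b == 0 else gcd(b, a % b)

    print(gcd(52, 36))  # prints 4, matching the 52-and-36 example above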
The description of the Euclidean algorithm is as follows:
- Input two positive integers, a,b (a > b)
- Output g, the gcd of a, b
- Internal Computation
- Divide a by b and get the remainder r.
- If r = 0, report b as the gcd of a and b. If r ≠ 0, replace a by b and replace b by r. Go back to the previous step.
The algorithm process is like this:
- a = q_1 × b + r_1, with 0 ≤ r_1 < b
- b = q_2 × r_1 + r_2, with 0 ≤ r_2 < r_1
- r_1 = q_3 × r_2 + r_3, with 0 ≤ r_3 < r_2
- ... ...
- r_(n-2) = q_n × r_(n-1) + r_n, with 0 < r_n < r_(n-1)
- r_(n-1) = q_(n+1) × r_n + 0
To sum up, r_n, the last non-zero remainder, is the gcd of a and b.
Note: The Euclidean algorithm is iterative, meaning that the next step is repeated using the result from the last step until it reaches the end.
An example will make the Euclidean algorithm clearer. Let's say we want to know the gcd of 168 and 64.
168 = 2 × 64 + 40
64 = 1 × 40 + 24
40 = 1 × 24 + 16
24 = 1 × 16 + 8
16 = 2 × 8
gcd(168, 64) = gcd(64, 40) = gcd(40, 24) = gcd(24, 16) = gcd(16, 8) = 8
Therefore, 8 is the gcd of 168 and 64.
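For readers who prefer code to hand computation, here is a small iterative Python sketch (added for illustration; the name gcd_steps is my own) that prints exactly the division steps listed above:

    def gcd_steps(a, b):
        # Repeatedly divide, printing each step, until the remainder is 0.
        while b != 0:
            q, r = divmod(a, b)
            print(f"{a} = {q} x {b} + {r}")
            a, b = b, r
        return a

    print("gcd =", gcd_steps(168, 64))  # prints the five divisions, then gcd = 8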
Proof of the Euclidean Algorithm
In order to prove that Euclidean algorithm works, the first thing is to show that the number we get from this algorithm is a common divisor of a and b. Then we will show that it is the greatest. Recall that
- a = q_1 × b + r_1
- b = q_2 × r_1 + r_2
- r_1 = q_3 × r_2 + r_3
- ... ...
- r_(n-2) = q_n × r_(n-1) + r_n
- r_(n-1) = q_(n+1) × r_n
Based on the last two equations, we substitute q_(n+1) × r_n for r_(n-1) in the second to last equation, so that r_(n-2) = q_n × q_(n+1) × r_n + r_n = (q_n × q_(n+1) + 1) × r_n.
Thus we have r_n | r_(n-1) and r_n | r_(n-2).
From the equation before those two, r_(n-3) = q_(n-1) × r_(n-2) + r_(n-1), we repeat the steps we did just now and get r_n | r_(n-3).
Now we know r_n | r_(n-1), r_n | r_(n-2) and r_n | r_(n-3).
Continue this process and we will find that r_n | b and r_n | a, so r_n, the number we get from the Euclidean algorithm, is indeed a common divisor of a and b.
Second, we need to show that r_n is the greatest among all the common divisors of a and b. To show that r_n is the greatest, let's assume that there is another common divisor of a and b, d, where d is a positive integer. Then we could rewrite a and b as a = dm, b = dn, where m and n are also positive integers. This second part of the proof is going to be similar to the first part because they both repeat the same steps and eventually get the result, but this time we start from the first equation of the Euclidean algorithm:
Because r_1 = a - q_1 × b, therefore r_1 = dm - q_1 × dn = d(m - q_1 × n) (substitute dm for a and dn for b).
Therefore, d | r_1. Let r_1 = d × t_1.
Consider the second equation and solve for r_2 in the same way. Because r_2 = b - q_2 × r_1, therefore r_2 = dn - q_2 × d × t_1 = d(n - q_2 × t_1). Thus, d | r_2.
Continuing the process until we reach the last equation, we will get d | r_n. Since we picked d to represent any possible common divisor of a and b, this means that r_n is divisible by every common divisor of a and b, and so r_n must be greater than or equal to all the other common divisors. Therefore, the number we get from the Euclidean Algorithm, r_n, is indeed the greatest common divisor of a and b.
Euclid's method of finding the gcd is based on several definitions. First, I quote the first 15 definitions in Book VII of his Elements.
- 1. A unit is that by virtue of which each of the things that exist is called one.
- 2. A number is a multitude composed of units.
- 3. A number is a part of a number, the less of the greater, when it measures the greater.
- 4. but parts when it does not measure it.
- 5. The greater number is a multiple of the less when it is measured by the less.
- 6. An even number is that which is divisible into two equal parts.
- 7. An odd number is that which is not divisible into two equal parts, or that which differs by an unit from an even number.
- 8. An even-times even number is that which is measured by an even number according to an even number.
- 9. An even-times odd number is that which is measured by an even number according to an odd number.
- 10. An odd-times odd number is that which is measured by an odd number according to an odd number.
- 11. A prime number is that which is measured by an unit alone.
- 12. Numbers prime to one another are those which are measured by an unit alone as a common measure.
- 13. A composite number is that which is measured by some number.
- 14. Numbers composite to one another are those which are measured by some number as a common measure.
- 15. A number is said to multiply a number when that which is multiplied is added to itself as many times as there are units in the other, and thus some number is produced.
- In short, Euclid's one unit is the number 1 in algebra. He uses lines to represent numbers; the longer the line the bigger the number.
- In Def.3, "measure" means "divide."
- Two unequal numbers being set out, and the less being continually subtracted in turn from the greater, if the number which is left never measures the one before it until an unit is left, the original numbers will be prime to one another.
- For, the less of two unequal numbers AB, CD being continually subtracted from the greater, let the number which is left never measure the one before it until an unit is left;
I say that AB, CD are prime to one another, that is, that an unit alone measures AB, CD.
- For, if AB, CD are not prime to one another, some number will measure them.
- Let a number measure them, and let it be E; let CD, measuring BF, leave FA less than itself,
let AF, measuring DG, leave GC less than itself, and let GC, measuring FH, leave an unit HA.
- Since, then, E measures CD, and CD measure BF, therefore E also measures BF.
- But it also measures the whole BA;
therefore it will also measure the remainder AF.
- But AF measures DG;
therefore E also measures DG.
- But it also measures the whole DC;
therefore it will also measure the remainder CG.
- But CG measures FH;
therefore E also measures FH.
- But it also measures the whole FA;
therefore it will also measure the remainder, the unit AH, though it is a number: which is impossible.
- Therefore no number will measure the numbers AB, CD; therefore AB, CD are prime to one another. [VII.Def.12] Q.E.D.
Let's write Euclid's proof in several equations. Assume a > b; then
- a = q_1 × b + r
- b = q_2 × r + t
- r = q_3 × t + 1
Assume a and b have a common measure e (e > 1); then e measures r based on the first equation and t based on the second equation. Hence, by the third equation, e measures 1, but e cannot measure (divide) 1. Therefore, a and b are prime to each other.
- Given two numbers not prime to one another, to find their greatest common measure.
- Let AB, CD be the two given numbers not prime to one another.
- Thus it is required to find the greatest common measure of AB, CD.
- If now CD measures AB - and it also measures itself - CD is a common measure of CD, AB.
- And it is manifest that it is also the greatest; for no greater number than CD will measure CD.
- But, if CD does not measure AB, then, the less of the numbers AB, CD being continually subtracted from the greater, some number will be left which will measure the one before it.
- For an unit will not be left; otherwise AB, CD will be prime to one another [VII, I], which is contrary to the hypothesis.
- Therefore, some number will be left which will measure the one before it.
- Now let CD, measuring BE, leave EA less than itself, let EA, measuring DF, leave FC less than itself, and let CF measure AE.
- Since then, CF measures AE, and AE measures DF,
therefore CF will also measure DF.
- But it also measures itself;
therefore it will also measure the whole CD.
- But CD measures BE;
therefore CF also measures BE.
- But it also measures EA;
therefore it will also measure the whole BA.
- But it also measures CD;
therefore CF measures AB, CD.
- Therefore CF is a common measure of AB, CD.
- I say next that it is also the greatest.
- For, if CF is not the greatest common measure of AB, CD, some number which is greater than CF will measure the numbers AB, CD.
- Let such a number measure them, and let it be G.
- Now, since G measures CD, while CD measures BE, G also measures BE.
- But it also measures the whole BA;
therefore it will also measure the remainder AE.
- But AE measures DF;
therefore G will also measure DF.
- But it also measures the whole DC;
therefore it will also measure the remainder CF, that is, the greater will measure the less: which is impossible.
- Therefore no number which is greater than CF will measure the number AB, CD;
- therefore CF is the greatest common measure of AB, CD.
PORISM. From this it is manifest that, if a number measure two numbers, it will also measure their greatest common measure. Q.E.D
Prop.2 is pretty self-explanatory, proved in a similar way as Prop.1.
Comparing the modern proof with Euclid's proof, it is not hard to notice that the modern proof is more about algebra, while Euclid did his proof of his algorithm using geometry because algebra had not been invented yet. However, the main idea is pretty much the same. They both prove that the result is a common divisor first and then show that it is the biggest common divisor.
Extended Euclidean Algorithm
Extend the Euclidean algorithm and you will be able to solve Bézout's identity for x and y when d = gcd(a, b): ax + by = gcd(a, b).
Note: Usually either x or y will be negative since a, b and gcd(a, b) are positive and a,b are bigger than gcd(a, b) more often than not.
- a = q_1 × b + r_1
- b = q_2 × r_1 + r_2
- ... ...
- r_(n-3) = q_(n-1) × r_(n-2) + r_(n-1)
- r_(n-2) = q_n × r_(n-1) + r_n
- r_(n-1) = q_(n+1) × r_n
Solve for r_n using the second to last equation and we get:
r_n = r_(n-2) - q_n × r_(n-1)
Now let's solve the previous equation for r_(n-1) in the same way:
r_(n-1) = r_(n-3) - q_(n-1) × r_(n-2), so r_n = r_(n-2) - q_n × (r_(n-3) - q_(n-1) × r_(n-2)) = (1 + q_n × q_(n-1)) × r_(n-2) - q_n × r_(n-3)
Now you can see gcd(a, b) is expressed by a linear combination of r_(n-2) and r_(n-3) based on the last two equations. If we continue this process by using the previous equations from the list above, we could get a linear combination of earlier and earlier remainders, each remainder being replaced by an expression in the two before it. If we keep going like this till we hit the first equation, we can express gcd(a, b) as a linear combination of a and b, which is what we intend to do.
The description of the extended Euclidean algorithm:
Input: Two non-negative integers a and b (a ≥ b).
Output: d = gcd(a, b) and integers x and y satisfying ax + by = d.
- If b = 0, set d = a, x = 1, y = 0, and return (d, x, y).
- If not, set x_2 = 1, x_1 = 0, y_2 = 0, y_1 = 1.
- While b > 0, do: q = floor(a / b), r = a - q × b, x = x_2 - q × x_1, y = y_2 - q × y_1; then set a = b, b = r, x_2 = x_1, x_1 = x, y_2 = y_1, y_1 = y.
- Set d = a, x = x_2, y = y_2, and return (d, x, y).
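A minimal Python sketch of the same idea is given below. It is an illustration added here and uses a recursive formulation instead of the x_1/x_2 bookkeeping above, so the function and variable names are my own.

    def extended_gcd(a, b):
        # Returns (d, x, y) with d = gcd(a, b) and a*x + b*y = d.
        if b == 0:
            return a, 1, 0
        d, x1, y1 = extended_gcd(b, a % b)
        # gcd(a, b) = gcd(b, a mod b); back-substitute to update the coefficients.
        return d, y1, x1 - (a // b) * y1

    print(extended_gcd(168, 64))  # (8, -3, 8), i.e. 168*(-3) + 64*8 = 8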
This linear equation is going to be very complicated with all these notations, so it is much easier to understand with an example:
Solve for integers x and y such that 168x + 64y = 8.
- Apply Euclidean algorithm to compute gcd(168, 64):
- 168 = 2 × 64 + 40
- 64 = 1 × 40 + 24
- 40 = 1 × 24 + 16
- 24 = 1 × 16 + 8
- 16 = 2 × 8 + 0
- Use the extended Euclidean algorithm to get x and y:
From the fourth equation we get
- 8 = 24 - 1 × 16.
From the third equation we get
- 16 = 40 - 1 × 24.
- 8 = 24 - 1 × (40 - 1 × 24)
- 8 = 24 - 1 × 40 + 1 × 24
- 8 = 2 × 24 - 1 × 40
- Do the same steps for the second equation: 24 = 64 - 1 × 40
- 8 = 2 × (64 - 1 × 40) - 1 × 40
- 8 = 2 × 64 - 3 × 40
From the first equation we get 40 = 168 - 2 × 64
Therefore, 8 = 2 × 64 - 3 × (168 - 2 × 64)
- 8 = -3 × 168 + 8 × 64
- x = -3, y = 8
The Euclidean algorithm makes it elegantly easy to compute the two Bézout's coefficients.
Number of Steps - Lamé's Theorem
Gabriel Lamé was the first person to show a bound on the number of steps required by the Euclidean algorithm. His theorem states that the number of steps in the Euclidean algorithm for gcd(a,b) is at most five times the number of digits of the smaller number b. Thus, the Euclidean algorithm is linear-time in the number of digits in b.
Recall the division equations from the Euclidean algorithm:
- a = q_1 × b + r_1
- b = q_2 × r_1 + r_2
- r_1 = q_3 × r_2 + r_3
- ... ...
- r_(n-2) = q_n × r_(n-1) + r_n
- r_(n-1) = q_(n+1) × r_n
The number of steps is n+1.
a and b are integers and we assume a is bigger than b, so a > b ≥ 1. The Fibonacci Numbers are 1, 1, 2, 3, 5, 8, 13, ..., where every later number is the sum of the two previous numbers. Denote by F_i the ith Fibonacci number (i.e. F_1 = 1, F_2 = 1, F_3 = 2, F_4 = 3, ...). Note that all the numbers in the division equations, the quotients q_i and the remainders r_i, are positive integers.
- r_n couldn't be 0, as otherwise all the remainders would be 0. Hence, r_n ≥ 1 = F_2.
- From the last equation r_(n-1) = q_(n+1) × r_n, we know that r_(n-1) > r_n. Thus, q_(n+1) should be bigger than 1: q_(n+1) ≥ 2. Therefore, r_(n-1) ≥ 2 × r_n ≥ 2 = F_3.
- For the integer q_n, we must have q_n ≥ 1. Thus, r_(n-2) ≥ r_(n-1) + r_n. Since r_(n-1) ≥ F_3 and r_n ≥ F_2, we have r_(n-2) ≥ F_3 + F_2 = F_4.
So far, we have three conclusions: r_n ≥ F_2, r_(n-1) ≥ F_3 and r_(n-2) ≥ F_4.
According to induction, r_(n-i) ≥ F_(i+2) for each i, and in particular b ≥ r_1 + r_2 ≥ F_(n+1) + F_n = F_(n+2).
A theorem about the lower bound of Fibonacci numbers states that F_(i+2) ≥ φ^i for all integers i ≥ 0, where φ = (1 + √5)/2 is the Golden Ratio (it satisfies φ^2 = φ + 1, the sum of the Golden Ratio and 1). Therefore b ≥ φ^n.
Assume b has k digits, so b < 10^k. Then φ^n ≤ b < 10^k, which gives n × log_10(φ) < k. Since log_10(φ) ≈ 0.209 > 1/5, we get n < 5k.
Therefore, the number of steps, n + 1, is at most 5k. The number of steps required by the Euclidean algorithm for gcd(a,b) is no more than five times the number of digits of b.
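The bound is easy to check experimentally. The short Python sketch below is added for illustration (the function name is invented); it counts the division steps and compares them with five times the number of digits of b.

    def euclid_step_count(a, b):
        # Count how many divisions the Euclidean algorithm performs.
        steps = 0
        while b != 0:
            a, b = b, a % b
            steps += 1
        return steps

    a, b = 168, 64
    print(euclid_step_count(a, b), "steps; Lame's bound is", 5 * len(str(b)))  # 5 steps; bound is 10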
Shortcomings of the Euclidean Algorithm
The Euclidean algorithm is an ancient but good and simple algorithm to find the gcd of two nonnegative integers; it is well designed both theoretically and practically. Due to its simplicity, it is widely applied in many industries today. However, when dealing with really big integers (prime numbers over 64 digits in particular), finding the right quotients using the Euclidean algorithm adds to the time of computation for modern computers.
Stein's algorithm (also known as the binary GCD algorithm) is also an algorithm to compute the gcd of two nonnegative integers brought forward by J. Stein in 1967. This alternative is made to enhance the efficiency of the Euclidean algorithm, because it replaces complicated division and multiplication with addition, subtraction and shifts, which make it easier for the CPU to compute large integers.
The algorithm has the following conclusions:
- gcd(m, 0) = m, gcd(0, m) = m. This is because every non-zero number divides 0, and m is the biggest number that can divide itself.
- If e and f are both even integers, then gcd(e, f) = 2 × gcd(e/2, f/2), because 2 is definitely a common divisor of two even integers.
- If e is even and f is odd, then gcd(e, f) = gcd(e/2, f), because 2 is definitely not a common divisor of an even integer and an odd integer.
- Otherwise both are odd and gcd(e, f) = gcd(|e - f|/2, the smaller one of e and f). This follows from the Euclidean algorithm, since the gcd of e and f also divides their difference; the division by 2 results in an integer because the difference of two odd integers is even.
The description of Stein's algorithm:
Input: two non-negative integers u and v
Output: g = gcd(u, v)
- g = 1.
- While both u and v are even integers, do u = u/2, v = v/2, g = 2 × g.
- While u ≠ 0, do:
- While u is even, do: u = u/2.
- While v is even, do: v = v/2.
- Set t = |u - v| / 2. If u ≥ v, u = t; else, v = t.
- Return (g × v)
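Below is a small Python sketch of the binary GCD (an illustration added here; integer division by 2 stands in for the bit shifts a real implementation would use, and the function name is my own).

    def binary_gcd(u, v):
        # Stein's algorithm: replace division and multiplication by halving and subtraction.
        if u == 0:
            return v
        if v == 0:
            return u
        g = 1
        while u % 2 == 0 and v % 2 == 0:   # factor out common powers of 2
            u, v, g = u // 2, v // 2, 2 * g
        while u != 0:
            while u % 2 == 0:
                u //= 2
            while v % 2 == 0:
                v //= 2
            t = abs(u - v) // 2            # both are odd here, so the difference is even
            if u >= v:
                u = t
            else:
                v = t
        return g * v

    print(binary_gcd(168, 64))  # 8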
Now you may have a better understanding of the efficiency of Stein's algorithm, which substitutes divisions with faster operations by exploiting the binary representation that real computers use nowadays.
Why It's Interesting
The Euclidean algorithm is a fundamental algorithm for other mathematical theories and various subjects in different areas. Please see The Application of Euclidean Algorithm to learn more about the Euclidean algorithm.
Loy, Jim. Euclid's Algorithm. Retrieved from http://www.jimloy.com/number/euclids.htm.
Wikipedia(Extended Euclidean Algorithm). (n.d.). Extended Euclidean Algorithm. Retrieved from http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm.
Artmann, Benno. (1999) ‘‘ Euclid-the creation of mathematics.’’ New York: Springer-Verlag.
Weisstein, Eric W. Euclidean Algorithm. From MathWorld--A Wolfram Web Resource. Retrieved from http://mathworld.wolfram.com/EuclideanAlgorithm.html.
Bogomolny, Alexander. Euclid's Algorithm. Retrieved from http://www.cut-the-knot.org/blue/Euclid.shtml.
Health, T.L. (1926) Euclid The Thirteen Books of the Elements. Volume 2, Second Edition. London: Cambridge University Press.
Klappenecker, Andreas. Euclid's Algorithm. Retrieved from http://faculty.cs.tamu.edu/klappi/alg/euclid.pdf.
Ranjan, Desh. Euclid’s Algorithm for the Greatest Common Divisor. Retrieved from http://www.cs.nmsu.edu/historical-projects/Projects/EuclidGCD.pdf.
The Euclidean Algorithm. Retrieved from http://www.math.rutgers.edu/~greenfie/gs2004/euclid.html.
Caldwell, Chris K. Euclidean algorithm. Retrieved from http://primes.utm.edu/glossary/xpage/EuclideanAlgorithm.html.
Gallian, Joseph A. (2010) Contemporary Abstract Algebra Seventh Edition. Belmont: Brooks/Cole, Cengage Learning.
Milson, Robert. Euclid's Algorithm. Retrieved from http://planetmath.org/encyclopedia/EuclidsAlgorithm.html.
Black, Paul E. Binary GCD Algorithm. Retrieved from http://ce.sharif.edu/~ghodsi/ds-alg-dic/HTML/binaryGCD.html.
Wikipedia (Binary GCD Algorithm). (n.d.). Binary GCD Algorithm. Retrieved from http://en.wikipedia.org/wiki/Binary_GCD_algorithm.
Caldwell, Chris K. Lame's Theorem. Retrieved from http://primes.utm.edu/glossary/xpage/LamesTheorem.html.
University of Minnesota. Induction and Recursion. Retrieved from http://www-users.cselabs.umn.edu/classes/Fall-2009/csci2011/lecture35.pdf.
Future Directions for this Page
- More applets or animations of Euclidean algorithm.
- More pictures if possible.
- Worst case of Euclidean algorithm.
The first part is a look at the history of the algorithm. Its definition follows, and the next part focuses on the main properties of algorithms, which should give the reader better insight into the topic. This is followed by the different types of algorithms, with a little said about their strengths and weaknesses. The last part is about the presentation and writing of algorithms.
1. History and Origin of an Algorithm
The word „algorithm“ is derived from the name of a major Persian mathematician, Abd Allāh Muhammad ibn Mūsā al-Khwārizmī, who lived in the first half of the 9th century. This mathematician essentially created the system of Arabic numerals and basic algebra (specifically, methods for solving linear and quadratic equations). His name was transferred into Latin as „algorismus“ and over time became „algorithm“, which originally meant „the implementation of arithmetic using Arabic numerals“.
2. Definition of an Algorithm
An algorithm is often defined as precise instructions or steps which are used to solve a given type of task. This definition seems to be true; however, on closer examination one finds that it is not very accurate. The accurate definition of an algorithm is this – an algorithm is a procedure which can be performed by a Turing machine. The Turing machine is a theoretical computer model described by the mathematician Alan Turing.
It consists of a processor unit formed by a finite automaton, a program in the form of a transition-rule function, and a potentially infinite tape for writing intermediate results and input data. An algorithm is a schematic procedure for solving a certain kind of problem, implemented using a finite number of well-defined steps. Although this term is now used mainly in computer science and the natural sciences in general, its scope is much broader (kitchen recipes, instructions, …).
3. Properties of Algorithms
In practice, an algorithm is given by a prescription which has the following properties:
- It is clever, so that it makes work easier.
- It applies to repetitive activities.
Creating an algorithm is certainly very hard mental work. There is no point wasting it on things unique to one person, things that will not be repeated and that nobody needs. An algorithm usually works with some inputs, variables and activities. Inputs have a defined set of values which they may take. An algorithm has at least one output, which is required in relation to specific inputs and thus forms the answer to the problem that the algorithm is supposed to solve. Algorithms have to be finite, definite, effective and general.
Each algorithm must terminate after a finite number of steps. This number can be arbitrarily large (depending on the extent and values of the input data) but for every input it must be finite. Procedures that do not meet this condition could be called computing methods. The special example of an infinite computing method is a reactive process which continuously interacts with the environment.
Each step of an algorithm must be clearly and precisely defined; in each situation it must be quite clear what to do and how to do it – no step can be interpreted in multiple ways. Because plain language generally does not provide absolute accuracy and clarity of expression, programming languages were designed. In such a language, each command has a clearly defined meaning. The expression of a computational method in a programming language is called a program.
Generally, we require an algorithm to be effective in the sense that each of its operations should be simple enough to be realizable. This means that the instructions can, at least in principle, be performed in a finite amount of time using only pencil and paper.
An algorithm does not solve one specific problem (for example how to calculate 8 + 9) but the general class of similar problems (for example how to calculate the sum of two integers).
4. Partitioning of Algorithms
Recursive and iterative algorithms
An iterative algorithm is one based on the repetition of certain of its parts (blocks). A recursive algorithm repeats code through calls to itself (usually on smaller sub-problems). Every recursive algorithm can be converted into an iterative form. The advantage of recursive algorithms is their easily readable and compact notation; the disadvantage is the consumption of additional system resources to maintain each recursive call.
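As an illustrative sketch (not taken from the paper), the two forms can be contrasted on the factorial function in Python; the function names are chosen for the example.

    # Factorial computed recursively and iteratively.
    def factorial_recursive(n):
        return 1 if n <= 1 else n * factorial_recursive(n - 1)

    def factorial_iterative(n):
        result = 1
        for i in range(2, n + 1):
            result *= i
        return result

    print(factorial_recursive(5), factorial_iterative(5))  # 120 120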
Deterministic and nondeterministic algorithms
A deterministic algorithm is one that allows just one way to proceed at every step. A nondeterministic algorithm allows more than one. Examples are deterministic and nondeterministic automata.
Serial, parallel and distributed algorithms
A serial algorithm performs all steps in series (one after the other). A parallel algorithm performs several of its steps simultaneously (multi-threaded), and a distributed algorithm is designed to run simultaneously on multiple machines.
Asymptotic complexity of an algorithm
The asymptotic complexity of an algorithm is characterized by the number of operations it performs as a function of the input size. For example, if the algorithm walks through an array, then the complexity is linear (a constant number of operations is performed for each element).
Class P contains problems decidable in polynomial time. Class NP contains problems for which it is possible to verify a solution in polynomial time.
5. Types of Algorithms
Simple recursive algorithms
This kind of algorithm calls itself with smaller input values in each step. Base cases are solved directly and immediately; the remaining cases are reduced towards them and the partial results are combined as the recursion returns. Generally, recursive computer programs require more memory and calculations compared to other algorithms, but they are simpler and in many cases natural problem solvers.
Backtracking algorithms try to find a solution by making a choice at each step and testing it. If a solution is found, the algorithm stops. Otherwise, it backtracks, undoes the last choice and tests another possibility, repeating this until a solution is found.
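A hypothetical example in Python (added for illustration, with invented names): choosing numbers from a list so that they sum to a target, undoing choices that lead to a dead end.

    def subset_sum(numbers, target, chosen=()):
        # Backtracking search: either include the first number or skip it.
        if target == 0:
            return list(chosen)                 # solution found
        if not numbers or target < 0:
            return None                         # dead end -> backtrack
        first, rest = numbers[0], numbers[1:]
        return (subset_sum(rest, target - first, chosen + (first,))
                or subset_sum(rest, target, chosen))

    print(subset_sum([3, 9, 8, 4, 5, 7], 15))   # [3, 8, 4]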
Divide and conquer
This algorithm divides the problem into smaller sub-problems of the same type, to which it is applied recursively (down to trivial sub-problems that can be solved directly). The partial solutions are then appropriately combined. Traditionally, an algorithm is only called divide and conquer if it contains two or more recursive calls. This approach is used, for example, to solve the Towers of Hanoi puzzle. In the case that more than one computer (or processor core) is available, the task can be divided between them – this goal is addressed by parallel algorithms.
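The Towers of Hanoi example mentioned above can be sketched in a few lines of Python (an illustration, not from the paper): moving n discs is reduced to two sub-problems with n - 1 discs.

    def hanoi(n, source, target, spare):
        # Move n discs from source to target using spare as the auxiliary peg.
        if n == 0:
            return
        hanoi(n - 1, source, spare, target)     # first sub-problem
        print(f"move disc {n} from {source} to {target}")
        hanoi(n - 1, spare, target, source)     # second sub-problem

    hanoi(3, "A", "C", "B")  # prints the 7 moves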
Dynamic programming
This kind of algorithm remembers past results and uses them to find new results. Dynamic programming is usually used when there are multiple possible solutions and we need to find the best one. It works on the principle that the algorithm gradually solves problems from the simplest to the more complex, using the results of already solved simpler sub-problems. This approach is used, for example, to compute Fibonacci numbers.
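A minimal Python sketch of the Fibonacci example (added for illustration) shows the principle of reusing already solved sub-problems.

    def fibonacci(n):
        # Build the answer bottom-up from the two previously solved sub-problems.
        table = [0, 1]
        for i in range(2, n + 1):
            table.append(table[i - 1] + table[i - 2])
        return table[n]

    print(fibonacci(10))  # 55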
Greedy algorithms
Greedy algorithms work well for optimization problems. The term „optimization problem” means a problem where finding just any solution is not good enough and the best solution is needed. A greedy algorithm works in phases. In every phase the locally best choice is taken, in the hope that it will lead to the best overall solution. Once a choice is made, it is not possible to backtrack later.
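An illustrative Python sketch (not from the paper) is making change greedily: in every phase the largest coin that still fits is taken, and the choice is never undone. The coin denominations are an assumption of the example.

    def greedy_change(amount, coins=(50, 20, 10, 5, 2, 1)):
        used = []
        for coin in coins:                 # consider coins from largest to smallest
            while amount >= coin:
                amount -= coin
                used.append(coin)
        return used

    print(greedy_change(87))  # [50, 20, 10, 5, 2]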
Branch and bound algorithms
As the algorithm progresses, a tree of sub-problems is formed whose root is the original problem. The algorithm follows each branch until the branch is either solved or merged with another branch. It is used, for example, to solve problems where the minimal total distance travelled between an unspecified number of points is needed.
Brute force algorithms
It starts from a random point and tries every possibility until the solution is found. It is easier to implement but very slow and cannot be applied to problems with a large input size.
Randomized algorithms
This class contains any algorithm that makes some of its decisions randomly or pseudo-randomly.
Genetic algorithms work by mimicking biological evolutionary processes, gradually growing better solutions through mutation and crossbreeding. In genetic programming, this procedure is applied directly to algorithms, which are interpreted as possible solutions.
Approximation algorithms
The goal is not to find the exact solution, but a suitable approximation. This is used in situations where the available resources (e.g. time) are not sufficient to use exact algorithms (or if no suitable exact algorithm is known at all).
6. Presentation and Writing of Algorithms
There are several ways to present or write down an algorithm: natural-language text, a pseudo-language, a flowchart, a structural diagram, a kopenogram, and finally a program. Of course, it can be described verbally in a natural language. A graphical representation by a flowchart using specific symbols and abbreviations is the most typical option.
This paper was supported by the Internal Grant Agency of TBU in Zlín, project No. IGA/FAI/2012/015.
17 | Evolutionary history of life
The evolutionary history of life on Earth traces the processes by which living and fossil organisms have evolved since life on the planet first originated until the present day. Earth formed about 4.5 Ga (billion years ago) and life appeared on its surface within one billion years. The similarities between all present-day organisms indicate the presence of a common ancestor from which all known species have diverged through the process of evolution.1
Microbial mats of coexisting bacteria and archaea were the dominant form of life in the early Archean and many of the major steps in early evolution are thought to have taken place within them.2 The evolution of oxygenic photosynthesis, around 3.5 Ga, eventually led to the oxygenation of the atmosphere, beginning around 2.4 Ga.3 The earliest evidence of eukaryotes (complex cells with organelles) dates from 1.85 Ga,45 and while they may have been present earlier, their diversification accelerated when they started using oxygen in their metabolism. Later, around 1.7 Ga, multicellular organisms began to appear, with differentiated cells performing specialised functions.6 Bilateria, animals with a front and a back, appeared by 555 million years ago.7
The earliest land plants date back to around 450 Ma (million years ago),8 although evidence suggests that algal scum formed on the land as early as 1.2 Ga. Land plants were so successful that they are thought to have contributed to the late Devonian extinction event.9 Invertebrate animals appear during the Ediacaran period,10 while vertebrates originated during the Cambrian explosion.11 During the Permian period, synapsids, including the ancestors of mammals, dominated the land,12 but most of this group became extinct in the Permian–Triassic extinction event.13 During the recovery from this catastrophe, archosaurs became the most abundant land vertebrates, displacing therapsids in the mid-Triassic;14 one archosaur group, the dinosaurs, dominated the Jurassic and Cretaceous periods.15 After the Cretaceous–Paleogene extinction event killed off the dinosaurs,16 mammals increased rapidly in size and diversity.17 Such mass extinctions may have accelerated evolution by providing opportunities for new groups of organisms to diversify.18
Fossil evidence indicates that flowering plants appeared and rapidly diversified in the Early Cretaceous ( ) probably helped by coevolution with pollinating insects. Flowering plants and marine phytoplankton are still the dominant producers of organic matter. Social insects appeared around the same time as flowering plants. Although they occupy only small parts of the insect "family tree", they now form over half the total mass of insects. Humans evolved from a lineage of upright-walking apes whose earliest fossils date from over 6 Ma. Although early members of this lineage had chimpanzee-sized brains, there are signs of a steady increase in brain size after about 3 Ma.
The oldest meteorite fragments found on Earth are about 4.54 billion years old; this, coupled primarily with the dating of ancient lead deposits, has put the estimated age of Earth at around that time.20 The Moon has the same composition as Earth's crust but does not contain an iron-rich core like the Earth's. Many scientists think that about 40 million years later a body the size of Mars struck the Earth, throwing into orbit crust material that formed the Moon. Another hypothesis is that the Earth and Moon started to coalesce at the same time but the Earth, having much stronger gravity than the early Moon, attracted almost all the iron particles in the area.21
Until recently, the oldest rocks found on Earth were about 3.8 Ga,20 leading scientists to believe for decades that Earth's surface had been molten until then. Accordingly, they named this part of Earth's history the Hadean eon, whose name means "hellish".22 However, analysis of zircons formed 4.4 billion years ago indicates that Earth's crust solidified about 100 Ma after the planet's formation and that the planet quickly acquired oceans and an atmosphere, which may have been capable of supporting life.23
Evidence from the Moon indicates that from 4 billion to 3.8 billion years ago it suffered a Late Heavy Bombardment by debris that was left over from the formation of the Solar System, and the Earth should have experienced an even heavier bombardment due to its stronger gravity.2224 While there is no direct evidence of conditions on Earth 4 billion to 3.8 billion years ago, there is no reason to think that the Earth was not also affected by this late heavy bombardment.25 This event may well have stripped away any previous atmosphere and oceans; in this case gases and water from comet impacts may have contributed to their replacement, although volcanic outgassing on Earth would have supplied at least half.26 However, if subsurface microbial life had evolved by this point, it would have survived the bombardment.27
The earliest identified organisms were minute and relatively featureless, and their fossils look like small rods, which are very difficult to tell apart from structures that arise through abiotic physical processes. The oldest undisputed evidence of life on Earth, interpreted as fossilized bacteria, dates to 3 Ga.28 Other finds in rocks dated to about 3.5 Ga have been interpreted as bacteria,29 with geochemical evidence also seeming to show the presence of life 3.8 Ga.30 However these analyses were closely scrutinized, and non-biological processes were found which could produce all of the "signatures of life" that had been reported.3132 While this does not prove that the structures found had a non-biological origin, they cannot be taken as clear evidence for the presence of life. Geochemical signatures from rocks deposited 3.4 Ga have been interpreted as evidence for life,2833 although these statements have not been thoroughly examined by critics.
Biologists reason that all living organisms on Earth must share a single last universal ancestor, because it would be virtually impossible that two or more separate lineages could have independently developed the many complex biochemical mechanisms common to all living organisms.3536 As previously mentioned the earliest organisms for which fossil evidence is available are bacteria, cells far too complex to have arisen directly from non-living materials.37 The lack of fossil or geochemical evidence for earlier organisms has left plenty of scope for hypotheses, which fall into two main groups: 1) that life arose spontaneously on Earth or 2) that it was "seeded" from elsewhere in the Universe.
The idea that life on Earth was "seeded" from elsewhere in the Universe dates back at least to the Greek philosopher Anaximander in the sixth century BCE.38 In the twentieth century it was proposed by the physical chemist Svante Arrhenius,39 by the astronomers Fred Hoyle and Chandra Wickramasinghe,40 and by molecular biologist Francis Crick and chemist Leslie Orgel.41 There are three main versions of the "seeded from elsewhere" hypothesis: from elsewhere in our Solar System via fragments knocked into space by a large meteor impact, in which case the most credible sources are Mars42 and Venus;43 by alien visitors, possibly as a result of accidental contamination by micro-organisms that they brought with them;41 and from outside the Solar System but by natural means.3942 Experiments suggest that some micro-organisms can survive the shock of being catapulted into space and some can survive exposure to radiation for several days, but there is no proof that they can survive in space for much longer periods.42 Scientists are divided over the likelihood of life arising independently on Mars,44 or on other planets in our galaxy.42
Life on Earth is based on carbon and water. Carbon provides stable frameworks for complex chemicals and can be easily extracted from the environment, especially from carbon dioxide. The only other element with similar chemical properties, silicon, forms much less stable structures and, because most of its compounds are solids, would be more difficult for organisms to extract. Water is an excellent solvent and has two other useful properties: the fact that ice floats enables aquatic organisms to survive beneath it in winter; and its molecules have electrically negative and positive ends, which enables it to form a wider range of compounds than other solvents can. Other good solvents, such as ammonia, are liquid only at such low temperatures that chemical reactions may be too slow to sustain life, and lack water's other advantages.45 Organisms based on alternative biochemistry may however be possible on other planets.46
Research on how life might have emerged unaided from non-living chemicals focuses on three possible starting points: self-replication, an organism's ability to produce offspring that are very similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave, but exclude unwanted substances.47 Research on abiogenesis still has a long way to go, since theoretical and empirical approaches are only beginning to make contact with each other.4849
Even the simplest members of the three modern domains of life use DNA to record their "recipes" and a complex array of RNA and protein molecules to "read" these instructions and use them for growth, maintenance and self-replication. This system is far too complex to have emerged directly from non-living materials.37 The discovery that some RNA molecules can catalyze both their own replication and the construction of proteins led to the hypothesis of earlier life-forms based entirely on RNA.50 These ribozymes could have formed an RNA world in which there were individuals but no species, as mutations and horizontal gene transfers would have meant that the offspring in each generation were quite likely to have different genomes from those that their parents started with.51 RNA would later have been replaced by DNA, which is more stable and therefore can build longer genomes, expanding the range of capabilities a single organism can have.515253 Ribozymes remain as the main components of ribosomes, modern cells' "protein factories".54
Although short self-replicating RNA molecules have been artificially produced in laboratories,55 doubts have been raised about whether natural non-biological synthesis of RNA is possible.56 The earliest "ribozymes" may have been formed of simpler nucleic acids such as PNA, TNA or GNA, which would have been replaced later by RNA.5758
In 2003 it was proposed that porous metal sulfide precipitates would assist RNA synthesis at about 100 °C (212 °F) and ocean-bottom pressures near hydrothermal vents. In this hypothesis lipid membranes would be the last major cell components to appear and until then the proto-cells would be confined to the pores.59
A series of experiments starting in 1997 showed that early stages in the formation of proteins from inorganic materials including carbon monoxide and hydrogen sulfide could be achieved by using iron sulfide and nickel sulfide as catalysts. Most of the steps required temperatures of about 100 °C (212 °F) and moderate pressures, although one stage required 250 °C (482 °F) and a pressure equivalent to that found under 7 kilometres (4.3 mi) of rock. Hence it was suggested that self-sustaining synthesis of proteins could have occurred near hydrothermal vents.60
It has been suggested that double-walled "bubbles" of lipids like those that form the external membranes of cells may have been an essential first step.61 Experiments that simulated the conditions of the early Earth have reported the formation of lipids, and these can spontaneously form liposomes, double-walled "bubbles", and then reproduce themselves. Although they are not intrinsically information-carriers as nucleic acids are, they would be subject to natural selection for longevity and reproduction. Nucleic acids such as RNA might then have formed more easily within the liposomes than they would have outside.62
RNA is complex and there are doubts about whether it can be produced non-biologically in the wild.56 Some clays, notably montmorillonite, have properties that make them plausible accelerators for the emergence of an RNA world: they grow by self-replication of their crystalline pattern; they are subject to an analog of natural selection, as the clay "species" that grows fastest in a particular environment rapidly becomes dominant; and they can catalyze the formation of RNA molecules.63 Although this idea has not become the scientific consensus, it still has active supporters.64
Research in 2003 reported that montmorillonite could also accelerate the conversion of fatty acids into "bubbles", and that the "bubbles" could encapsulate RNA attached to the clay. These "bubbles" can then grow by absorbing additional lipids and then divide. The formation of the earliest cells may have been aided by similar processes.65
Microbial mats are multi-layered, multi-species colonies of bacteria and other organisms that are generally only a few millimeters thick, but still contain a wide range of chemical environments, each of which favors a different set of micro-organisms.67 To some extent each mat forms its own food chain, as the by-products of each group of micro-organisms generally serve as "food" for adjacent groups.68
Stromatolites are stubby pillars built as microbes in mats slowly migrate upwards to avoid being smothered by sediment deposited on them by water.67 There has been vigorous debate about the validity of alleged fossils from before 3 Ga,69 with critics arguing that so-called stromatolites could have been formed by non-biological processes.31 In 2006 another find of stromatolites was reported from the same part of Australia as previous ones, in rocks dated to 3.5 Ga.70
In modern underwater mats the top layer often consists of photosynthesizing cyanobacteria which create an oxygen-rich environment, while the bottom layer is oxygen-free and often dominated by hydrogen sulfide emitted by the organisms living there.68 It is estimated that the appearance of oxygenic photosynthesis by bacteria in mats increased biological productivity by a factor of between 100 and 1,000. The reducing agent used by oxygenic photosynthesis is water, which is much more plentiful than the geologically produced reducing agents required by the earlier non-oxygenic photosynthesis.71 From this point onwards life itself produced significantly more of the resources it needed than did geochemical processes.72 Oxygen is toxic to organisms that are not adapted to it, but greatly increases the metabolic efficiency of oxygen-adapted organisms.7374 Oxygen became a significant component of Earth's atmosphere about 2.4 Ga.75 Although eukaryotes may have been present much earlier,7677 the oxygenation of the atmosphere was a prerequisite for the evolution of the most complex eukaryotic cells, from which all multicellular organisms are built.78 The boundary between oxygen-rich and oxygen-free layers in microbial mats would have moved upwards when photosynthesis shut down overnight, and then downwards as it resumed on the next day. This would have created selection pressure for organisms in this intermediate zone to acquire the ability to tolerate and then to use oxygen, possibly via endosymbiosis, where one organism lives inside another and both of them benefit from their association.2
Cyanobacteria have the most complete biochemical "toolkits" of all the mat-forming organisms. Hence they are the most self-sufficient of the mat organisms and were well-adapted to strike out on their own both as floating mats and as the first of the phytoplankton, providing the basis of most marine food chains.2
Eukaryotes may have been present long before the oxygenation of the atmosphere,76 but most modern eukaryotes require oxygen, which their mitochondria use to fuel the production of ATP, the internal energy supply of all known cells.78 In the 1970s it was proposed and, after much debate, widely accepted that eukaryotes emerged as a result of a sequence of endosymbioses between "procaryotes". For example: a predatory micro-organism invaded a large procaryote, probably an archaean, but the attack was neutralized, and the attacker took up residence and evolved into the first of the mitochondria; one of these chimeras later tried to swallow a photosynthesizing cyanobacterium, but the victim survived inside the attacker and the new combination became the ancestor of plants; and so on. After each endosymbiosis began, the partners would have eliminated unproductive duplication of genetic functions by re-arranging their genomes, a process which sometimes involved transfer of genes between them.818283 Another hypothesis proposes that mitochondria were originally sulfur- or hydrogen-metabolising endosymbionts, and became oxygen-consumers later.84 On the other hand mitochondria might have been part of eukaryotes' original equipment.85
There is a debate about when eukaryotes first appeared: the presence of steranes in Australian shales may indicate that eukaryotes were present 2.7 Ga;77 however an analysis in 2008 concluded that these chemicals infiltrated the rocks less than 2.2 Ga and prove nothing about the origins of eukaryotes.86 Fossils of the alga Grypania have been reported in 1.85 Ga rocks (originally dated to 2.1 Ga but later revised5), and indicates that eukaryotes with organelles had already evolved.87 A diverse collection of fossil algae were found in rocks dated between 1.5 and 1.4 Ga.88 The earliest known fossils of fungi date from 1.43 Ga.89
Plastids are thought to have originated from endosymbiotic cyanobacteria. The symbiosis evolved around 1500 million years ago and enabled eukaryotes to carry out oxygenic photosynthesis.90 Three evolutionary lineages have since emerged in which the plastids are named differently: chloroplasts in green algae and plants, rhodoplasts in red algae and cyanelles in the glaucophytes.
The defining characteristics of sexual reproduction are meiosis and fertilization. There is much genetic recombination in this kind of reproduction, in which offspring receive 50% of their genes from each parent,91 in contrast with asexual reproduction, in which there is no recombination. Bacteria also exchange DNA by bacterial conjugation, the benefits of which include resistance to antibiotics and other toxins, and the ability to utilize new metabolites.92 However conjugation is not a means of reproduction, and is not limited to members of the same species – there are cases where bacteria transfer DNA to plants and animals.93
The disadvantages of sexual reproduction are well-known: the genetic reshuffle of recombination may break up favorable combinations of genes; and since males do not directly increase the number of offspring in the next generation, an asexual population can out-breed and displace in as little as 50 generations a sexual population that is equal in every other respect.91 Nevertheless the great majority of animals, plants, fungi and protists reproduce sexually. There is strong evidence that sexual reproduction arose early in the history of eukaryotes and that the genes controlling it have changed very little since then.94 How sexual reproduction evolved and survived is an unsolved puzzle.95
The Red Queen Hypothesis suggests that sexual reproduction provides protection against parasites, because it is easier for parasites to evolve means of overcoming the defenses of genetically identical clones than those of sexual species that present moving targets, and there is some experimental evidence for this. However there is still doubt about whether it would explain the survival of sexual species if multiple similar clone species were present, as one of the clones may survive the attacks of parasites for long enough to out-breed the sexual species.91
The Mutation Deterministic Hypothesis assumes that each organism has more than one harmful mutation and the combined effects of these mutations are more harmful than the sum of the harm done by each individual mutation. If so, sexual recombination of genes will reduce the harm that bad mutations do to offspring and at the same time eliminate some bad mutations from the gene pool by isolating them in individuals that perish quickly because they have an above-average number of bad mutations. However the evidence suggests that the MDH's assumptions are shaky, because many species have on average less than one harmful mutation per individual and no species that has been investigated shows evidence of synergy between harmful mutations.91
The random nature of recombination causes the relative abundance of alternative traits to vary from one generation to another. This genetic drift is insufficient on its own to make sexual reproduction advantageous, but a combination of genetic drift and natural selection may be sufficient. When chance produces combinations of good traits, natural selection gives a large advantage to lineages in which these traits become genetically linked. On the other hand, the benefits of good traits are neutralized if they appear along with bad traits. Sexual recombination gives good traits the opportunities to become linked with other good traits, and mathematical models suggest this may be more than enough to offset the disadvantages of sexual reproduction.95 Other combinations of hypotheses that are inadequate on their own are also being examined.91
The simplest definitions of "multicellular", for example "having multiple cells", could include colonial cyanobacteria like Nostoc. Even a professional biologist's definition such as "having the same genome but different types of cell" would still include some genera of the green alga Volvox, which have cells that specialize in reproduction.97 Multicellularity evolved independently in organisms as diverse as sponges and other animals, fungi, plants, brown algae, cyanobacteria, slime moulds and myxobacteria.598 For the sake of brevity this article focuses on the organisms that show the greatest specialization of cells and variety of cell types, although this approach to the evolution of complexity could be regarded as "rather anthropocentric".99
The initial advantages of multicellularity may have included: increased resistance to predators, many of which attacked by engulfing; the ability to resist currents by attaching to a firm surface; the ability to reach upwards to filter-feed or to obtain sunlight for photosynthesis;101 the ability to create an internal environment that gives protection against the external one;99 and even the opportunity for a group of cells to behave "intelligently" by sharing information.100 These features would also have provided opportunities for other organisms to diversify, by creating more varied environments than flat microbial mats could.101
Multicellularity with differentiated cells is beneficial to the organism as a whole but disadvantageous from the point of view of individual cells, most of which lose the opportunity to reproduce themselves. In an asexual multicellular organism, rogue cells which retain the ability to reproduce may take over and reduce the organism to a mass of undifferentiated cells. Sexual reproduction eliminates such rogue cells from the next generation and therefore appears to be a prerequisite for complex multicellularity.101
The available evidence indicates that eukaryotes evolved much earlier but remained inconspicuous until a rapid diversification around 1 Ga. The only respect in which eukaryotes clearly surpass bacteria and archaea is their capacity for variety of forms, and sexual reproduction enabled eukaryotes to exploit that advantage by producing organisms with multiple cells that differed in form and function.101
The Francevillian Group Fossil, dated to 2.1 Ga, is the earliest known fossil organism that is clearly multicellular.19 This may have had differentiated cells.102 Another early multicellular fossil, Qingshania,note 1 dated to 1.7 Ga, appears to consist of virtually identical cells. The red alga called Bangiomorpha, dated at 1.2 Ga, is the earliest known organism that certainly has differentiated, specialized cells, and is also the oldest known sexually reproducing organism.101 The 1.43 billion-year-old fossils interpreted as fungi appear to have been multicellular with differentiated cells.89 The "string of beads" organism Horodyskia, found in rocks dated from 1.5 Ga to 900 Ma, may have been an early metazoan;5 however it has also been interpreted as a colonial foraminiferan.96
Animals are multicellular eukaryotes,note 2 and are distinguished from plants, algae, and fungi by lacking cell walls.104 All animals are motile,105 if only at certain life stages. All animals except sponges have bodies differentiated into separate tissues, including muscles, which move parts of the animal by contracting, and nerve tissue, which transmits and processes signals.106
The earliest widely accepted animal fossils are rather modern-looking cnidarians (the group that includes jellyfish, sea anemones and hydras), possibly from around , although fossils from the Doushantuo Formation can only be dated approximately. Their presence implies that the cnidarian and bilaterian lineages had already diverged.107
The Ediacara biota, which flourished for the last 40 Ma before the start of the Cambrian,108 were the first animals more than a very few centimeters long. Many were flat and had a "quilted" appearance, and seemed so strange that there was a proposal to classify them as a separate kingdom, Vendozoa.109 Others, however, have been interpreted as early molluscs (Kimberella110111), echinoderms (Arkarua112), and arthropods (Spriggina,113 Parvancorina114). There is still debate about the classification of these specimens, mainly because the diagnostic features which allow taxonomists to classify more recent organisms, such as similarities to living organisms, are generally absent in the Ediacarans. However there seems little doubt that Kimberella was at least a triploblastic bilaterian animal, in other words significantly more complex than cnidarians.115
The small shelly fauna are a very mixed collection of fossils found between the Late Ediacaran and Mid Cambrian periods. The earliest, Cloudina, shows signs of successful defense against predation and may indicate the start of an evolutionary arms race. Some tiny Early Cambrian shells almost certainly belonged to molluscs, while the owners of some "armor plates", Halkieria and Microdictyon, were eventually identified when more complete specimens were found in Cambrian lagerstätten that preserved soft-bodied animals.116
In the 1970s there was already a debate about whether the emergence of the modern phyla was "explosive" or gradual but hidden by the shortage of Pre-Cambrian animal fossils.116 A re-analysis of fossils from the Burgess Shale lagerstätte increased interest in the issue when it revealed animals, such as Opabinia, which did not fit into any known phylum. At the time these were interpreted as evidence that the modern phyla had evolved very rapidly in the "Cambrian explosion" and that the Burgess Shale's "weird wonders" showed that the Early Cambrian was a uniquely experimental period of animal evolution.118 Later discoveries of similar animals and the development of new theoretical approaches led to the conclusion that many of the "weird wonders" were evolutionary "aunts" or "cousins" of modern groups119 – for example that Opabinia was a member of the lobopods, a group which includes the ancestors of the arthropods, and that it may have been closely related to the modern tardigrades.120 Nevertheless there is still much debate about whether the Cambrian explosion was really explosive and, if so, how and why it happened and why it appears unique in the history of animals.121
Most of the animals at the heart of the Cambrian explosion debate are protostomes, one of the two main groups of complex animals. One deuterostome group, the echinoderms, many of which have hard calcite "shells", are fairly common from the Early Cambrian small shelly fauna onwards.116 Other deuterostome groups are soft-bodied, and most of the significant Cambrian deuterostome fossils come from the Chengjiang fauna, a lagerstätte in China.123 The Chengjiang fossils Haikouichthys and Myllokunmingia appear to be true vertebrates,124 and Haikouichthys had distinct vertebrae, which may have been slightly mineralized.125 Vertebrates with jaws, such as the Acanthodians, first appeared in the Late Ordovician.126
Adaptation to life on land is a major challenge: all land organisms need to avoid drying-out and all those above microscopic size have to resist gravity; respiration and gas exchange systems have to change; reproductive systems cannot depend on water to carry eggs and sperm towards each other.127128 Although the earliest good evidence of land plants and animals dates back to the Ordovician Period ( ), and a number of microorganism lineages made it onto land much earlier,129 modern land ecosystems only appeared in the late Devonian, about .130
Oxygen is a potent oxidant whose accumulation in the terrestrial atmosphere resulted from the development of photosynthesis over 3 Ga ago, in blue-green algae (cyanobacteria), which were the most primitive oxygenic photosynthetic organisms. Brown algae (seaweeds) accumulate inorganic mineral antioxidants such as rubidium, vanadium, zinc, iron, copper, molybdenum, selenium and iodine, which is concentrated to more than 30,000 times its concentration in seawater. Protective endogenous antioxidant enzymes and exogenous dietary antioxidants helped to prevent oxidative damage. Most marine mineral antioxidants act in the cells as essential trace elements in redox and antioxidant metallo-enzymes.
When plants and animals began to move from the sea to rivers and land about 500 Ma ago, the environmental deficiency of these marine mineral antioxidants and of iodine was a challenge to the evolution of terrestrial life.131132 Terrestrial plants slowly optimized the production of "new" endogenous antioxidants such as ascorbic acid, polyphenols, flavonoids and tocopherols. A few of these appeared more recently, in the last 200–50 Ma, in the fruits and flowers of angiosperm plants.
In fact, angiosperms (the dominant type of plant today) and most of their antioxidant pigments evolved during the late Jurassic Period. Plants employ antioxidants to defend their structures against reactive oxygen species produced during photosynthesis. Animals are exposed to the same oxidants, and they have evolved endogenous enzymatic antioxidant systems.133 Iodine is the most primitive and abundant electron-rich essential element in the diet of marine and terrestrial organisms, and as iodide acts as an electron-donor and has this ancestral antioxidant function in all iodide-concentrating cells from primitive marine algae to more recent terrestrial vertebrates.134
Before the colonization of land, soil, a combination of mineral particles and decomposed organic matter, did not exist. Land surfaces would have been either bare rock or unstable sand produced by weathering. Water and any nutrients in it would have drained away very quickly.130
Films of cyanobacteria, which are not plants but use the same photosynthesis mechanisms, have been found in modern deserts, and only in areas that are unsuitable for vascular plants. This suggests that microbial mats may have been the first organisms to colonize dry land, possibly in the Precambrian. Mat-forming cyanobacteria could have gradually evolved resistance to desiccation as they spread from the seas to tidal zones and then to land.130 Lichens, which are symbiotic combinations of a fungus (almost always an ascomycete) and one or more photosynthesizers (green algae or cyanobacteria),135 are also important colonizers of lifeless environments,130 and their ability to break down rocks contributes to soil formation in situations where plants cannot survive.135 The earliest known ascomycete fossils date from the Silurian.130
Soil formation would have been very slow until the appearance of burrowing animals, which mix the mineral and organic components of soil and whose feces are a major source of the organic components.130 Burrows have been found in Ordovician sediments, and are attributed to annelids ("worms") or arthropods.130136
In aquatic algae, almost all cells are capable of photosynthesis and are nearly independent. Life on land required plants to become internally more complex and specialized: photosynthesis was most efficient at the top; roots were required in order to extract water from the ground; the parts in between became supports and transport systems for water and nutrients.127137
Spores of land plants, possibly rather like liverworts, have been found in Mid Ordovician rocks dated to about . In Mid Silurian rocks there are fossils of actual plants including clubmosses such as Baragwanathia; most were under 10 centimetres (3.9 in) high, and some appear closely related to vascular plants, the group that includes trees.137
By the late Devonian , trees such as Archaeopteris were so abundant that they changed river systems from mostly braided to mostly meandering, because their roots bound the soil firmly.138 In fact they caused a "Late Devonian wood crisis",139 because:
- They removed more carbon dioxide from the atmosphere, reducing the greenhouse effect and thus causing an ice age in the Carboniferous period.140 In later ecosystems the carbon dioxide "locked up" in wood is returned to the atmosphere by decomposition of dead wood. However, the earliest fossil evidence of fungi that can decompose wood also comes from the Late Devonian.141
- The increasing depth of plants' roots led to more washing of nutrients into rivers and seas by rain. This caused algal blooms whose high consumption of oxygen caused anoxic events in deeper waters, increasing the extinction rate among deep-water animals.140
Animals had to change their feeding and excretory systems, and most land animals developed internal fertilization of their eggs. The difference in refractive index between water and air required changes in their eyes. On the other hand, in some ways movement and breathing became easier, and the better transmission of high-frequency sounds in air encouraged the development of hearing.128
Some trace fossils from the Cambrian-Ordovician boundary about are interpreted as the tracks of large amphibious arthropods on coastal sand dunes, and may have been made by euthycarcinoids,142 which are thought to be evolutionary "aunts" of myriapods.143 Other trace fossils from the Late Ordovician a little over probably represent land invertebrates, and there is clear evidence of numerous arthropods on coasts and alluvial plains shortly before the Silurian-Devonian boundary, about , including signs that some arthropods ate plants.144 Arthropods were well pre-adapted to colonise land, because their existing jointed exoskeletons provided protection against desiccation, support against gravity and a means of locomotion that was not dependent on water.145
The fossil record of other major invertebrate groups on land is poor: none at all for non-parasitic flatworms, nematodes or nemerteans; some parasitic nematodes have been fossilized in amber; annelid worm fossils are known from the Carboniferous, but they may still have been aquatic animals; the earliest fossils of gastropods on land date from the Late Carboniferous, and this group may have had to wait until leaf litter became abundant enough to provide the moist conditions they need.128
The earliest confirmed fossils of flying insects date from the Late Carboniferous, but it is thought that insects developed the ability to fly in the Early Carboniferous or even Late Devonian. This gave them a wider range of ecological niches for feeding and breeding, and a means of escape from predators and from unfavorable changes in the environment.146 About 99% of modern insect species fly or are descendants of flying species.147
Tetrapods, vertebrates with four limbs, evolved from other rhipidistian fish over a relatively short timespan during the Late Devonian ( ).150 The early groups are grouped together as Labyrinthodontia. They retained aquatic, fry-like tadpoles, a system still seen in modern amphibians. From the 1950s to the early 1980s it was thought that tetrapods evolved from fish that had already acquired the ability to crawl on land, possibly in order to go from a pool that was drying out to one that was deeper. However, in 1987, nearly complete fossils of Acanthostega from about showed that this Late Devonian transitional animal had legs and both lungs and gills, but could never have survived on land: its limbs and its wrist and ankle joints were too weak to bear its weight; its ribs were too short to prevent its lungs from being squeezed flat by its weight; its fish-like tail fin would have been damaged by dragging on the ground. The current hypothesis is that Acanthostega, which was about 1 metre (3.3 ft) long, was a wholly aquatic predator that hunted in shallow water. Its skeleton differed from that of most fish, in ways that enabled it to raise its head to breathe air while its body remained submerged, including: its jaws show modifications that would have enabled it to gulp air; the bones at the back of its skull are locked together, providing strong attachment points for muscles that raised its head; the head is not joined to the shoulder girdle and it has a distinct neck.148
The Devonian proliferation of land plants may help to explain why air breathing would have been an advantage: leaves falling into streams and rivers would have encouraged the growth of aquatic vegetation; this would have attracted grazing invertebrates and small fish that preyed on them; they would have been attractive prey but the environment was unsuitable for the big marine predatory fish; air-breathing would have been necessary because these waters would have been short of oxygen, since warm water holds less dissolved oxygen than cooler marine water and since the decomposition of vegetation would have used some of the oxygen.148
Later discoveries revealed earlier transitional forms between Acanthostega and completely fish-like animals.151 Unfortunately there is then a gap (Romer's gap) of about 30 Ma between the fossils of ancestral tetrapods and Mid Carboniferous fossils of vertebrates that look well-adapted for life on land. Some of these look like early relatives of modern amphibians, most of which need to keep their skins moist and to lay their eggs in water, while others are accepted as early relatives of the amniotes, whose waterproof skin enables them to live and breed far from water.149
Amniotes, whose eggs can survive in dry environments, probably evolved in the Late Carboniferous period. The earliest fossils of the two surviving amniote groups, synapsids and sauropsids, date from around .153154 The synapsid pelycosaurs and their descendants the therapsids are the most common land vertebrates in the best-known Permian ( ) fossil beds. However at the time these were all in temperate zones at middle latitudes, and there is evidence that hotter, drier environments nearer the Equator were dominated by sauropsids and amphibians.155
The Permian-Triassic extinction wiped out almost all land vertebrates,156 as well as the great majority of other life.157 During the slow recovery from this catastrophe, estimated to have taken 30 million years,158 a previously obscure sauropsid group became the most abundant and diverse terrestrial vertebrates: a few fossils of archosauriformes ("ruling lizard forms") have been found in Late Permian rocks,159 but, by the Mid Triassic, archosaurs were the dominant land vertebrates. Dinosaurs distinguished themselves from other archosaurs in the Late Triassic, and became the dominant land vertebrates of the Jurassic and Cretaceous periods ( ).160
During the Late Jurassic, birds evolved from small, predatory theropod dinosaurs.161 The first birds inherited teeth and long, bony tails from their dinosaur ancestors,161 but some had developed horny, toothless beaks by the very Late Jurassic162 and short pygostyle tails by the Early Cretaceous.163
While the archosaurs and dinosaurs were becoming more dominant in the Triassic, the mammaliaform successors of the therapsids evolved into small, mainly nocturnal insectivores. This ecological role may have promoted the evolution of mammals, for example nocturnal life may have accelerated the development of endothermy ("warm-bloodedness") and hair or fur.164 By the Early Jurassic there were animals that were very like today's mammals in a number of respects.165 Unfortunately there is a gap in the fossil record throughout the Mid Jurassic.166 However fossil teeth discovered in Madagascar indicate that the split between the lineage leading to monotremes and the one leading to other living mammals had occurred by .167 After dominating land vertebrate niches for about 150 Ma, the dinosaurs perished in the Cretaceous–Paleogene extinction ( ) along with many other groups of organisms.168 Mammals throughout the time of the dinosaurs had been restricted to a narrow range of taxa, sizes and shapes, but increased rapidly in size and diversity after the extinction,169170 with bats taking to the air within 13 Ma,171 and cetaceans to the sea within 15 Ma.172
The 250,000 to 400,000 species of flowering plants outnumber all other ground plants combined, and are the dominant vegetation in most terrestrial ecosystems. There is fossil evidence that flowering plants diversified rapidly in the Early Cretaceous, from ,173174 and that their rise was associated with that of pollinating insects.174 Among modern flowering plants Magnolias are thought to be close to the common ancestor of the group.173 However paleontologists have not succeeded in identifying the earliest stages in the evolution of flowering plants.173174
The social insects are remarkable because the great majority of individuals in each colony are sterile. This appears contrary to basic concepts of evolution such as natural selection and the selfish gene. In fact there are very few eusocial insect species: only 15 out of approximately 2,600 living families of insects contain eusocial species, and it seems that eusociality has evolved independently only 12 times among arthropods, although some eusocial lineages have diversified into several families. Nevertheless social insects have been spectacularly successful; for example although ants and termites account for only about 2% of known insect species, they form over 50% of the total mass of insects. Their ability to control a territory appears to be the foundation of their success.175
The sacrifice of breeding opportunities by most individuals has long been explained as a consequence of these species' unusual haplodiploid method of sex determination, which has the paradoxical consequence that two sterile worker daughters of the same queen share more genes with each other than they would with their offspring if they could breed.176 However Wilson and Hölldobler argue that this explanation is faulty: for example, it is based on kin selection, but there is no evidence of nepotism in colonies that have multiple queens. Instead, they write, eusociality evolves only in species that are under strong pressure from predators and competitors, but in environments where it is possible to build "fortresses"; after colonies have established this security, they gain other advantages through co-operative foraging. In support of this explanation they cite the appearance of eusociality in bathyergid mole rats,175 which are not haplodiploid.177
The earliest fossils of insects have been found in Early Devonian rocks from about , which preserve only a few varieties of flightless insect. The Mazon Creek lagerstätten from the Late Carboniferous, about , include about 200 species, some gigantic by modern standards, and indicate that insects had occupied their main modern ecological niches as herbivores, detritivores and insectivores. Social termites and ants first appear in the Early Cretaceous, and advanced social bees have been found in Late Cretaceous rocks but did not become abundant until the Mid Cenozoic.178
Modern humans evolved from a lineage of upright-walking apes that has been traced back over to Sahelanthropus.179 The first known stone tools were made about , apparently by Australopithecus garhi, and were found near animal bones that bear scratches made by these tools.180 The earliest hominines had chimp-sized brains, but there has been a fourfold increase in the last 3 Ma; a statistical analysis suggests that hominine brain sizes depend almost completely on the date of the fossils, while the species to which they are assigned has only slight influence.181 There is a long-running debate about whether modern humans evolved all over the world simultaneously from existing advanced hominines or are descendants of a single small population in Africa, which then migrated all over the world less than 200,000 years ago and replaced previous hominine species.182 There is also debate about whether anatomically modern humans had an intellectual, cultural and technological "Great Leap Forward" under 100,000 years ago and, if so, whether this was due to neurological changes that are not visible in fossils.183
Life on Earth has suffered occasional mass extinctions at least since . Although they were disasters at the time, mass extinctions have sometimes accelerated the evolution of life on Earth. When dominance of particular ecological niches passes from one group of organisms to another, it is rarely because the new dominant group is "superior" to the old and usually because an extinction event eliminates the old dominant group and makes way for the new one.18184
The fossil record appears to show that the gaps between mass extinctions are becoming longer and the average and background rates of extinction are decreasing. Both of these phenomena could be explained in one or more ways:185
- The oceans may have become more hospitable to life over the last 500 Ma and less vulnerable to mass extinctions: dissolved oxygen became more widespread and penetrated to greater depths; the development of life on land reduced the run-off of nutrients and hence the risk of eutrophication and anoxic events; and marine ecosystems became more diversified so that food chains were less likely to be disrupted.186187
- Reasonably complete fossils are very rare, most extinct organisms are represented only by partial fossils, and complete fossils are rarest in the oldest rocks. So paleontologists have mistakenly assigned parts of the same organism to different genera, which were often defined solely to accommodate these finds – the story of Anomalocaris is an example of this. The risk of this mistake is higher for older fossils because these are often unlike parts of any living organism. Many of the "superfluous" genera are represented by fragments which are not found again and the "superfluous" genera appear to become extinct very quickly.185
Biodiversity in the fossil record, which is
- "the number of distinct genera alive at any given time; that is, those whose first occurrence predates and whose last occurrence postdates that time"188
Oxygenic photosynthesis accounts for virtually all of the production of organic matter from non-organic ingredients. Production is split about evenly between land and marine plants, and phytoplankton are the dominant marine producers.189
The processes that have driven evolution are still operating. Well-known examples include the changes in coloration of the peppered moth over the last 200 years, the more recent appearance of pathogens that are resistant to antibiotics,190191 and the shortened wingspan, facilitating emergency takeoffs, of swallows that make their nests around automobile bridges.192 There is even evidence that humans are still evolving, and possibly at an accelerating rate over the last 40,000 years.193
- Constructal law
- Evolution of mammals
- Evolutionary history of plants
- History of evolutionary thought
- Taxonomy of commonly fossilised invertebrates
- Timeline of evolution
- Treatise on Invertebrate Paleontology
- Name given as in Butterfield's paper "Bangiomorpha pubescens ..." (2000). A fossil fish, also from China, has also been named Qingshania. The name of one of these will have to change.
- Myxozoa were thought to be an exception, but are now thought to be heavily modified members of the Cnidaria: Jímenez-Guri, E., Philippe, H., Okamura, B. and Holland, P. W. H. (July 2007). "Buddenbrockia is a cnidarian worm". Science 317 (116): 116–118. Bibcode:2007Sci...317..116J. doi:10.1126/science.1142024. PMID 17615357. Retrieved 2008-09-03.
- Futuyma, Douglas J. (2005). Evolution. Sunderland, Massachusetts: Sinauer Associates, Inc. ISBN 0-87893-187-2.
- Nisbet, E.G., and Fowler, C.M.R. (December 7, 1999). "Archaean metabolic evolution of microbial mats". Proceedings of the Royal Society: Biology 266 (1436): 2375. doi:10.1098/rspb.1999.0934. PMC 1690475. - abstract with link to free full content (PDF)
- Anbar, A.; Duan, Y.; Lyons, T.; Arnold, G.; Kendall, B.; Creaser, R.; Kaufman, A.; Gordon, G. et al. (2007). "A whiff of oxygen before the great oxidation event?". Science 317 (5846): 1903–1906. Bibcode:2007Sci...317.1903A. doi:10.1126/science.1140325. PMID 17901330.
- Knoll, Andrew H.; Javaux, E.J, Hewitt, D. and Cohen, P. (2006). "Eukaryotic organisms in Proterozoic oceans". Philosophical Transactions of the Royal Society of London, Part B 361 (1470): 1023–38. doi:10.1098/rstb.2006.1843. PMC 1578724. PMID 16754612.
- Fedonkin, M. A. (March 2003). "The origin of the Metazoa in the light of the Proterozoic fossil record" (PDF). Paleontological Research 7 (1): 9–41. doi:10.2517/prpsj.7.9. Retrieved 2008-09-02.
- Bonner, J.T. (1998) The origins of multicellularity. Integr. Biol. 1, 27–36
- Fedonkin, M. A.; Simonetta, A.; Ivantsov, A. Y. (2007). "New data on Kimberella, the Vendian mollusc-like organism (White Sea region, Russia): palaeoecological and evolutionary implications". Geological Society, London, Special Publications 286: 157–179. doi:10.1144/SP286.12. Retrieved May 16, 2013.
- "The oldest fossils reveal evolution of non-vascular plants by the middle to late Ordovician Period (~450-440 m.y.a.) on the basis of fossil spores" Transition of plants to land
- Algeo, T.J.; Scheckler, S. E. (1998). "Terrestrial-marine teleconnections in the Devonian: links between the evolution of land plants, weathering processes, and marine anoxic events". Philosophical Transactions of the Royal Society B: Biological Sciences 353 (1365): 113–130. doi:10.1098/rstb.1998.0195.
- Chen, J-Y.; Oliveri, P; Li, CW; Zhou, GQ; Gao, F; Hagadorn, JW; Peterson, KJ; Davidson, EH (2000). "Putative phosphatized embryos from the Doushantuo Formation of China". Proceedings of the National Academy of Sciences 97 (9): 4457–4462. Bibcode:2000PNAS...97.4457C. doi:10.1073/pnas.97.9.4457. PMC 18256. PMID 10781044. Retrieved 2009-04-30.
- Shu et al. (November 4, 1999). "Lower Cambrian vertebrates from south China". Nature 402 (6757): 42–46. Bibcode:1999Natur.402...42S. doi:10.1038/46965.
- Hoyt, Donald F. (1997). "Synapsid Reptiles".
- Barry, Patrick L. (January 28, 2002). "The Great Dying". Science@NASA. Science and Technology Directorate, Marshall Space Flight Center, NASA. Retrieved March 26, 2009.
- Tanner LH, Lucas SG & Chapman MG (2004). "Assessing the record and causes of Late Triassic extinctions" (PDF). Earth-Science Reviews 65 (1–2): 103–139. Bibcode:2004ESRv...65..103T. doi:10.1016/S0012-8252(03)00082-5. Archived from the original on October 25, 2007. Retrieved 2007-10-22.
- Benton, M.J. (2004). Vertebrate Palaeontology. Blackwell Publishers. ISBN 0-632-05614-2.
| http://www.bioscience.ws/encyclopedia/index.php?title=Evolutionary_history_of_life | 13
20 | In order to teach creativity, one must teach creatively; that is, it will take a great deal of creative effort to bring out the most creative thinking in your classes. Of course, creativity is not the only required element for creative instructors. They must also know their fields and know how to create an appropriate learning environment. When will it be most important for you to offer direct instruction? When is discovery most important? What are your expectations and how can you best communicate them?
Because answers to these questions are so diverse — even for individual instructors teaching different courses or at various times of the semester — no one technique will fit all needs. Here are several approaches or techniques for teaching creatively, both general and specific to certain fields. More examples of field-specific approaches or techniques appear in the Creative teachers section.
These creative thinking techniques were culled from the Internet and summarized by Yao Lu, a graduate student in AESHM (Apparel, Educational Studies, and Hospitality Management). Some of the techniques listed below are used in business training or in K-12 settings but can easily be adapted for college students.
What: An assumption is an unquestioned, assumed truth. Assumption busting is particularly effective when one is stuck in current thinking paradigms or has run out of ideas.
Benefits: Everyone makes assumptions about how the world around us works, and in creative situations these assumptions can prevent us from seeing or generating possibilities. Deliberately seeking out and addressing previously unquestioned assumptions stimulates creative thinking.
How: List the assumptions associated with a task or problem, for example: that a solution is impossible due to time and cost constraints; that something works because of certain rules or conditions; or that people believe, need, or think certain things. Then ask under what conditions these assumptions are not true, and continue the process of examination as old assumptions are challenged and new ones are created. An alternative way of proceeding is to find ways to force assumptions to be true, which is the opposite of challenging assumptions in the previous step.
What: Brainstorming, a useful tool for developing creative solutions to a problem, is a lateral thinking process by which students are asked to develop ideas or thoughts that may seem crazy or shocking at first. Participants can then change and improve them into original and useful ideas. Brainstorming can help define an issue, diagnose a problem, or identify possible solutions and resistance to proposed solutions.
How: Define the problem clearly and lay out any criteria to be met. Keep the session focused on the problem, but be sure that no one criticizes or evaluates ideas during the session, even if they are clearly impractical. Criticism dampens creativity in the initial stages of a brainstorming session. Ideas should be listed, rather than developed deeply on the spot; the idea is to generate possibilities. Accordingly, participants should be encouraged to pick up on ideas offered to create new ones. One person should be appointed as note-taker, and ideas should be studied and evaluated after the session.
Negative (or Reverse) Brainstorming
What: Negative brainstorming involves analyzing a short list of existing ideas, rather than the initial massing of ideas as in conventional brainstorming. Examining potential failures is relevant when an idea is new or complex or when there is little margin for error. Negative brainstorming raises such questions as: "What could go wrong with this project?"
Benefits: Reverse brain-storming is valuable when it is difficult to identify direct solutions to a problem.
How: After clearly defining a problem or challenge, ask "How could I cause this problem?" or "How could I make things worse?" As with brainstorming, allow ideas to flow freely without rejecting any. Evaluating these negative ideas can lead to possible positive solutions. See also Negative Brainstorming.
What: Concept maps represent knowledge in graphic form. Networks consist of nodes, which represent concepts, and links, which represent relationships between concepts.
Benefits: Concept maps can aid in generating ideas, designing complex structures, or communicating complex ideas. Because they make explicit the integration of old and new knowledge, concept maps can help instructors assess students' understanding.
How: Create a focus question specifying the problem or issue the map should help resolve. List the key concepts (roughly 20-25) that apply to the area of knowledge. Put the most general, inclusive concepts at the top of the list, and most specific at the bottom.
Build a hierarchical organization of the concepts, using post-its on a wall or whiteboard, large sheets of paper, etc. Revision is a key element in concept mapping, so participants need to be able to move concepts and reconstruct the map. Seek cross links between concepts, adding linking words to the lines between concepts.
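For classes that build their maps digitally rather than with post-its, a concept map can also be captured as a simple list of (concept, linking word, concept) triples. The sketch below is a minimal Python illustration; the concepts and linking words are invented for this example and are not taken from the text above.

```python
# Minimal sketch: a concept map stored as labeled links between concepts.
# The concepts and linking words here are invented purely for illustration.
concept_map = [
    ("living things", "include", "plants"),
    ("living things", "include", "animals"),
    ("plants", "carry out", "photosynthesis"),
    ("photosynthesis", "requires", "sunlight"),
]

# Print one relationship per line so students can review and revise the map.
for parent, link, child in concept_map:
    print(f"{parent} --{link}--> {child}")
```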
What: In most role-playing exercises, each student takes the role of a person affected by an issue and studies an issue or events from the perspective of that person.
How: Role plays should give the students an opportunity to practice what they have learned and should interest the students. Provide concrete information and clear role descriptions so that students can play their roles with confidence. Once the role play is finished, spend some time on debriefing. See also Role-Playing Games: An Overview.
What: Story-boarding can be compared to spreading students' thoughts out on a wall as they work on a project or solve a problem. Story boards can help with planning, ideas, communications and organization.
Benefits: This method allows students to see the interconnections, how one idea relates to another, and how the pieces come together. Once the ideas flow, students become immersed in the problem and hitch-hike on one another's ideas.
How: Use a cork board or similar surface to pin up index cards, or use software such as CorkBoard. Begin with a set of topic cards, and under each place header cards for general points, categories, etc. Under these, place sub-heading cards that will contain the ideas and details generated to support the headers.
During a story board session, consider all ideas relevant, no matter how impractical they appear.
What: DO IT stands for Define problems, be Open to many possible solutions, Identify the best solution and then Transform it into effective action.
Ten catalysts or prompts are designed to help students with each of these steps.
Benefits: DO IT accelerates and strengthens one's natural creative problem-solving ability and stimulates a large number of good, diverse ideas.
When time allows, students can take advantage of incubation (unconscious thinking) and research processes (find out what ideas have already been tried).
What: Random input, a lateral thinking tool, is useful for generating fresh ideas or new perspectives during problem solving.
Benefits: It offers new perspectives on a problem, fosters creative leaps, and permits escape from restrictive thinking patterns.
How: Select a random noun, whether from a prepared set, from the dictionary, or from one's own list of 60 words. It is helpful to get new insight by selecting a word from outside the field being studied. List the word's attributes or associations, then apply each to the problem at hand. With persistence, at least one of these may catalyze a creative leap.
Example: Students thinking about reducing car pollution have so far considered all the conventional solutions, e.g. catalytic conversion and clean fuels. Selecting a random noun from the titles of books in a bookcase, a student may see "Plants." Brainstorming from this, the class could generate a number of new ideas, such as planting trees on the side of roads or passing exhaust gases through a soup of algae, to reduce carbon dioxide.
What: A decision tree is a visual and analytical decision support tool, often taught to undergraduate students in schools of business, health economics, and public health.
Benefits: They are simple to understand and interpret, have value even in the absence of hard data, and can be combined with other decision techniques.
Example: A decision tree used in a finance class for deciding the better investment strategy.
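For instructors who want to show the arithmetic behind such an example, the snippet below rolls back a one-stage decision tree by expected value. This is only a minimal sketch in Python; the investment options, probabilities, and payoffs are invented for illustration and are not from any particular course.

```python
# Sketch of a one-stage decision tree: choose the branch with the highest
# expected value. All probabilities and payoffs are invented for illustration.
investments = {
    "stocks": [(0.6, 12000), (0.4, -3000)],  # (probability, payoff) pairs
    "bonds":  [(1.0, 4000)],
}

def expected_value(outcomes):
    return sum(p * payoff for p, payoff in outcomes)

for name, outcomes in investments.items():
    print(name, expected_value(outcomes))      # stocks 6000.0, bonds 4000.0

best = max(investments, key=lambda k: expected_value(investments[k]))
print("choose:", best)                         # choose: stocks
```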
What: In this exercise in questioning, students create a list of 100 questions. There are no directions regarding what questions to ask and no judgments or criticism of questions.
Benefits: Students will ask a wide range of questions, increasing student productivity and motivation. As students focus on what they want to discover and generate their own questions, they pursue answers without prodding. Questions can be general or based on a particular topic or reading; instructors can give several examples from their own lists.
What: This method can gather ideas from large groups, numbering from the dozens to the hundreds. Participants are given slips of paper and asked to write down ideas which are discussed or evaluated.
Benefits: This method collects a large number of ideas swiftly and creates a sense of participation or ownership at the same time.
How: Each student is given a stack or note-pad of at least 25 small slips of paper. The pads can contain idea-jogging graphics or be designed so that ideas can be sorted and separated easily. A question or problem is read to the group (e.g., "How do we?" or "What would it take to?"). Students write down one idea per sheet, in any order. When writing begins to slow down, collect pads from students and offer quick feedback in the form of examples. If the group is very large, present examples from a limited sample of booklets. After the early feedback, analysis and evaluation can continue at a steadier pace to identify the most useful ideas and develop them into practicable proposals.
What: Laddering or the "why method" involves toggling between two levels of abstraction to create ideas. Laddering techniques involve the creation, review, and modification of hierarchical knowledge. In a ladder containing abstract ideas or concepts, the items lower down are members or sub-sets of the ones higher up, so one moves between the abstract and the concrete.
Benefits: Laddering can help students understand how an expert categorizes concepts into classes, and can help clarify concepts and their relationships.
How: Beginning with an existing idea, "ladder up" by asking: of what wider category is this an example? "Ladder down" by finding more examples. Then "ladder up" again by seeking an even wider category from the new examples obtained by laddering down.
Generally, "laddering up" toward the general allows expansion into new areas while "laddering down" focuses on specific aspects of these areas. Why questions are ladders up; so-what questions are ladders down.
See also Laddering Techniques.
What: Exaggeration includes the two forms of magnify (or "stretch") and minimize (or "compress"), part of the SCAMPER heuristic.
| Forms of Exaggeration | Type | Examples |
| Exaggerate upwards | Magnify | I have a million photocopiers standing idle |
| Exaggerate downwards | Minify | My photocopiers are barely used at all |
| Exaggerate scope | Invade context | The whole organization is underused |
| Exaggerate significance | Aggrandize | Our over-capacity is a national scandal |
| Exaggerate selectively | Caricature | Reprographics Rest Home! |
Benefits: This method helps in building ideas for solutions. It is useful to illustrate a problem, by testing unspoken assumptions about its scale. It helps one think about what would be appropriate if the problem were of a different order of magnitude.
How: After defining a problem to be addressed or idea to develop, list all the component parts of the idea or if a problem, its objectives and constraints. Choosing one component, develop ways of exaggerating it and note them on a separate sheet.
What: To solve a specific problem, students make sketches and then pass evolving sketches to their neighbors.
How: Students sit in a group of 6-8 around a table or in a circle. Questions or problems should be well explained and understood by each student. Each participant privately makes one or more sketches and passes the sketch to the person on the right when it is finished or when a brief set time has passed. Participants develop or annotate the sketches passed to them, or use them to inspire new sketches which are also passed in turn. For effective learning, sketches could be posted and discussed by students.
What: The reversal method takes a given situation and turns it around, inside out, backwards, or upside down. Any situation can be "reversed" in several ways.
Benefits: Looking at a familiar problem or situation in a fresh way can suggest new solutions or approaches. It doesn't matter whether the reversal makes sense or not.
Example: In a marketing class, instead of asking "how can management improve the store?" reversal questions can ask: How can the store improve management? How can the store improve itself? How can management make the store worse?
What: The fishbone technique uses a visual organizer to identify the possible causes of a problem.
Benefits: This technique discourages partial or premature solutions and demonstrates the relative importance of, and interactions between, different parts of a problem.
How: On a broad sheet of paper, draw a long arrow horizontally across the middle of the page pointing to the right. Label the arrowhead with the title of the issue to be explained. This is the "backbone" of the "fish." Draw "spurs" from this "backbone" at about 45 degrees, one for every likely cause of the problem that the group can think of; and label each. Sub-spurs can represent subsidiary causes. The group considers each spur/sub-spur, taking the simplest first, partly for clarity but also because a simple explanation may make more complex ones unnecessary. Ideally, the fishbone is redrawn so that position along the backbone reflects the relative importance of the different parts of the problem, with the most important at the head.
The Mystery Spot
What: Instructors set up a mystery story (videos, animations) that involves a key concept such as DNA. Students try to solve the mystery by applying their knowledge. Meanwhile, the story evolves as students investigate the problem, allowing the instructor to incorporate different knowledge/concepts and different knowledge depths.
Benefits: The mystery integrates science learning within an exciting narrative. The narratives have wide appeal and involve students in learning. It is also a very flexible tool with which instructors can invent stories based on their lesson purposes/ targeted key points.
Example: The Blackout Syndrome
In this exercise, students are medical investigators. As a blackout paralyzes the city, they are called in to investigate the outbreak of a new disease. They need to take steps to identify how it is transmitted, characterize it, and figure out how to treat it.
The mystery tests literacy, problem-solving skills, and deductive reasoning. Students investigate why people have fallen ill, do lab tests to decide what kind of pathogen is involved, and work out how best to counter the disease. A conclusion offers further research readings.
Creativity Based Information Resources is a searchable database which includes items on creativity in many disciplines. In addition, you may try | http://www.celt.iastate.edu/creativity/techniques.html | 13 |
42 | In logic, an argument is valid if and only if its conclusion is logically entailed by its premises and each step in the argument is logical. A formula is valid if and only if it is true under every interpretation, and an argument form (or schema) is valid if and only if every argument of that logical form is valid.
Validity of arguments
An argument is valid if and only if the truth of its premises entails the truth of its conclusion and each step, sub-argument, or logical operation in the argument is valid. Under such conditions it would be self-contradictory to affirm the premises and deny the conclusion. The corresponding conditional of a valid argument is a logical truth and the negation of its corresponding conditional is a contradiction. The conclusion is a logical consequence of its premises.
An argument that is not valid is said to be "invalid".
- All men are mortal.
- Socrates is a man.
- Therefore, Socrates is mortal.
What makes this a valid argument is not that it has true premises and a true conclusion, but the logical necessity of the conclusion, given the two premises. The argument would be just as valid were the premises and conclusion false. The following argument is of the same logical form but with false premises and a false conclusion, and it is equally valid:
- All cups are green.
- Socrates is a cup.
- Therefore, Socrates is green.
No matter how the universe might be constructed, it could never be the case that these arguments should turn out to have simultaneously true premises but a false conclusion. The above arguments may be contrasted with the following invalid one:
- All men are mortal.
- Socrates is mortal.
- Therefore, Socrates is a man.
In this case, the conclusion does not follow inescapably from the premises. All men are mortal, but not all mortals are men. Every living creature is mortal; therefore, even though both premises are true and the conclusion happens to be true in this instance, the argument is invalid because it depends on an incorrect operation of implication. Such fallacious arguments have much in common with what are known as howlers in mathematics.
A standard view is that whether an argument is valid is a matter of the argument's logical form. Many techniques are employed by logicians to represent an argument's logical form. A simple example, applied to two of the above illustrations, is the following: Let the letters 'P', 'Q', and 'S' stand, respectively, for the set of men, the set of mortals, and Socrates. Using these symbols, the first argument may be abbreviated as:
- All P are Q.
- S is a P.
- Therefore, S is a Q.
Similarly, the third argument becomes:
- All P are Q.
- S is a Q.
- Therefore, S is a P.
An argument is formally valid if its form is one such that for each interpretation under which the premises are all true, the conclusion is also true. As the discussion above shows, the second argument form can be given an interpretation with true premises and a false conclusion (for example, one in which S denotes a mortal creature that is not a man), hence demonstrating its invalidity.
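One way to make this notion concrete is to search for counterexamples by brute force: an argument form is shown invalid as soon as some interpretation makes every premise true and the conclusion false. The sketch below does this for the two schemas above over a small finite universe. The three-element universe and the Python encoding are illustrative choices only, and the absence of counterexamples over a finite universe does not by itself prove validity for arbitrary domains.

```python
from itertools import chain, combinations

UNIVERSE = {0, 1, 2}  # a small illustrative domain

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def counterexamples(form):
    """Yield interpretations (P, Q, s) where all premises hold but the conclusion fails."""
    for P in map(set, subsets(UNIVERSE)):
        for Q in map(set, subsets(UNIVERSE)):
            for s in UNIVERSE:
                premises, conclusion = form(P, Q, s)
                if all(premises) and not conclusion:
                    yield P, Q, s

# Schema 1: All P are Q; S is a P; therefore S is a Q.
schema_1 = lambda P, Q, s: ([P <= Q, s in P], s in Q)
# Schema 2: All P are Q; S is a Q; therefore S is a P.
schema_2 = lambda P, Q, s: ([P <= Q, s in Q], s in P)

print(list(counterexamples(schema_1)))   # [] : no counterexample found
print(next(counterexamples(schema_2)))   # e.g. (set(), {0}, 0): premises true, conclusion false
```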
Valid formula
A formula of a formal language is a valid formula if and only if it is true under every possible interpretation of the language.
Validity of statements
A statement can be called valid, i.e. a logical truth, if it is true in all interpretations.
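For propositional statements this definition can be checked mechanically by enumerating every assignment of truth values. The following is a minimal sketch in Python, with illustrative example formulas:

```python
from itertools import product

def is_valid(formula, arity):
    """True when the formula is true under every assignment of truth values."""
    return all(formula(*values) for values in product([True, False], repeat=arity))

# (p -> q) or (q -> p) holds under all four assignments, so it is valid:
print(is_valid(lambda p, q: ((not p) or q) or ((not q) or p), 2))  # True
# p and q fails when p=True, q=False, so it is satisfiable but not valid:
print(is_valid(lambda p, q: p and q, 2))  # False
```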
Validity and soundness
Validity of deduction is not affected by the truth of the premises or the truth of the conclusion. The following deduction is perfectly valid:
- All animals live on Mars.
- All humans are animals.
- Therefore, all humans live on Mars.
The problem with the argument is that it is not sound. In order for a deductive argument to be sound, the deduction must be valid and all the premises true.
Satisfiability and validity
Model theory analyzes formulae with respect to particular classes of interpretation in suitable mathematical structures. On this reading, a formula is valid if all such interpretations make it true. An inference is valid if all interpretations that validate the premises validate the conclusion. This is known as semantic validity.
In truth-preserving validity, the interpretation under which all variables are assigned the truth value 'true' produces the truth value 'true' for the whole formula.
In false-preserving validity, the interpretation under which all variables are assigned the truth value 'false' produces the truth value 'false' for the whole formula.
Preservation properties of the logical connectives:
- True- and false-preserving: logical conjunction (AND, ∧), logical disjunction (OR, ∨)
- True-preserving only: tautology (⊤), biconditional (XNOR, ↔), implication (→), converse implication (←)
- False-preserving only: contradiction (⊥), exclusive disjunction (XOR, ⊕), nonimplication (↛), converse nonimplication (↚)
- Non-preserving: proposition, negation (¬), alternative denial (NAND, ↑), joint denial (NOR, ↓)
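These preservation claims can be spot-checked mechanically. The following short sketch is an illustration of my own (not from the source) that evaluates each binary connective on the all-true and the all-false assignments:

```python
# Each connective maps a pair of truth values to a truth value.
CONNECTIVES = {
    "AND (conjunction)":           lambda p, q: p and q,
    "OR (disjunction)":            lambda p, q: p or q,
    "implication":                 lambda p, q: (not p) or q,
    "biconditional (XNOR)":        lambda p, q: p == q,
    "exclusive disjunction (XOR)": lambda p, q: p != q,
    "alternative denial (NAND)":   lambda p, q: not (p and q),
    "joint denial (NOR)":          lambda p, q: not (p or q),
}

for name, f in CONNECTIVES.items():
    true_preserving = bool(f(True, True))    # all-true inputs yield true
    false_preserving = not f(False, False)   # all-false inputs yield false
    print(f"{name:30} true-preserving={true_preserving}  false-preserving={false_preserving}")
```

Running it reproduces the list above: conjunction and disjunction preserve both values, implication and the biconditional preserve only truth, exclusive disjunction preserves only falsehood, and NAND and NOR preserve neither.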
| http://en.wikipedia.org/wiki/Validity | 13 |
35 | What is a Conclusion?
A conclusion is the end of a discussion: the stopping point of a detailed argument, after which the rest of the debate is over. Whether you are writing a research paper, an essay, or a thesis, you need strong concluding remarks to leave an impression on your readers.
Writing a good conclusion is not easy. You need a stock of catchy words and phrases that can make your writing more interesting to your audience, and you can also look at online examples before writing your own conclusions.
Legal Definition of a Conclusion
Technically, a conclusion can be defined as the end of a discussion and the point where the final arguments are made. After the concluding remarks, nothing new may be presented, not even something the writer forgot to mention earlier. It is the place where final decisions or arguments are stated.
Some more Definitions of a Conclusion
Here are some more definitions of a conclusion,
- An Ending or Finishing Point: As already mentioned, a conclusion is a finishing point. It sums up the whole discussion within a single paragraph.
- Conclusion as an Outcome: A conclusion can be defined as an outcome, the place where the results of an experiment, action, or theory are stated. In this sense it can be taken in more technical terms.
- Conclusions as a Judgment: Research papers or essays that involve discussion always require some decision or judgment at the end. So the conclusion can be defined as the portion of the paper where a final decision or judgment is made after the discussion is complete.
- Arrangement of Ideas: Ideas are also arranged and settled within the body of a conclusion.
What do Readers expect from a Conclusion?
If you want to understand the true meaning and definition of a conclusion, you should concentrate on what your readers expect from a closing paragraph. Most often, readers expect certain things from a well-written conclusion. Remember that your readers have read several pages before reaching your conclusion, so your ending paragraph should be engaging enough to reward them.
- The ending paragraph should summarize all the basic ideas presented within an essay or research paper.
- If you are discussing an issue, then you should present your personal opinion on that subject in the concluding remarks.
- While drawing conclusions, you can also rephrase the questions.
- If you are considering an issue in your essay or research then add future implications if the situation changes or continues.
An Important Aspect of Writing a Good Conclusion
While drawing conclusions, keep one important point in mind: never add new information in the last part of your essay. In the ending paragraph, the audience or readers do not expect anything new. A conclusion should contain precise information. Do not repeat concepts and words that have already been used; instead, draw on synonyms and related words to make your conclusion more attractive.
Writing a Good Conclusion
From the definitions above, it is now crystal clear that the conclusion is meant to indicate the end of a research paper or essay and to summarize the basic points of the discussion. So it must be written keeping in mind the following aims.
- To encourage the audience or readers
- To intensify the points made in an essay
- To arouse the emotions of the readers
- To restate arguments and facts with logic
Check out our article on how to write conclusions for additional tips and guidelines on writing better conclusions.
Difference between a Conclusion and a Summary
Some people confuse a summary with a conclusion. Keep in mind that they are quite different approaches to ending an essay. A summary restates, in a precise manner, points that have already been made in the body of the essay. A conclusion, on the other hand, is an ending paragraph that offers a more interesting and distinct approach; it does not simply replicate what you have already done in your essay.
So be careful that your conclusion is neither a simple restatement of your essay nor a summary; it must go beyond them. It may deliver a judgment on your thesis or take a position on an issue. Findings and implications are also discussed in a conclusion, and sometimes directives are put forward as concluding remarks.
Drawing Conclusions as a Logical Reasoning
A conclusion can be further defined as a proposition arrived at through logical reasoning. It is a paragraph based on rational and logical statements that can affirm or deny something.
In essays and research papers, we use information from different sources, including books, websites, and our own life experiences, and the end goal is always to draw a conclusion. Consider, for example, the tale of Little Red Riding Hood. In the story, the little girl is made a fool of by the cunning wolf because she trusted him too readily. From our own experience we understand that this is not wise and that it does not make sense to trust strangers.
So, with our own experiences and those put forward in the story, we can draw the logical conclusion that we should be careful about whom we trust, that we should not take needless chances and risks in life, and that we should never trust strangers. This is the logical, rational approach to writing a conclusion.
The same applies to many other cases. If we write a biography of a person, for example, we can draw conclusions about that person's character from the information we already know, so a biography can help us reach a conclusion about the kind of personality being described.
Words and Phrases for a Conclusion
Here are some strong concluding remarks and expressions that may assist you for the ending paragraph of your essay or research paper writing.
- In Conclusion to…
- Now that you are armed with the above mentioned information…
- This is the convenient way to…
- Now get out and start thinking…
- Believe me, once you start applying this…
- Go ahead…
- The net result of the discussion is…
So, you can search out or formulate several catchy remarks of the same sort based upon your information, issue, idea or discussion. | http://www.writeawriting.com/academic-writing/definition-of-conclusion/ | 13 |
31 | You will find critical thinking skills referenced in your courses, program objectives, information about job opportunities, and professional development and workplace training. What is critical thinking and why is it so important? Being able to think critically, which is evaluative rather than negative, allows us to make better decisions and solve complex problems in both personal and professional contexts. Definitions of critical thinking are not hard to find, but the following examples help us to understand the basic concept and how it applies to learning:
"Critical thinking is a habit of mind characterized by the comprehensive exploration of issues, ideas, artifacts, and events before accepting or formulating an opinion or conclusion." - Association of American Colleges and Universities
"Critical thinking is that mode of thinking – about any subject, content, or problem – in which the thinker improves the quality of his or her thinking by skillfully taking charge of the structures inherent in thinking and imposing intellectual standards upon them." - CriticalThinking.org
The specific components of thinking critically involve a set of skills and characteristics that are relevant across academic disciplines and types of employment. Critical thinking is:
- Open-minded – You are willing to engage in the critical thinking process and to consider multiple options, sources, perspectives, and possible solutions.
- Self-aware – We are each subject to biased thinking based on our own values, viewpoints, and experiences. To think critically you must recognize the potential of these factors to influence your decisions both positively and negatively.
- Inquisitive – Critical thinking requires the active gathering of information through probing questions and research as a basis for making decisions and judging quality; not just accepting an initial thought or proposed solution.
- Evaluative – Critical thinking includes careful assessment of the importance, relevance, and validity of all information gathered.
- Reasoned – Before drawing any conclusions about a given situation or problem, possible solutions are considered and evaluated with the information gathered and your prior knowledge.
Critical Thinking and Online Learning
Robert H. Ennis, Emeritus Professor of Philosophy of Education at the University of Illinois, presents 21 strategies for teaching critical thinking skills. This list highlights three underlying tactics: Reflection, Reasoning and Alternative Hypotheses. These strategies, when implemented into curricula, require you to purposefully think through your decisions, carefully weighing all of the available information. This process allows you to ask questions that encourage development of reasons for your views and those of others. You also become aware of other possible explanations and a more extended set of available, relevant resources.
Critical thinking is often included in standardized approaches to learning objectives. Take a closer look at several examples from Georgia State University, Clark State Community College, and Arizona Western College. You may find similar references in your course and program learning outcomes. These are addressed through a variety of activities and assessments such as case studies, discussion, and writing assignments. Here are a few online resources for further research on critical thinking skills in education:
- StudyGuides&Strategies.net includes a segment on critical thinking, complete with an interactive exercise that walks the user through specific steps of critical thinking as part of project development.
- The Foundation for Critical Thinking developed an Online Model for Learning the Elements and Standards of Critical Thinking that offers a detailed process for analyzing thinking. This approach provides a line of questioning for each of eight elements of thought, such as concepts, assumptions, and consequences.
- Argument mapping tools, such as TruthMapping.com and others, provide ways to dissect more complex problems and create visual representations of the components and their relationships to one another.
- The Association of American Colleges and Universities developed a Critical Thinking Value Rubric [PDF] to help assess inquiry and analysis across disciplines. Explanation, evidence, influence of context and assumptions, student perspective, and conclusions are each measured against four benchmarks.
Demonstrating Critical Thinking Skills
An infographic from Pearson Education lists specific job titles and categorizes them into three levels of critical thinking based on what is required to perform the jobs. Many vacancy announcements and job descriptions include a reference of some kind to critical thinking and related skills. How can you demonstrate your skills to a potential employer?
Describe your skills. Specifically list critical thinking examples and experience in your resume. Describe the types of decisions you made at various positions and problems you were responsible for solving. Search for job listings that include the term critical thinking for examples of descriptions and wording that might be helpful as you reflect on your own skills and abilities in these areas.
Tell a story about your skills. Prepare several brief narratives about your critical thinking skills in action. Be able to relay these stories about your experiences, and your use of critical thinking elements, in response to interview questions such as "Tell us about a current issue you are working on" and "Describe a challenging situation you faced and how you handled it." Your examples can come from a variety of experiences in school, internship, and job contexts. You can also anticipate a type of interview question that targets problem solving such as, "Why is a manhole cover round?"
Provide evidence of your skills. When possible, provide documented evidence of your critical thinking as part of a career portfolio. These artifacts may be assignments from academic courses, excerpts from reports and presentations of successful problem resolution on-the-job, and formal performance evaluations from past and current supervisors. All should illustrate or address your ability to apply critical thinking skills in the "real world."
For additional information and resources, review the 35 Dimensions of Critical Thought at CriticalThinking.org, Critical Thinking in Education presented by the American Scientific Affiliation, and creative and critical thinking resources from the U.S. Air Force's Air University website. Continue your exploration and refine your abilities to think critically. | http://www.onlinecollege.org/2011/10/19/wanted-critical-thinkers/ | 13 |
16 | Etymology: From Medieval Latin, "things mentioned before"
Examples and Observations:
- "Logic is the study of argument. As used in this sense, the word means not a quarrel (as when we 'get into an argument') but a piece of reasoning in which one or more statements are offered as support for some other statement. The statement being supported is the conclusion of the argument. The reasons given in support of the conclusion are called premises. We may say, 'This is so (conclusion) because that is so (premise).' Or, 'This is so and this is so (premises), therefore that is so (conclusion).' Premises are generally preceded by such words as because, for, since, on the ground that, and the like."
(S. Morris Engel, With Good Reason: An Introduction to Informal Fallacies, 3rd ed., St. Martin's, 1986)
- "Here is a simple example of reasoning about the nature/nurture issue:
Identical twins sometimes have different IQ test scores. Yet these twins inherit exactly the same genes. So environment must play some part in determining a person's IQ.
Logicians call this kind of reasoning an argument. In this case, the argument consists of three statements:
- Identical twins often have different IQ scores.
- Identical twins inherit the same genes.
- So environment must play some part in determining IQ.
(Howard Kahane and Nancy Cavender, Logic and Contemporary Rhetoric, 8th ed., Wadsworth, 1998)
- "Here's another example of an argument. In fall 2008, before Barack Obama was elected US president, he was far ahead in the polls. But some thought he'd be defeated by the 'Bradley effect,' whereby many whites say they'll vote for a black candidate but in fact don't. Barack's wife Michelle, in a CNN interview with Larry King (October 8), argued that there wouldn't be a Bradley effect:
Barack Obama is the Democratic nominee.
If there was going to be a Bradley effect, Barack wouldn't be the nominee [because the effect would have shown up in the primary elections].
[Therefore] There isn't going to be a Bradley effect.
Once she gives this argument, we can't just say, 'Well, my opinion is that there will be a Bradley effect.' Instead, we have to respond to her reasoning. It's clearly valid--the conclusion follows from the premises. Are the premises true? The first premise was undeniable. To dispute the second premise, we'd have to argue that the Bradley effect would appear in the final election but not in the primaries, but it's unclear how one might defend this. So an argument like this changes the nature of the discussion. (By the way, there was no Bradley effect when the general election took place a month later.)
(Harry Gensler, Introduction to Logic, 2nd ed. Routledge, 2010) | http://grammar.about.com/od/pq/g/premiseterm.htm | 13 |
17 | 69 Slides! And notes!
Search words: DNA, deoxyribonucleic acid, RNA, ribonucleic acid, chromosomes, nucleotide, Watson and Crick, nitrogen base, adenine, thymine, cytosine, guanine, uracil, double helix, replication, transcription, translation, protein synthesis, genetic code, amino acids, codons, anticodons, gene mutations.
If you have never taught a lesson using a Powerpoint presentation, you have to give it a try! My students always prefer this to a traditional lecture.
PLEASE NOTE: I have listed on TpT two different versions of this PowerPoint. This version is slightly longer and is a little more advanced. Please read the description carefully. The shorter version can be found by clicking here: DNA, RNA Powerpoint and Notes - Shorter Version
This powerpoint is on DNA, RNA and Protein Synthesis. It consists of 69 slides that are colorful, informative and visually stimulating. Pictures and diagrams are included that will greatly enhance your instruction to your students. This product also includes a set of notes for the teacher (15 pages) and a set of notes for the student (17 pages). You receive two versions of the notes: A PDF version and a Word document so that you can make changes if needed.
PLEASE NOTE: This PowerPoint is also included in a bundled unit plan containing 20 different products on DNA, RNA, and protein synthesis. You can view the bundled unit plan by clicking here: DNA, RNA, and Protein Synthesis Complete Unit Plan.
Please download my free preview. The preview consists of the first 15 slides of the presentation with the corresponding notes for the teacher and corresponding notes for the student.
Topics covered are:
1) History of DNA Studies: Early concepts about genes, roles played by genes, Watson and Crick, Chargaff, Rosalind Franklin, Maurice Wilkins.
2) DNA nucleotides and their composition, purines, pyrimidines, how nucleotides are joined to one another.
3) The Watson and Crick model of DNA, the double helix, the base pairing rules, DNA as a carrier of information
4) Mechanisms by which a large amount of DNA can fit inside a small space of the nucleus, histones, nucleosomes
5) Replication of DNA: Drawings, steps in replication, DNA polymerase, hydrogen bonds, origins or replication, replication forks, complimentary strands, helicase
6) Proofreading and repairing DNA: the role of polymerases, nucleases, and ligases.
7) The Genetic Code: codons, amino acids
8) RNA: Differences between DNA and RNA, the functions of RNA, the three types of RNA (mRNA, tRNA, rRNA).
9) Transcription: Purpose of transcription, steps to transcription, promoter, terminator, RNA polymerase
10) RNA processing and editing, introns, exons
11) Understanding the genetic code, how codons translate into amino acids, start codons, stop codons, reading a table of codons
12) Translation: protein synthesis, ribosomes, steps in translation, the role of mRNA and the role of tRNA, structure of tRNA and its ability to transport amino acids, reading and deciphering the code.
13) Practice Problem: Given a DNA sequence, students will figure out the corresponding mRNA codon sequence, the tRNA anticodon sequence, and the amino acid sequence (see the illustrative sketch after this list).
14) Ribosome structure: large subunit, small subunit, RNA binding sites, the entry and exit of tRNA to build a polypeptide, start codons, stop codons.
15) Mutations: definition of mutation, gene mutations, chromosome mutations, point mutations, base substitutions, base insertions or deletions, silent mutations, frameshift mutations.
16) The importance of mutations in natural selection.
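As a rough, hypothetical illustration of the kind of reasoning the practice problem in item 13 asks for (this sketch is my own and is not part of the product; it uses only a small slice of the 64-entry codon table):

```python
# Complement rules: DNA template strand -> mRNA, and mRNA codon -> tRNA anticodon.
DNA_TO_MRNA = {"A": "U", "T": "A", "G": "C", "C": "G"}
MRNA_TO_TRNA = {"A": "U", "U": "A", "G": "C", "C": "G"}

# A small, partial codon table (the full table has 64 entries).
CODON_TABLE = {
    "AUG": "Met (start)", "UUU": "Phe", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def transcribe(dna_template):
    """Build the mRNA strand complementary to the DNA template strand."""
    return "".join(DNA_TO_MRNA[base] for base in dna_template)

def codons(mrna):
    """Split the mRNA into three-base codons."""
    return [mrna[i:i + 3] for i in range(0, len(mrna), 3)]

def anticodons(mrna):
    """tRNA anticodons are complementary to the mRNA codons."""
    return ["".join(MRNA_TO_TRNA[b] for b in codon) for codon in codons(mrna)]

def translate(mrna):
    """Look up each codon in the (partial) codon table."""
    return [CODON_TABLE.get(codon, "?") for codon in codons(mrna)]

dna = "TACAAACCG"               # hypothetical template strand
mrna = transcribe(dna)          # "AUGUUUGGC"
print(mrna, anticodons(mrna), translate(mrna))
# codons AUG UUU GGC -> anticodons UAC AAA CCG -> Met (start), Phe, Gly
```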
This powerpoint was written with a typical biology I class in mind. It can easily be edited. Middle school teachers may simply want to delete a few of the slides. Honors teachers can easily include a few extra details.
Also included is a set of notes to accompany this powerpoint. It includes a complete set of notes for the teacher, and an outline of the notes for the students. Students will use the outline as the powerpoint is being presented and will fill in the notes as the lesson is being taught.
Also included is a chart of all 64 codons and the amino acids they code for.
Please let me know if you have any questions. Thanks for looking.
You might be interested in these related products:
DNA, RNA, and Protein Synthesis: Complete Unit Plan of 20 products
Lab: Effect of Environment on Gene Expression
DNA Powerpoint Jeopardy Game
Lab: Chromosome Squashes
Lab: Determining the Traits of a Mystery Organism Through Protein Synthesis
DNA, RNA and Protein Synthesis Crossword Puzzle
Test: DNA, RNA, and protein synthesis
DNA Quiz or Homework (Advanced)
Crossword Puzzle: DNA and Replication
DNA, RNA and Protein Synthesis Worksheet or Study Guide
Test: DNA and Replication
DNA, RNA, Protein Synthesis Set of 4 Homework Assignments
Homework #1: The Basics of DNA
Homework #2: DNA Replication
Homework #3: RNA and Transcription
Homework #4: Protein Synthesis and Translation
DNA, RNA, Protein Synthesis - Set of 3 quizzes
Quiz #1: DNA and Replication
Quiz #2: RNA and Transcription
Quiz #3: Translation and Protein Synthesis
RNA and Transcription Powerpoint Jeopardy Review Game
Translation and Protein Synthesis Powerpoint Jeopardy Review Game
DNA and RNA Powerpoint Jeopardy REview Games - Set of Three Games
Test: RNA, transcription and translation
Biology Unit on CD: DNA, RNA, Protein Synthesis
FREE Chart of Amino Acids and Their Codons
Amy Brown Science Stuff | http://www.teacherspayteachers.com/Product/DNA-Deoxyribonucleic-Acid-RNA-Protein-Synthesis-Powerpoint-Notes-108935 | 13 |
54 | - 1. Introduction
- 2. The Label
- 3. The Explanation and Links:
- 4. The Examples
- 5. Do’s and Don’ts of Examples
- 6. Link to Motion
- 7. Special Section on Different Analysis Paradigms in Constructive
What is an argument? We know that arguments form the backbone of a Debater’s stand on a particular motion. We also know that the arguments are directed to the judges with the intent of making them agree with a particular stance on the motion. Thus, arguments are communications directed at judges with the intent of influencing them. An argument is best opened with a label, which highlights what the argument is about. After that, the speakers will have to give an explanation, using logical links, as to why their position is correct. Next, they will have to use examples to prove that their explanation and links apply to real life. Finally, they will link the argument back to the motion. The flow of the arguments should look like this:
Label of Argument
Explanation and logic
(This is the most salient or obvious example to support your argument.)
Link example to logic
(This is intended as a follow up to the primary example to show a trend or pattern developing. This is also to avoid allowing the other team to say that you are using an isolated example.)
Link to the Motion
2. The Label
The label should immediately identify what the argument is and how it relates to the motion. It should encapsulate the argument to follow within a single sentence and make it clear at the start of the argument what the speaker will elaborate on.
To ensure that a label is representative of the argument and addresses the motion, a good tip is to connect the label to the motion using the word “because” and see if the sentence still makes sense. For example, a speaker wishes to argue in favour of the death penalty based on its value to the justice system in deterring crime and considers the following three labels:
a. Justice
b. Value to Justice system
c. Deters crime.
An application of the test above readily shows which label is the best. “THW support the death penalty because of justice” does not make too much sense. “THW support the death penalty because of its value to the justice system” makes more sense. However, it remains vague. “THW support the death penalty because it deters crime” will be the best approach, since it clearly signals what the ensuing argument will be.
2.1. Tip on Pre-labels
Some debaters use “pre-labels” for stylistic purposes. This will involve the use of quotes or phrases with a flourish to introduce the argument. For instance, an argument on the dangers of technology may be pre-labelled as the “Rage against the Machine” point and an argument on nuclear disarmament could be pre-labelled as “Turning Swords into Plowshares.” This technique is perfectly acceptable as long as the speakers
a. do not waste time doing so, and
b. remember to use an actual label immediately after the pre-label.
3. The Explanation and Links:
The explanation is the most critical part of the argument, where the speaker outlines the key reasons why the motion stands or falls. The most effective means of convincing judges that a particular argument is valid is to demonstrate that the argument is universal. This means that the explanation of the argument is usually done in theory and in principle. The proof will then be applied to this theory later on in the examples.
The best way to make the logic of the argument clear is to “walk” the audience and the judges through the logic step by step. By showing the “links” in these steps clearly, the debaters are able to establish that the argument stands. Within most debates, debaters seek to show that the subject of the debate, such as globalization or environmental protection, leads to a certain outcome, such as the developing world growing more prosperous.
Furthermore, the debater will need to show that it is a certain aspect, trait or characteristic of the subject, such as globalization’s transfer of technology or environmental protection’s ability to protect agriculture, which leads to the predicted outcome. To summarise this flow of events based on the example of capital punishment, the debater shows that:
- Link A: The subject has a particular trait (causal factor). Example: the death penalty involves death.
- Link B: The trait leads to a certain outcome. Example: death scares people.
- Link C: The outcome leads to the desired effect. Example: people are deterred from committing crime through fear.
- Conclusion: The motion is proved; the death penalty should therefore be supported.
It can be seen that Link C in fact also serves as the label of the argument. A proper argument will always come back to the label already established. Some cases may have more links in the argument set but will generally follow this framework.
4. The Examples
Arguments are only theories until they can be supported by examples. Examples show that the argumentation applies to the real world and that there is precedence for the case being made by the debaters. Without examples within a debate, it will be very difficult for a Debater to score high on content.
4.1. Types of Examples
4.1.1. Prominent Case
This is the most common type of example used in debate and makes use of a famous incident or case to support the argument. For instance, in arguing about the dangers of nuclear power due to the high risks of meltdowns, the debaters will cite the case of Chernobyl. These examples are easily recognized by the judges and audience and readily help to make the argument appear more real and vivid.
4.1.2. Trends & Statistics
This technique involves the use of a series of cases or statistics to showcase a trend. For instance, to showcase the dangers of nuclear power, debaters can cite how many nuclear accidents had taken place over the last two decades. Debaters will have to be precise with the statistics used here, as judges and opponents are well aware of the possibility that the statistics may have been made up.
4.1.3. Proof by authority
This method resorts to the use of authority figures within a related field to support the argument. For instance, to show that nuclear power is dangerous, debaters may cite studies conducted by the Nuclear Energy Institute or the International Atomic Energy Agency. Using such examples could be problematic if the opponents are able to cast doubt on the credibility of the “experts.” Furthermore, in most cases, only the opinions and findings of these experts are reflected, and they may not be historically verifiable facts.
4.1.4. Proof by analogy
This technique makes reference to another subject with similar traits in order to support the argument. For instance, nuclear power could be compared to crude oil in that both will damage the environment if released into the open. This approach is useful when trying to explain a particularly difficult argument and a simplification will help to get the idea across better. However, this approach can always be attacked by an opponent showing that these two examples are not the same and are not related. Thus, this technique should only be used as a last resort.
4.1.5. Hypothetical examples
This refers to the use of possible scenarios to try to support the arguments. For instance, the speaker outlines the dangers of nuclear technology by stating that it could destroy all of humanity. However, since this is only a hypothesis, it is difficult to use it to support an argument.
5. Do’s and Don’ts of Examples
5.1. Do Have Variety
Many debaters stick to a certain region or timeframe for examples during a debate. They should avoid doing this. For instance, a team should not only cite examples from the United States. They should give examples from various countries to show that their argument is universal.
5.2. Do Use New Examples
Many debaters re-use examples that were already used by their teammates. This should be avoided as they will not get high enough content scores based on their inability to produce new examples.
5.3. Don't Use Examples as Logic
Some speakers go directly to the example when arguing without having the principal logic point articulated first. This allows the opponents to just attack the example easily in order to defeat the argument.
5.4. Don’t Lead with Examples
Some speakers begin the argument with examples and then try to follow them up with the logic links. This method tends to be problematic as the lack of time at the end sometimes forces the argumentative points to be dropped.
5.5. Do Explain Examples
Some debaters merely name the examples and then move on, assuming that the judges will automatically know what the example refers to. This again will lead to a lack of content scores because the Debaters have yet to demonstrate how the examples actually work and if they actually support the argument.
6. Link to Motion
At the conclusion of each argument, Debaters should link the point back to the motion. This will allow the Debaters to establish the relevance of the argument to the motion and demonstrate that these are not being raised in a vacuum. Judges will thus see that the speakers are able to show not only that the points raised are valid on their own but that they support or oppose the motion as well.
For instance, in a debate about the censorship of the arts, a speaker cannot just deliver an argument on the importance of free speech and leave it hanging. There is a need to show that free speech is important and that censorship of the arts will lead to the violation of this particular right. In debates where the link back to the motion had been absent, it is often not surprising to find that the debaters are unable even to recall the exact words of the motion.
7. Special Section on Different Analysis Paradigms in Constructive
by Hygin Fernandez, Co-Coach, Anglo-Chinese Junior College Debate Team 2011
What is a constructive/substantive?
It is an argument used to further your side’s case during a debate. It is an idea that is fully explained and elaborated to such an extent that it proves or disproves the motion. A good substantive is succinct, clear and utilises a depth of analysis.
This means you don’t waste too much time with unnecessary words, your chain of logic is straightforward and the usage of this logic is coupled together with an analysis of the point in the context of the motion. For example, in a motion about smoking, ideas with regards to its addictive nature will help you further a point about how it is bad for long term health. This is analysis.
7.1. How to come up with a constructive/substantive?
- Think about the issues related to the motion
- Think about the individuals/societies/groups related to the motion
- Think about the ramifications of the motion to individuals/societies/groups
- Put your mind through the processes the motion entails
- E.g. THBT terrorism is justified, put yourselves in the processes of terrorism.
- Why are you doing it?
- Why is it necessary?
- Why is it justifiable to you (you = a personification of the motion)?
- Consider the possible impact in the following spheres: Social, Political, Economics, Environment, Regional, Medical, etc.
DISCLAIMER: This is not the only way to categorise substantives. It shouldn’t be a textbook from which you memorise and apply to all situations. Rather use it as a way to understand the basics so that more advance methods of analysis will come to you quicker by means of experience and practise.
7.2. Types of Constructive/Substantive
7.2.1. Logical analysis
7.2.2. Policy analysis
7.2.3. Comparison analysis
7.2.4. Time analysis
| http://www.debateable.org/debate-topics/constructing-arguments | 13
36 | [5.] Conditional and Indirect Proofs
[5.1.] “Conditional Proofs.”
[5.1.1.] Introducing CP.
Chapter 5 of your textbook begins with an example of an argument that is valid but that cannot be proved valid using only the rules covered so far (namely, the 8 implicational rules and the 10 equivalence rules):
A ⊃ B / \ A ⊃ (A · B)
Notice that the conclusion is a conditional: it does not assert that A is true, and it does not assert that A · B is true. What it asserts is that if A is true, then so is A · B.
So in order to derive this conclusion from the premise given, we don’t have to show either that A is true or that A · B is true. We merely have to show that (given the premise A ⊃ B) if A is true, then so is A · B.
In more general terms, what we want to prove is that (given specific premises), if one sentence is true, then another is true as well.
Chapter 5:1 introduces a new rule that allows us to prove this sort of conclusion: the rule of Conditional Proof (CP). See the front inside cover of your textbook for the form of this rule.
Here is how the rule works (I will use the argument given above as an example). You begin as with any other proof in sentential logic:
1. A ⊃ B p /\ A ⊃ (A · B)
The first step in using CP is to assume the antecedent of the conditional you want to prove (this is the only thing you should ever use as an assumed premise when using CP!):
2. A AP [for “Assumed Premise”] / \ A · B
The second thing to do is derive the consequent of the conditional from the antecedent that you have just assumed. In this case, the consequent is A · B:
3. B 1, 2 MP
4. A · B 2, 3 Conj
At this point, you have shown that if A is true, then A · B is true as well. In other words, you have shown “A ⊃ (A · B)” to be true. This licenses you to write that conditional down in your proof, citing “CP” as the rule you are using to do so:
5. A ⊃ (A · B) 2-4 CP
Note that correct notation requires that you draw a long arrow, beginning above the citation of CP in line five and pointing to the line on which you used AP. See p.124 in your book for an example of this notation.
This notation indicates that the sentence derived by way of CP (in this example, line 5) does not depend on the premise assumed on the line that uses AP (in this example, line 2). In other words, it indicates that line 2 (“A”) does not have to be true in order for line 5 [A ⊃ (A · B)] to be true.
It also indicates that everything between line 2 and the horizontal line between lines 4 and 5 does depend on line 2. You could not have proved lines 3 and 4 without assuming line 2, and the arrow indicates this.
Lines 3 and 4 are said to be within the scope of the assumed premise.
Beginning just underneath that horizontal line (from line 5 on), we say that by this point in the proof, the assumed premise has been discharged.
An additional CP proof, which employs some less commonly used rules…
1. (C · D) ⊃ B p / \ D ⊃ (~C ∨ B)
2. D AP / \ ~C ∨ B
3. (D · C) ⊃ B 1 Comm
4. D ⊃ (C ⊃ B) 3 Exp
5. C ⊃ B 2, 4 MP
6. ~C ∨ B 5 Impl
7. D ⊃ (~C ∨ B) 2-6 CP
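A derivation like this can also be sanity-checked semantically. The sketch below is my own illustration (it is not from the textbook): it brute-forces every truth-value assignment and confirms that no row makes the premise true while making the conclusion false:

```python
from itertools import product

def horseshoe(p, q):
    """The material conditional p ⊃ q."""
    return (not p) or q

def semantically_valid(premises, conclusion, num_vars):
    """Valid iff no assignment makes all premises true and the conclusion false."""
    for row in product([True, False], repeat=num_vars):
        if all(prem(*row) for prem in premises) and not conclusion(*row):
            return False
    return True

# The argument proved above: (C · D) ⊃ B, therefore D ⊃ (~C ∨ B).
premise    = lambda b, c, d: horseshoe(c and d, b)
conclusion = lambda b, c, d: horseshoe(d, (not c) or b)

print(semantically_valid([premise], conclusion, 3))  # True: agrees with the CP derivation
```

Of course, a truth-table check only tells you that the argument is valid; the derivation itself is still what the exercise asks for.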
[5.1.2.] Discharged Assumed Premises.
IMPORTANT: Once an assumed premise has been discharged, you can no longer use it, or any of the lines within its scope, in the proof.
For example, you cannot continue the previous proof with the following line:
8. D ∨ C 2 Add
That move employs line 2, which is within the scope of the assumed premise, which at this point in the proof has been discharged.
[5.1.3.] Arguments with Conclusions that are not Conditionals.
CP can be useful, even when the conclusion of your argument is not a conditional. Suppose that you were asked to prove the following argument valid:
A ⊃ B / \ ~A ∨ (A · B)
You might reason as follows: ~A ∨ (A · B) is equivalent to the conditional A ⊃ (A · B), and thus can be replaced by that conditional using Impl. So if you can prove A ⊃ (A · B), then you are one step away from proving ~A ∨ (A · B). Your proof would look like this:
1. A ⊃ B p /\ ~A ∨ (A · B)
2. A AP / \ A · B
3. B 1, 2 MP
4. A · B 2, 3 Conj
5. A ⊃ (A · B) 2-4 CP
6. ~A ∨ (A · B) 5 Impl
[5.1.4.] Multiple Uses of CP in a Single Proof.
You can use CP as many times as you need to in a given proof. As your textbook notes, you can use two applications of CP in order to derive a biconditional. For example (this is from exercise 5-1, #14, p.132):
1. D ⊃ G p /\ (D · G) ≡ D
Since the conclusion you need to derive is equivalent to two conditionals:
(D · G) ⊃ D and D ⊃ (D · G)
then you can use CP two times, once to prove each of the conditionals. Then you can easily derive the biconditional you need by applying Equiv to those two conditionals:
1. D ⊃ G p /\ (D · G) ≡ D
2. D · G AP / \ D
3. D 2 Simp
4. (D · G) ⊃ D 2-3 CP
5. D AP / \ D · G
6. G 1, 5 MP
7. D · G 5, 6 Conj
8. D ⊃ (D · G) 5-7 CP
9. [(D · G) ⊃ D] · [D ⊃ (D · G)] 4, 8 Conj
10. (D · G) ≡ D 9 Equiv
[5.1.5.] Nested Assumptions.
As your textbook explains on pp.127-28, it is possible to have more than one use of CP “active” at the same time. That is, it is possible to assume a premise and then assume a second premise before you have discharged the first assumed premise. In such cases, the second assumption is said to be “nested” inside the first and is therefore called a nested assumption. The following proof, from p.127, illustrates the simultaneous use of different assumptions:
1. C ⊃ D p /\ A ⊃ [B ⊃ (C ⊃ D)]
2. A AP / \ B ⊃ (C ⊃ D)
3. B AP / \ C ⊃ D
4. C AP / \ D
5. D 1, 4 MP
6. C ⊃ D 4-5 CP
7. B ⊃ (C ⊃ D) 3-6 CP
8. A ⊃ [B ⊃ (C ⊃ D)] 2-7 CP
Exercise 5-1 (pp.131-132)
· complete this exercise for next time; as always, check your even-numbered answers, and we’ll cover some of the odds in class.
[5.2.] “Indirect Proofs.”
Review: a contradiction is a sentence that cannot possibly be true. Recall that on your first exam, you were required to use the truth table method to distinguish among sentences that are contradictions, tautologies (which cannot possibly be false) and contingent sentences (which can be either true or false). A sentence is a contradiction when there is no assignment of truth values to its atomic sentences, and therefore no row on the truth table, on which the sentence is true.
Any sentence of the form p · ~p is a contradiction. (You can use the truth table method to confirm this).
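For instance, a two-row truth table, sketched here in code as an illustration of my own rather than anything from the textbook, confirms that p · ~p comes out false on every row:

```python
# Enumerate every truth-value assignment to p and evaluate p · ~p on each.
rows = [(p, p and not p) for p in (True, False)]
print(rows)                                   # [(True, False), (False, False)]
print(all(not value for _, value in rows))    # True: no row makes p · ~p true, so it is a contradiction
```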
A valid argument that has all true premises must have a true conclusion. So if you come across a valid argument that has a contradiction as its conclusion, one (or more) of the premises of that argument must be false. In other words, if you can use valid reasoning to move from a set of premises to a contradictory conclusion, then not all of those premises are true.
This suggests another strategy for showing that a given argument is valid:
§ provisionally assume the negation of the claim you are trying to prove, e.g., if you are trying to prove p, then assume ~p;
§ derive a contradiction (any sentence of the form q · ~q) using that negation and the other premises;
§ if you can derive a contradiction, then you will have shown that, given the truth of the original premises of the argument, p must be true; in other words, you will have shown that the argument is valid.
1. D ∨ E p
2. D ⊃ F p
3. ~E p /\ F
In order to prove F, you can assume ~F and then attempt to derive a contradiction:
4. ~F AP / \ F
5. ~D 2, 4 MT
6. E 1, 5 DS
7. E · ~E 3, 6 Conj
8. F 4-7 IP (for “Indirect Proof”)
The general strategy of Indirect Proof (IP): “add the negation of the conclusion of an argument to its set of premises and derive a contradiction.” (133)
NOTE 1: As with CP, you must use arrows and lines to “bracket off” your assumed premise and everything that follows from it from the rest of the argument. (In this online document, I’ve used shading instead, since I cannot easily create the arrows and lines required.)
NOTE 2: “If the conclusion is already a negation, you can just assume the conclusion minus the initial negation sign, instead of assuming a double negation.” (133) This is why there are two versions of this rule [see diagrams on p.133]
Exercise 5-3 (pp.136)
§ complete this exercise for next time; we’ll cover some of the odds in class
Exercise 5-4 (pp.136-37)
§ don’t worry about first doing them without CP or IP, as the instructions indicate-- just do them using IP
§ complete these for next time; we’ll cover some the odds in class
Stopping point for Thursday February 16, For next time:
· complete exercise 5-1, 5-3, & 5-4
· read ch.5:3-5 (137-140)
· go back to chapter 3 to read ch.3:7-9 (pp.74-80).
This page last updated 2/16/2012.
Copyright © 2012 Robert Lane. All rights reserved. | http://www.westga.edu/~rlane/symbolic/lecture10_proofs4.html | 13 |
19 | Thursday, 19 April 2012
1a: a social science concerned chiefly with description and analysis of the production, distribution, and consumption of goods and services
1b: economic theory, principles, or practices <sound economics>
2: economic aspect or significance <the economics of building a new stadium>
3: economic conditions <current economics>
From Encyclopedia Britannica "economics, Social science that analyzes and describes the consequences of choices made concerning scarce productive resources. Economics is the study of how individuals and societies choose to employ those resources: what goods and services will be produced, how they will be produced, and how they will be distributed among the members of society."
How Demand and Supply Work.
The following are a few extracts from an excellent article that explains how demand and supply work. Note: demand is the desire to buy something (by an individual or group), and supply refers to the availability of that something. Economists use graphs to analyse changes in demand and supply, and to determine the price and how much that price may vary under the different demand and supply conditions faced by an economy.
All the following extracts are from Investopedia's article on demand and supply (please read the original article with its graphs carefully, as they will be used to explain economics in future posts);
A. The Law of Demand
The law of demand states that, if all other factors remain equal, the higher the price of a good, the less people will demand that good. In other words, the higher the price, the lower the quantity demanded. The amount of a good that buyers purchase at a higher price is less because as the price of a good goes up, so does the opportunity cost of buying that good. As a result, people will naturally avoid buying a product that will force them to forgo the consumption of something else they value more.
B. The Law of Supply
Like the law of demand, the law of supply demonstrates the quantities that will be sold at a certain price. But unlike the law of demand, the supply relationship shows an upward slope. This means that the higher the price, the higher the quantity supplied. Producers supply more at a higher price because selling a higher quantity at a higher price increases revenue.
Time and Supply
Unlike the demand relationship, however, the supply relationship is a factor of time. Time is important to supply because suppliers must, but cannot always, react quickly to a change in demand or price. So it is important to try and determine whether a price change that is caused by demand will be temporary or permanent.
Let's say there's a sudden increase in the demand and price for umbrellas in an unexpected rainy season; suppliers may simply accommodate demand by using their production equipment more intensively. If, however, there is a climate change, and the population will need umbrellas year-round, the change in demand and price will be expected to be long term; suppliers will have to change their equipment and production facilities in order to meet the long-term levels of demand.
C. Supply and Demand Relationship
Now that we know the laws of supply and demand, let's turn to an example to show how supply and demand affect price.
Imagine that a special edition CD of your favorite band is released for $20. Because the record company's previous analysis showed that consumers will not demand CDs at a price higher than $20, only ten CDs were released because the opportunity cost is too high for suppliers to produce more. If, however, the ten CDs are demanded by 20 people, the price will subsequently rise because, according to the demand relationship, as demand increases, so does the price. Consequently, the rise in price should prompt more CDs to be supplied as the supply relationship shows that the higher the price, the higher the quantity supplied.
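A toy numerical version of this story, using hypothetical figures of my own rather than Investopedia's, treats demand and supply as schedules of quantity against price and scans for the price at which the two quantities meet:

```python
# Hypothetical linear schedules: buyers want fewer CDs as the price rises,
# sellers offer more CDs as the price rises.
def quantity_demanded(price):
    return max(0, 30 - price)        # at $20, 10 CDs are demanded

def quantity_supplied(price):
    return max(0, 2 * price - 30)    # at $20, 10 CDs are supplied

# Scan candidate prices for the one where demand equals supply (the equilibrium).
for price in range(0, 41):
    if quantity_demanded(price) == quantity_supplied(price):
        print(f"equilibrium price = ${price}, quantity = {quantity_demanded(price)} CDs")
        break
```

With these made-up schedules the scan settles at $20 and ten CDs, matching the story above; if demand shifts up (say, more buyers at every price), re-running the scan with the new demand schedule yields a higher equilibrium price, which is exactly the adjustment the paragraph describes.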
What is a Market Economy?
Adam Smith popularized the idea of the 'invisible hand' that guides market forces. The idea is that the demand of individual consumers and groups of consumers, when matched with the supply of the goods they demand, settles on a price that distributes all the goods (supply) to all the consumers who can afford them (demand).
Adam Smith did say that government intervention may be needed to balance the market forces. i.e. a large business can force out a small business, or find a way to cheat or scam consumers etc. (will be explaining this more in 'perfect competition' next)
The following are some extracts to help explain what a market economy is, then I will be getting into some of the basics of economics...
What Does Market Economy Mean?
An economic system in which economic decisions and the pricing of goods and services are guided solely by the aggregate interactions of a country's citizens and businesses and there is little government intervention or central planning. This is the opposite of a centrally planned economy, in which government decisions drive most aspects of a country's economic activity.
Investopedia explains Market Economy
Market economies work on the assumption that market forces, such as supply and demand, are the best determinants of what is right for a nation's well-being. These economies rarely engage in government interventions such as price fixing, license quotas and industry subsidizations.
While most developed nations today could be classified as having mixed economies, they are often said to have market economies because they allow market forces to drive most of their activities, typically engaging in government intervention only to the extent that it is needed to provide stability. Although the market economy is clearly the system of choice in today's global marketplace, there is significant debate regarding the amount of government intervention considered optimal for efficient economic operations.
Since the government will always have some level of regulatory control, no country operates as a free market in the strict sense of the word, but we generally say that market economies are those in which governments attempt to intervene as little as possible, while mixed economies include elements of both capitalism and socialism.
Note above that in a centrally planned economy EVERYTHING is planned by the government, i.e. how many crops are to be produced, which crops are to be produced, and who gets how much of the crop and in what proportion. This means that if you don't want something (ex. rice) and want something else instead (ex. bread) you have no choice in the matter, as the government has made that decision for you.
Also note that in a free market economy subsidies are unheard of as subsidies are meant to either protect a producer of goods or promote the development of an industry. In a free market economy the kind of oil subsidies that exist in the States would not exist.
What is Perfect Competition?
In competitive markets there are:
- Many buyers and sellers - individual firms have little effect on the price.
- Goods offered are very similar - demand is very elastic for individual firms.
- Firms can freely enter or exit the industry - no substantial barriers to entry.
Competitive firms have no market power. Recall that businesses are trying to maximize profits, and Profit = Total Revenue (TR) - Total Cost (TC).
Principle of Economics #7: Governments can sometimes improve market outcomes. Markets do many things well. With competition and no externalities, markets will allocate resources so as to maximize the surplus available. However, if these conditions are not met, markets may fail to achieve the optimal outcome. This is also known as "market failure".
If a big business is involved in activities that, say, pollute water, then it is harming fellow citizens and its own consumers. This is called a 'negative externality'. This is an example of perfect competition NOT working efficiently. In such situations the government steps in and balances the situation so that immoral business practices creating negative externalities can be controlled, creating a better community and business atmosphere for everyone. That is why in practice (i.e. in the Real World) perfect competition with no regulation rarely exists; instead we have 'mixed economies', which is what exists in all democracies (and helps a democracy function better if the laws are set and administered appropriately)
What Does Perfect Competition Mean?
A market structure in which the following five criteria are met:
1. All firms sell an identical product.
2. All firms are price takers.
3. All firms have a relatively small market share.
4. Buyers know the nature of the product being sold and the prices
charged by each firm.
5. The industry is characterized by freedom of entry and exit.
Investopedia explains Perfect Competition
Perfect competition is a theoretical market structure. It is primarily used as a benchmark against which other market structures are compared. The industry that best reflects perfect competition in real life is the agricultural industry.
The Case for Monopoly/Oligopoly Regulation In The News Media Industry- Part 1
A basic definition of Monopoly and Oligopoly:
A situation in which a single company or group owns all or nearly all of the market for a given type of product or service. By definition, monopoly is characterized by an absence of competition, which often results in high prices and inferior products.
According to a strict academic definition, a monopoly is a market containing a single firm.
In such instances where a single firm holds monopoly power, the company will typically be forced to divest its assets. Antimonopoly regulation protects free markets from being dominated by a single entity.
Investopedia explains Monopoly
Monopoly is the extreme case in capitalism. Most believe that, with few exceptions, the system just doesn't work when there is only one provider of a good or service because there is no incentive to improve it to meet the demands of consumers. Governments attempt to prevent monopolies from arising through the use of antitrust laws.
Of course, there are gray areas; take for example the granting of patents on new inventions. These give, in effect, a monopoly on a product for a set period of time. The reasoning behind patents is to give innovators some time to recoup what are often large research and development costs. In theory, they are a way of using monopolies to promote innovation. Another example are public monopolies set up by governments to provide essential services. Some believe that utilities should offer public goods and services such as water and electricity at a price that is affordable to everyone.
A situation in which a particular market is controlled by a small group of firms. An oligopoly is much like a monopoly, in which only one company exerts control over most of a market. In an oligopoly, there are at least two firms controlling the market.
Investopedia explains Oligopoly
The retail gas market is a good example of an oligopoly because a small number of firms control a large majority of the market
Extracts are from here and are in italics:
Why do economists object to monopoly? The purely “economic” argument against monopoly is very different from what noneconomists might expect. Successful monopolists charge prices above what they would be with competition so that customers pay more and the monopolists (and perhaps their employees) gain. It may seem strange, but economists see no reason to criticize monopolies simply because they transfer wealth from customers to monopoly producers. That is because economists have no way of knowing who is the more worthy of the two parties—the producer or the customer. Of course, people (including economists) may object to the wealth transfer on other grounds, including moral ones. But the transfer itself does not present an “economic” problem.
Rather, the purely “economic” case against monopoly is that it reduces aggregate economic welfare (as opposed to simply making some people worse off and others better off by an equal amount). When the monopolist raises prices above the competitive level in order to reap his monopoly profits, customers buy less of the product, less is produced, and society as a whole is worse off. In short, monopoly reduces society’s income.
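The welfare claim in this extract can be illustrated with a small numerical sketch. The figures below are hypothetical assumptions of my own (linear demand, constant unit cost), not anything from the quoted article; they simply show total surplus shrinking when output is restricted to the monopoly level:

```python
# Hypothetical market: inverse demand P = 100 - Q, constant marginal cost of 20.
A, MC = 100, 20

# Competition: price is pushed down to marginal cost.
q_comp = A - MC                                  # 80 units sold
surplus_comp = 0.5 * (A - MC) * q_comp           # consumer surplus triangle = 3200

# Monopoly: output is chosen where marginal revenue (A - 2Q) equals marginal cost.
q_mono = (A - MC) / 2                            # 40 units sold
p_mono = A - q_mono                              # price rises to 60
consumer_surplus = 0.5 * (A - p_mono) * q_mono   # 800
profit = (p_mono - MC) * q_mono                  # 1600
surplus_mono = consumer_surplus + profit         # 2400

print(surplus_comp, surplus_mono, surplus_comp - surplus_mono)
# 3200.0 2400.0 800.0 -> the missing 800 is the loss to society the extract describes
```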
By having a monopoly in the news media you reduce the quality of media across a whole spectrum of media. For example, Murdoch's tabloid-style newsmanship is a model that works for business, in that it attracts attention, makes money, and can direct the attention of viewers for advertising purposes.
[Note: To assume that a business model is correct simply because it works is the equivalent of assuming that slavery should be accepted because it has economic value, i.e. cheap labor.]
The more Monopolistic or Oligopolistic power he gains, the more other news media has to follow him to compete with him for market share... and the more the news media will sound, 'like the gossiping of a 14 year old girl'. [Keeping in mind that the type of news structure that works best for a democracy is different from the gossipy /rumor mongering style - click here for more info.]
The growing monopoly of Murdoch's rumor-mongering type of news media rests upon the deregulations put into effect by George W. Bush ...
By deregulating the market, the result is that one type of 'newsman' gets more power over the dissemination of news and thus the quality of the product (i.e. news) becomes lower (i.e. Rupert's style is that of a rumor monger, so, at the very least, that is what we will get). Especially notice that Colin Powell's son states that "[this deregulation] will advance our diversity and localism goals and maintain a vigourously competitive environement". Obviously, the opposite has happened - a decrease in competition is the very definition of conglomeration.
Another reason for regulation: by removing regulation of monopolies, those with large amounts of capital can control a larger and larger share of the global news market. This can create a bias in news that blinds the public to proper perspectives - for example, this video clip shows how a large segment of the news media focused on an outdated 'new' product rather than news from a region that wasn't deemed interesting by high-level execs;
The market share for PolitiFact is much lower (i.e. fewer people 'view' the website about facts in politics than they do Fox News, so Fox News's view AND its bias will dominate)... this also means that Fox News views will predominate in the public debate and influence public policy (as the debates are centered around media-manufactured issues). I think this is an iron-clad case for regulating monopolies and oligopolies in media, especially when you take these three pieces of evidence into account:
1. How a large media conglomerate can manufacture and promote news
Notice how one newspaper is used to comment on (or more accurately, 'advertise') another paper.
Stephen Colbert: The Post isn't saying she's a lesbian, they are just asking the question. By asking a question you place a thought in someone's head. For example, if I were to say, "DON'T think of a PINK elephant for 5 minutes and I'll give you ten thousand dollars!" - unless you are a yogi, you will be thinking of pink elephants continuously for the next 5 minutes. Constantly talking about an idea - whether it's true or not - will place that idea firmly in the minds of the listeners. Notice the placement of the story in relation to other stories; the idea of that woman (whoever she is) as a lesbian is most certainly being promoted as a matter of policy - irrespective of whether it worked or not. The train of planted news goes as follows: first, a story in the Wall Street Journal; second, a story in the NY Post; and finally, a news story on Fox News. It seems like different news sources commenting on a story, but in reality it's ONE news story, created and promoted by the SAME news source.
2. Fox News has a large market share for news in America and look at its types of news bias...
Partial list of Fox News's FALSE Statements as checked by PolitiFact.com
Less than 10 percent of Obama's Cabinet appointees "have any experience in the private sector."
Texas Board of Education may eliminate references to Christmas and the Constitution from textbooks
Healthcare reform is a government takeover of healthcare
The Muslim Brotherhood has openly stated they want to declare war on Israel
American troops have never been under the formal control of another nation
Florida's Gov. Rick Scott's approval ratings are up (lol!)
Massachusetts health care plan is wildly unpopular among state residents
There's been more debt under Obama than all other Presidents combined
Health care bill includes death panels
Cash for clunkers will give government complete access to your home computer
Halting gulf drilling costs 8 billion a day in imports
Democrats plan largest tax increase in history
John Holdren proposed forced abortions and putting sterilants in drinking water
Nobody at Fox News ever said you're going to jail if you don't buy health insurance
3. Fake perspectives on the country's economy can be created, thereby delaying action to remedy the economic situation - The United States of America led the whole planet into a depression with the antics of its last administrators (and ideologies begun there are beginning to show their fruits in Europe as well). The news media is just one example, but it is still an important means for spreading disinformation, and, if it is not regulated, it can affect the entire political and economic debate across the nation, causing a lot of short-term and long-term political and economic damage;
Notice this statement: "Wall Street Journal has been sold to Rupert Murdoch. This is good news for the economy because from now on Wall Street Journal will only report good news on the economy." During George W. Bush's administration Fox News only reported good news about the economy (its reporting has been so 'biased' that one commentator (Goldberg) even got fed up!), even though the huge debt, the housing crisis, the problems on Wall Street, the water wars, etc. all began or reached fruition with that administration. Because of Rupert Murdoch's monopolistic position he has managed to steer the national (and global) debate away from science and on to ideology, creating, dare I say, the possible beginnings of a new Dark Age. More info on Fox News is here (sorry, some videos are blocked). Also, please note the warmongering stance taken by Hannity and Glenn Beck. If Rupert Murdoch is aware of these activities, understands their implications AND approves, then this is very serious indeed.
Opportunity Cost and One Reason for 'Mixed' or 'Dual' economies
What Does Opportunity Cost Mean?
1. The cost of an alternative that must be forgone in order to pursue a certain action. Put another way, the benefits you could have received by taking an alternative action.
2. The difference in return between a chosen investment and one that is necessarily passed up. Say you invest in a stock and it returns a paltry 2% over the year. In placing your money in the stock, you gave up the opportunity of another investment - say, a risk-free government bond yielding 6%. In this situation, your opportunity costs are 4% (6% - 2%).
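To make the arithmetic in the second definition concrete, here is a minimal sketch in Python. The 2% stock return and 6% bond yield are just the illustrative figures from the definition above, not real market data, and the function name is my own.

```python
def opportunity_cost(chosen_return, best_alternative_return):
    """Return forgone by picking the chosen option over the best alternative."""
    return best_alternative_return - chosen_return

# Illustrative figures from the definition above (not real market data):
stock_return = 0.02   # the stock you actually bought returned 2%
bond_return = 0.06    # the risk-free bond you passed up yielded 6%

# Prints "4%": the return given up by choosing the stock over the bond.
print(f"{opportunity_cost(stock_return, bond_return):.0%}")
```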
In economics, opportunity cost refers to what you give up producing in order to produce something else.
For example, you could allocate more of the budget to weapons or more to a social safety net such as medicare. In choosing one over the other you face a 'trade-off'. An economy with all resources used up for war, or all resources used up for social services, exists only in extremely unbalanced economies and political systems, and even then not in its pure form.
In economics you seek to balance one choice against another to find a balance that is right for that particular society. Going to extremes is not something economists are supposed to do (at least not as I was taught in my O and A levels and my undergraduate economics studies... so this is 90's education onwards). Reducing taxes to a degree can stimulate the economy and even create jobs. After a certain point it ceases to be useful, and that point will change for each time and situation (economics is about understanding the current situation and formulating a solution. Politics often works in reverse, particularly new Republican economics - see True Republicanism).
If tax cuts haven't already worked, increasing them will also not work. In other words, decreasing taxes can be useful up to a point, and beyond that point further cuts are detrimental to society. No taxes (revenue for a central authority) in a society would mean no public goods, i.e. no parks, no streets, no street lights, no libraries, no medicare, no social security, no military (since no society can exist without taxes, this scenario is referred to as 'hypothetical' by economists). In the same way, raising taxes to 100% or 90% or even 75% would be too much and it would be bad for the economy.
In social applications of economics, going to an extreme in either direction is generally NOT done. Politics does take extreme views; however, it is important to keep in mind that some political policies often have nothing to do with economics and are therefore extremely bad for an economy no matter how they are framed as the 'solution to everything'.
Balancing the public good with private enterprise (so that there is, at least, some opportunity for ALL citizens) is one reason why economies exist as 'mixed economies', i.e. government and the private sector are melded together to provide social and economic growth. This will hold true unless subverted by a few (such as an aristocracy) for personal gain.
An economic system in which both the private enterprise and a degree of state monopoly (usually in public services, defense, infrastructure, and basic industries) coexist. All modern economies are mixed where the means of production are shared between the private and public sectors. Also called dual economy.
Laissez-Faire Capitalism, Robber Barons and Another Reason for Mixed Economies
One reason for Mixed Economies is presented here. Another reason is presented below. First;
An economic system in which both the private enterprise and a degree of state monopoly (usually in public services, defense, infrastructure, and basic industries) coexist. All modern economies are mixed where the means of production are shared between the private and public sectors. Also called dual economy.
Laissez-Faire Capitalism is the idea that there should be no rules that in any way hinders private enterprise.
Laissez-faire capitalism is an economic system. Capitalism involves the ownership of property by individuals. The individual's goal is to use this property, or capital (buildings, machines, and other equipment used to produce goods and services), to create income. Individuals and companies compete with one another to earn money. This competition between companies determines the amount of goods produced and the prices company owners may demand for these goods. The French term laissez-faire literally means "to let people do as they wish." Thus, supporters of laissez-faire capitalism do not want the government to interfere in business matters, or if governments do involve themselves in business matters, to keep government influence to a minimum.
One result of having no rules in private enterprise is that a person with money, and thus the ability to buy influence (i.e. a lot of money equals greater power, political and social), can destroy any chance of a competitor entering the marketplace. In fact, without rules and law, people tend to be greedy and selfish - something which is normally well known.
With no law there is no control over anyone with power using it unfairly against someone who is socially and politically weaker. Generally, this is seen as immoral by all religions and philosophies, since being ruthless means there is no 'charity' and others must be manipulated to gain even more wealth and control (a never-ending process).
That is why religious sources tend to be against the 'might is right' belief embedded in Laissez-Faire Capitalism.
The following is a religious explanation (from Christianity), which is naturally against the type of behavior that takes from fellow human beings to the detriment of society;
Darwin's ideas played a critically important role in the development and growth, not only of Nazism and communism, but also of the ruthless form of capitalism as best illustrated by the robber barons. While it is difficult to conclude confidently that ruthless capitalism would not have blossomed as it did if Darwin had not developed his evolution theory, it is clear that if Carnegie, Rockefeller, and others had continued to embrace the unadulterated JudeoChristian worldview of their youth and had not become Darwinists, capitalism would not have become as ruthless as it did in the late 1800s and early 1900s. Morris and Morris (p. 84) have suggested that other motivations (including greed, ambition, even a type of a missionary zeal) stimulated the fierce, unprincipled robber baron business practices long before Darwin. Darwinism, however, gave capitalism an apparent scientific rationale that allowed it to be taken to the extremes that were so evident in the early parts of last century.
The basic argument against Laissez-Faire Capitalism (as opposed to a mixed economy with laws to help keep the market fair - see definition above) is that without rules, ruthlessness and "might is right" reigns supreme. Allowing the rich and powerful to take advantage of the weak.
In the feudal system, a lot of power (through control of land and resources) was given to a few people by a king. These people would then do everything they could to take whatever they could at the cost of society and every other human who could be conned (generally someone in a different clan or family, though this didn't hold in all cases).
Such economic behavior (and an effect of Laissez-Faire Capitalism) was and is referred to as "Robber Barons";
A disparaging term dating back to the 12th century which refers to:
1. Unscrupulous feudal lords who amassed personal fortunes by using illegal and immoral business practices, such as illegally charging tolls to passing merchant ships.
2. Modern-day businesspeople who allegedly engage in unethical business tactics and questionable stock market transactions to build large personal fortunes.
Regulating immoral business practices is a long-standing function of modern democracies and modern economies. This is another reason why, in the modern world, government is mixed with private enterprise: to provide rules and a framework for a fairer society with more opportunity for all its citizens, not just the few who happen to be on top and thus have the wealth and influence to get more and more while giving back less and less to the community they come from.
Taxes, Tax Burden and A Little Context for the Modern Tax "Debate" in The US
[The post on Opportunity Cost is important to understanding tax increases/decreases - note the nature of modern economies i.e. they are all 'mixed economies'].
Anyone who is a part of a society has to pay taxes. Taxes support the society by providing public goods (such as streets, street lights and libraries) and social safety nets such as social security and medicare, which help keep the elderly from dying on the streets (which, if allowed, would be a big moral and social problem - social safety nets make society more livable and pleasant).
Anyone who doesn't want to pay their fair share of taxes is welcome to leave that society. [This sentiment is expressed by people saying "I'm American and Proud to Pay Taxes" - i.e. taxes are part of American society and its standard of living, as well as its way of life. You pay taxes to benefit on a social level. Third world countries where taxes go into government pockets and the military cannot offer a better society to their members for this very reason. A country using taxes appropriately for the benefit of all is like a beacon of democracy and of a higher standard of living for the whole planet.]
To people who don't want to pay taxes, you can just say, "Return all the profits you made by doing business in that society and you are even, and you can both go your own ways." That is the benefit for a business paying taxes in a modern economy: you gain social benefits, infrastructure (such as roads so trucks or railways can carry your goods to market) and a population that has a higher standard of living and thus provides a market for your goods.
The problem: greed. We want all the benefits of a fully functioning and balanced society and democracy, but we don't want to pay taxes. Thus you have the modern political problem in a mixed economy.
In the United States this natural problem with taxes is further compounded by people who are against tax increases for any reason, even if a decrease was bad economic policy. [Note: This modern economic problem can be understood as a result of 'echo chambers' employed by a small minority which seems to be above the law (at least insofar as individuals in an environment without regulations/law can manipulate them to their own benefit). This has created an overly negative atmosphere. The sad thing is that this fake economics, if repeated often enough, can be believed as if it were actual economics.]
What is 'Tax Burden'?
[ Definition of tax burden: The amount of income, property, or sales tax levied on an individual or business. Tax burdens vary depending on a number of factors including income level, jurisdiction, and current tax rates. Income tax burdens are typically satisfied by deductions from an individual's paycheck each time he or she is paid. Depending on the amount of allowances claimed by the individual, a tax burden may exceed the total amount of money deducted during the taxable period.
Understanding tax burden is easy. If an increase in taxes weighs heaviest on the poor and middle class, then these sections of society have a higher cost of living. For example, an increase in the price of bread by one dollar will affect the poor the most, the middle class next, and will have no real effect on the rich. So a one-dollar increase in the price of bread has its heaviest tax burden on the poor.
[The following images to explain tax burden are from here .]
Another sign of a poorly balanced economy is a taxation system that presses heaviest on those least able to pay.
A larger share of a poorer person's income is spent on food, so sales taxes press heaviest on the poor and middle class.
The following is an example of the tax burden being pushed onto the poor and middle class:
Note: A 10% tax on both rich and poor will have a greater tax burden on the poor, as they have less money for basic goods for survival, while it makes little difference to the rich (i.e. the rich get one less vacation or car while the poor have less to eat, or no vacations or luxury items). To say that a rich person is paying more is correct, as 10% of a rich person's wealth is a lot more than 10% of a poor person's wealth. However, the tax burden clearly falls on the less well off in society. When dealing with small percentage tax increases, to fight them and push the burden onto the poor is the very essence of Laissez-Faire Capitalism.
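A minimal sketch of the point above, with entirely invented household figures: one way to see the burden of a flat tax is as a share of what is left after essential spending, which is where it hits the poor hardest. The function name and the "discretionary income" framing are my own simplification, not a formal definition.

```python
def burden(income, essential_spending, tax_rate=0.10):
    """Share of discretionary income (what's left after essentials) taken by a flat tax."""
    tax = income * tax_rate
    discretionary = income - essential_spending
    return tax / discretionary

# Hypothetical households facing the same flat 10% rate:
print(f"poorer household: {burden(20_000, 18_000):.0%}")   # $2,000 tax vs $2,000 left over -> 100%
print(f"richer household: {burden(500_000, 60_000):.0%}")  # $50,000 tax vs $440,000 left over -> ~11%
```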
Origin of today's tax debate;
4 notes on the above video:
1. Stephen Colbert keeps saying, "Taxes are job killers". This is a political slogan that has been emotionally charged with the use of echo chambers.
2. Important note about reversing a failed economic policy (i.e. The Bush Tax Cuts):
The 90's were good for the economy and did not require a tax decrease.
3. The Bush Tax Cuts have not created jobs but have done the opposite. This is an example of a failed economic policy (see opportunity cost)
4. The deficit can be solved without raising taxes on the rich by decreasing medicare and social security (this trade-off represents a society that is moving more wealth into the hands of the few, i.e. Laissez-Faire Capitalism)
Other problems in raising taxes: Grover Norquist (in the interview below) is an example of someone who gets right-wing politicians, who are supported by a segment of the super rich who have the lowest taxes and tax burden in the States, to sign a pledge never to increase taxes.
Then Grover Norquist uses his access to huge monetary funds to attack anyone who goes back on their pledge to him for whatever reason. Thus many politicians refuse to raise taxes whatever the cost and will pretend there are other reasons that they don't raise taxes for. Add all of the above to the new 'conservative' echo chamber (controlled by a few) and you have the nonsense that is today's tax "debate" in the United States of America.
A subsidy is the opposite of a tax. You give a subsidy to someone who is facing a loss to help break even, maybe make a little profit for the sake of survival. OR you give a subsidy as an incentive to encourage a certain type of industry to grow (such as a tax break or tax loop hole).
A stimulus is also a form of a subsidy as it pours income into sectors of the economy (or parts of the country) that need income or jobs.
A benefit given by the government to groups or individuals usually in the form of a cash payment or tax reduction. The subsidy is usually given to remove some type of burden and is often considered to be in the interest of the public.
Politics play an important part in subsidization. In general, the left is more in favor of having subsidized industries, while the right feels that industry should stand on its own without public funds.
Investopedia explains Subsidy
There are many forms of subsidies given out by the government, including welfare payments, housing loans, student loans and farm subsidies. For example, if a domestic industry, like farming, is struggling to survive in a highly competitive international industry with low prices, a government may give cash subsidies to farms so that they can sell at the low market price but still achieve financial gain.
If a subsidy is given out, the government is said to subsidize that group/industry.
The following is an example of a type of subsidy that has been called "stimulus" (stimulus and investing in infrastructure - a vital part of an economy - has been given an extremely negative connotation through 'echo chambers'), that allows investment in environmentally friendly energy while creating a few jobs.
FLAGSTAFF, Ariz. (AP) — A major manufacturer of small wind turbines will continue to call Flagstaff home for at least the next five years because of a major federal stimulus grant. The $700,000 grant puts to rest fears that Southwest Windpower would leave higher-cost Flagstaff for another city or even another country. The funds were funneled through the Arizona Commerce Authority and are expected to be approved by the Flagstaff City Council on Tuesday. They will help retain the existing 65 jobs and help expand Southwest Windpower in Flagstaff as the North American headquarters for at least the next five years. The company is expected to use the funds to underwrite the retooling of equipment to begin production on a new turbine.
Government spending to balance an economy is normal in all economies around the world. There is echo-chamber rhetoric saying government spending is bad, which flies in the face of the evidence that government money (subsidy/stimulus/grant/handout) is used to help the economy and individuals. For example, Rick Perry used government money to balance the Texas budget (there would likely not have been a good job rate in Texas if Rick Perry hadn't done that). Rick Perry did the smart thing as a politician in a mixed economy seeking to help the local (Texan) economy. The negative echo-chamber rhetoric about using stimulus/subsidies to boost the economy is a political tactic to stall the economy in order to win an election.
Gov. Rick Perry used federal stimulus money to pay 97 percent of Texas's budget shortfall in fiscal 2010--which is funny, because Perry spent a lot of time talking about just how terrible the stimulus was. In fact, Texas was the state that relied most heavily on stimulus funds, CNN's Tami Luhby reports. "Even as Perry requested the Recovery Act money, he railed against it," Luhby writes. "On the very same day he asked for the funds, he set up a petition titled 'No Government Bailouts.'" It called on Americans to express their anger at irresponsible spending. Thanks to the stimulus funds, Texas didn't have to dip into its $9.4 billion rainy day fund. Still, now that the stimulus is spent, Texas, like many other states, is facing severe cuts--$31 million must be carved from the budget.
Government spending is normal in modern economies. The debate should be over which direction spending should take so that it boosts the economy with a minimum amount of inefficiency (a basic economics principle). To be against government spending itself is not economics; it's an echo-chamber tactic to create negative perceptions around a normal economic policy tool and win elections through an increase in negativity and hate.
[Note September 5th 2011: I don't know how Rick Perry used the money he got from the government. If he just paid off debts without any expenditure for long-term growth then the Texan economy will crash soon; the budget-balancing act will just postpone the inevitable. But that may be all Rick Perry needs for the upcoming election.]
Collective Bargaining: Labor Unions
Economics perspective: Generally, large employers (such as large corporations or the government) dictate how much employees get in terms of wages and benefits. To maximize profit, the lower the labor costs the better. So on one side there are employers attempting to keep costs as low as possible to make as much money as possible (an extreme example is the 'sweat shop'), and on the other side are labor unions which seek to maximize the welfare of their members (i.e. a higher paycheck or benefits). Having logical and rational discussions is key to attaining balance between the two and helping economic and social balance.
Definition of LABOR UNION
: an organization of workers formed for the purpose of advancing its members' interests in respect to wages, benefits, and working conditions
Here is a fuller explanation from a basic economics tutoring page (in this example they call labor unions, 'trade unions'):
Trade unions are organisations of workers that seek through collective bargaining with employers to protect and improve the real incomes of their members, provide job security, protect workers against unfair dismissal and provide a range of other work-related services including support for people claiming compensation for injuries sustained in a job.
The following link is to a video that illustrates today's sudden economic challenges and the contradictions to a balanced economic approach one particular solution brings to the table... Click here To Watch Video: Are Public Sector Strikes Always Right?.
As far as economic and social balance goes, Ronald Reagan phrased it appropriately when he said, "Where free unions and collective bargaining are forbidden, freedom is lost".
Sectors of the Economy and The Chain of Production
Every economy is divided into at least 4 sectors. The primary sector includes basic production such as farming, mining or drilling. The secondary sector includes all of manufacturing, such as making wheat into bread or steel into cars. Next is the tertiary sector, which includes retail outlets and all services associated with getting basic production, through its manufactured state, to the consumer (these first three sectors are also the basic divisions in the chain of production from basic product to finished good to distribution). Next is the fourth sector, which involves government services such as libraries and roads. Then there is, or so it's argued, a fifth sector of the very top elite of decision makers on the global and political scene, which is supposed to constitute a separate sector of the economy. Since entrepreneurship is covered in the first 4, as are government services, I will just be using the first 4 as the basic sectors of an economy. [Read about these classifications in full on About.com]
The first 3 sectors of an economy, the primary, secondary and tertiary; constitute the chain of production that all businesses are concerned with:
Goods move through a “chain of production”. The chain of production follows the construction of a good from its extraction as a raw material through to its final sale to the consumer. So a piece of wood is cut from a felled tree (primary sector), made into a table by a carpenter (secondary) and finally sold in a shop (tertiary).
It is important to note that any one business can own the farm, the factory and the retail outlets (or services) related to a particular product. In such a case the business is large enough to have firmly established itself in the whole chain of production.
It is possible to move a business to another country in every sector of the chain of production, both for companies specializing in any one part of a sector and for those businesses that cover all the sectors and are thus somewhat self-sufficient (such as large multi-national conglomerates). In these cases, the global picture of the fourth sector provides the structure for the business to maximize profit, and countries will generally adjust their policies concerning the three sectors of the economy involved in the chain of production to maximize their own economic growth (all other factors remaining equal).
In a democratic society economic growth is theoretically supposed to be balanced with appropriate distribution and burden sharing for balanced democratic structures (such as education increasing intellectual capital) and social balance (such as keeping society fair for all its citizens).
Basic question and answers of how the chain of production works are here.
Efficiency and Inefficiency in Economics
When you make plans you have to balance income with expenditure while making sure your plans for growth exclude inefficiencies that would make your economic policy ineffective on a small or large level depending on the size of the inefficiency.
1: the quality or degree of being efficient
2a : efficient operation b (1) : effective operation as measured by a comparison of production with cost (as in energy, time, and money) (2) : the ratio of the useful energy delivered by a dynamic system to the energy supplied to it
What Does Efficiency Mean?
A level of performance that describes a process that uses the lowest amount of inputs to create the greatest amount of outputs. Efficiency relates to the use of all inputs in producing any given output, including personal time and energy.
Investopedia explains Efficiency
Efficiency is an important attribute because all inputs are scarce. Time, money and raw materials are limited, so it makes sense to try to conserve them while maintaining an acceptable level of output or a general production level.
Being efficient simply means reducing the amount of wasted inputs.
When a project is efficient you minimize waste. You put money into a project and attain specific short-term and long-term goals. In some cases efficiency and inefficiency are difficult to estimate; in other cases it is easy.
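A minimal sketch of the output-per-input ratio described in the definitions above, with invented figures; "useful output" and "total input" here stand in for whatever units (energy, money, hours) a real project would actually measure.

```python
def efficiency(useful_output, total_input):
    """Efficiency as the definitions above describe it:
    useful output delivered per unit of input supplied."""
    return useful_output / total_input

# Hypothetical projects: same spending, very different results.
project_a = efficiency(useful_output=90, total_input=100)   # 0.90 -> little waste
project_b = efficiency(useful_output=15, total_input=100)   # 0.15 -> mostly waste

print(f"Project A: {project_a:.0%} efficient, Project B: {project_b:.0%} efficient")
```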
Here is an example of gross inefficiency from Greece (money wasted into unfulfilled projects):
In the southern suburbs of Athens, the abandoned terminal building at the city's old international airport stands as a symbol of promises unfulfilled.
Closed down a decade ago, the site includes 170 acres of prime coastal land on the shores of the Aegean sea and for some time there have been ambitious plans to sell it for redevelopment to the Gulf state of Qatar.
But so far they are just that - plans. And that makes the rest of the eurozone jumpy.
Here is another example of gross inefficiency (money wasted by putting it into a project that was known to fail - a level of inefficiency that really is impressive);
Note: 4 minutes and 30 seconds into the video above you discover that a report actually shows that this project would run out of money this month of this year - a clear sign of inefficiency. The following is an extract from a news report about the same inefficiency in the video above: Energy-related loan guarantees arose from the stimulus legislation of 2009. Policy makers thought a huge infusion of low-cost loans would create many thousands of jobs at solar-panel factories, alternative-energy power plants and the like. There was an implicit assumption that most of these ventures would succeed. Barring fraud, Solyndra’s failure reflects the company’s bet on an inadequate technology. Its tubes, coated with an unusual four-metal compound, were supposed to cut power costs more than 20 percent. That wasn’t nearly enough. Production costs fell much faster for a rival technology, conventional flat silicon panels, and Solyndra couldn’t compete.
-------------------- Note: Oct 5 '11 An example of government inefficiency in tax implementation (and spin associated with corporate journalism):
------------- Added Oct 25, '11 - More incredible inefficiencies...
Social Safety Nets
Social safety nets are used for stabilization of a section of society.
Definition of a safety net (For this post we are only concerned with the second definition):
For many, summer means vacation, sports, camping or just time off to relax, but not for millions of kids living in poverty in the United States. There are few camps or beach trips for them, and sometimes not even three meals a day.
During the school year, public schools provide breakfast and lunch to millions of students in the United States. But when summer arrives, parents struggling to feed their children can no longer rely on those meals.
More than 21 million children receive free or reduced-price lunches at school. But in the summer, the number of kids participating in food programs drops to fewer than 3 million, despite efforts to raise awareness and increase community support, the U.S. Department of Agriculture said.
People living in poverty and hunger are extremely vulnerable to crises. Social safety nets have traditionally been used to help people through short-term stress and calamities. They can also contribute to long-range development. Targeted programmes such as food-for-work, school feeding, microcredit and insurance coverage can help alleviate long-term food and financial insecurity, contributing to a more self-reliant, economically...
An example of a few people surviving on low budget social safety nets such as a church:
"These people have been reduced to living on handouts from the local church and friendly restaurants and the community is a sad look at troubles caused as the world's most powerful country struggles with its finances."
[Other examples of social safety nets include medicare and social security]
An initiative that brings struggling families together to help each other out of poverty is providing a new model for social welfare.
What FII does is create a structure for families that encourages the sense of control, desire for self-determination, and mutual support that have characterized the collective rise out of poverty for countless communities in American history."
Possibility of this strategy: Social safety net plus an actual plan to help grass root economic development. [Solution has a hold and develop strategy which has promise as people working together in groups can help social development and therefore business development as well. ]
An Overview of "Short Selling"
Short selling involves 'selling' something you don't own expecting the price to go down in the future. In extreme cases this technique can destroy an economy like an old fashioned, panic induced, bank run.
What Does Short Selling Mean?
The selling of a security that the seller does not own, or any sale that is completed by the delivery of a security borrowed by the seller. Short sellers assume that they will be able to buy the stock at a lower amount than the price at which they sold short.
Investopedia explains Short Selling
Selling short is the opposite of going long. That is, short sellers make money if the stock goes down in price.
This is an advanced trading strategy with many unique risks and pitfalls. Novice investors are advised to avoid short sales.
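A minimal sketch of the arithmetic behind a short sale, with invented prices: the short seller profits when the price falls and loses (without a fixed limit, in principle) when it rises. The function name and the specific figures are my own illustration.

```python
def short_sale_pnl(sell_price, buyback_price, shares):
    """Profit or loss on a short: sell borrowed shares now, buy them back later."""
    return (sell_price - buyback_price) * shares

# Hypothetical trade: short 100 shares at $50.
print(short_sale_pnl(50, 40, 100))   #  1000 -> price fell to $40, short seller gains
print(short_sale_pnl(50, 75, 100))   # -2500 -> price rose to $75, short seller loses
```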
Notes on above video:
1. 20 secs - Rumors can move markets
For example; "Every day cold hard facts are the fuel that drives the markets: News about a company or country, favorable or not. But markets are living organisms, made up of Alpha type men and women desperate to take advantage of situations real or perceived. So in the absence of facts, they will listen to rumors and try and judge whether those rumors are likely to be true; then buy or sell on the back of them.".
2. 1 min 36 secs - 'professional investors would look at this with a lot of misgiving... cause of suppression being forced in the market place... by agencies, typically market authorities or governments, which shouldn't be stopping natural market appetites and that will spook the professional investor' - note: if this 'professional investor' deals in rumors to spook markets, to sell stuff they don't have in hope of making money... then maybe you have your priorities messed up?
3. Clear cut regulations with explanations are needed in areas where market manipulation is dangerous to an economy.
"The biggest bankruptcy in history might have been avoided if Wall Street had been prevented from practicing one of its darkest arts.
As Lehman Brothers Holdings Inc. struggled to survive last year, as many as 32.8 million shares in the company were sold and not delivered to buyers on time as of Sept. 11, according to data compiled by the Securities and Exchange Commission and Bloomberg. That was a more than 57-fold increase over the prior year’s peak of 567,518 failed trades on July 30.
The SEC has linked such so-called fails-to-deliver to naked short selling, a strategy that can be used to manipulate markets. A fail-to-deliver is a trade that doesn’t settle within three days.
“We had another word for this in Brooklyn,” said Harvey Pitt, a former SEC chairman. “The word was ‘fraud.’”
"The argument for restricting short-selling runs as follows: betting that catastrophe will befall a bank can become a self-fulfilling prophecy; if a bank's shares or bonds can be forced down to distressed levels, its cost of funding will increase as counterparties lose faith in its solidity; in this way real damage is done to the bank; thus the odds are skewed unfairly in favour of the short-seller."
"The trouble is, this argument is a little too unworldly. Short-sellers may not be the most cuddly creatures in the financial jungle, but they do contribute to biodiversity. Even the Committee of European Securities Regulators, the predecessor to the new EU regulator, acknowledged last year that legitimate short-selling helps markets run efficiently. It can help to prevent bubbles – miserable Eeyores are a useful check on excitable Tiggers."
Samantha from The Daily Show Illustrates Short Selling (with a bias towards fairness)
There seems to be some misunderstanding of what corporations are and what they have become, as Mitt Romney illustrates with his "Corporations Are People" comment... (Note: The Supreme Court ruled that restrictions on money that can flow to help/promote political campaigns are against the First Amendment, i.e. money has free speech rights / First Amendment protection.) The above statement made many people angry, including Stephen Colbert...
So the following are definitions and comparative examples to better understand the modern Corporate structure...
What is a Corporation?
1 a : a group of merchants or traders united in a trade guild b : the municipal authorities of a town or city 2 : a body formed and authorized by law to act as a single person although constituted by one or more persons and legally endowed with various rights and duties including the capacity of succession 3 : an association of employers and employees in a basic industry or of members of a profession organized as an organ of political representation in a corporative state
A legal entity that is separate and distinct from its owners. Corporations enjoy most of the rights and responsibilities that an individual possesses; that is, a corporation has the right to enter into contracts, loan and borrow money, sue and be sued, hire employees, own assets and pay taxes. The most important aspect of a corporation is limited liability. That is, shareholders have the right to participate in the profits, through dividends and/or the appreciation of stock, but are not held personally liable for the company's debts. Corporations are often called "C Corporations".
Compare and Contrast:
Context to understand some of the strange rules corporations are allowed (more than an average person/family, for sure)...
In other words, besides free speech rights for corporate money, there are ways for corporations to get out of paying taxes and a whole bunch of other nonsense that seems to be just part of the reason for the present financial chaos.
Compare and Contrast - Case Study: A Shell Corporation: Switching fundraising approach by creating a company (approx 4 mins into video) - this new company is called a 501 (c)4 - Allows donations from unknown donors , notice the dramatic increase in donations! (the example used is Karl Rove's Super Pac and a new one he created to shield it);
To compete with Karl Rove, Stephen creates his own Shell Corporation (this approach allows corporate donations in secrecy as their shareholders or customers may object to their practices)...
The following video is the amusing result of the above two videos...
Added December 28 2011: Useful Information About Modern Corporations...
Expectations, Market Volatility and the Emerging Gold Bubble
Markets today fluctuate at the drop of a hat. For example in the following video you see that "After Rupert Murdoch gets hit with a pie, News Corp.'s stocks spike"...
This shows that an incident that has nothing to do with business or the proceedings at hand can boost the prices in today's stock markets a great deal.
So when Senator Bernie Sanders writes; Why have oil prices spiked wildly? Some argue that the volatility is a result of supply-and-demand fundamentals. More and more observers, however, believe that excessive speculation in the oil futures market by investors is driving oil prices sky high. A June 2 article in the Wall Street Journal said it all: “Wall Street is tapping a real gusher in 2011, as heightened volatility and higher prices of oil and other raw materials boost banks’ profits.” ExxonMobil Chairman Rex Tillerson, testifying before a Senate panel this year, said that excessive speculation may have increased oil prices by as much as 40 percent. Delta Air Lines general counsel Richard Hirst wrote to federal regulators in December that “the speculative bubble in oil prices has concrete detrimental consequences for the real economy.” An American Trucking Association vice president, Richard Moskowitz, said, “Excessive speculation has caused dramatic increases in the price of crude oil, which harms end-users like America’s trucking industry.”
This is a claim that should be taken seriously about the unreliability of some of these stock market reactions (after all, with much of the wealth held by a small number of people (i.e. income inequality), it doesn't take much to skew a market). In fact, it has been shown that even 'rumors can move markets'.
The Emerging Gold Bubble
Gold price hits all-time high on US debt concerns
The price of gold has risen to a fresh all-time high of $1,594.16 an ounce, and the dollar has fallen, on concerns the US may default on its debts. The moves came after ratings agency Moody's said it may cut the debt rating of the US, warning there was a "rising possibility" it will default. Uncertainty and concerns (fears) about the future cause the markets (which are free floating) to fluctuate. Since the supply of gold is limited it has a great 'store of value' (one of the measures/definitions that economists use to define money). As the article further explains... Gold is seen as the number one haven purchase in times of economic uncertainty, but analysts said its rise was also caused by the fall in the dollar, which makes the precious metal more affordable for holders of other currencies. Taking market variability into account, and the promotion of gold purchases to a large number of consumers (such as Rush Limbaugh promoting gold through this page), the following analysis about a possible gold bubble makes sense...
Fareed Zakaria: This is bizarre. A lot of it is simply scaremongering. The truth is that for two and a half decades, between 1980 and the mid 2000s, gold prices actually declined. Unlike many other commodities which actually have an end use - oil, minerals - gold is just a symbol, and as such its price rises have to do more with psychology and emotion than reason. So, when it falls out of fashion, the price could really collapse. The next time you watch Goldfinger or you hear of the antics of a Hugo Chavez or a Donald Trump, be a little wary. Gold isn't a stock with real earnings. It isn't a bond with interest payments. It isn't oil. It won't help you drive a car; it won't help you light a fire. Yes, you can wear it, but you can't eat it. If doomsday really arrives, a can of baked beans might be worth a lot more than a brick of gold.
Added October 28th 2011
Approx 4 mins - Stock volatility actually helps the traders. (This has to do with the variety of maneuvers at their disposal to make money on market fluctuations - Note: This does not include any big money players up to their own plans for better or worse).
[Environmental Economics] Case Study - Australia: Carbon Tax vs Camel Fart Regulation
Notes:
Australia has the highest carbon emissions, slightly more than the States
The carbon tax will be around for a few years
The opposition pledges to repeal the tax
The opposition opposes the Carbon Tax (fortunately there are some years to test its effectiveness, which will provide valuable information for economists) - the political problem is probably similar to the States, as I read somewhere that Rupert Murdoch owns a great deal of media there. Even the report above, from Sky News, is of a company where Rupert Murdoch's son is the CEO. Anyway, applying a carbon tax, keeping the tax burden on the source that you want to regulate (i.e. reduce carbon emissions from), is basic economic theory. The idea is that the factories will invest in clean technology, and even a societal shift in resources could occur, leading to more clean energy generation. Possible problems include the tax being too low, or the cost of the tax being shifted onto middle-class and poor consumers, which would mean the level of carbon emitted by Australia's industries would remain the same.
I thought it was funny that this solution was actually offered... Australia Considers Killing Camels to Tackle Climate Change
1. Eliminating camel farts to help the environment: A commercial company, Northwest Carbon, has proposed culling more than one million camels in the Australian Outback to eliminate their gas emissions, according to AFP.
2. Innovative solutions from an innovative nation: "We're a nation of innovators and we find innovative solutions to our challenges -- this is just a classic example," Northwest Carbon managing director Tim Moore told Australian Associated Press.
The motivations for such an action could be: 1. laziness to shift resources, 2. not wanting to lose even a dollar of profit in the short run, 3. hating animals or not caring about anything but themselves.
Here is why an effective carbon tax is important: Following extracts are from "Why high-carbon investment could be the next sub-prime crisis"
"More money is flowing into clean technologies than ever before – a record £150bn of investment last year – but money is also still pouring into coal, oil, gas, mining and other high-carbon sectors at a pace that severely undermines our efforts to tackle climate change and other environmental challenges. " The more money that is invested into a method that will choke us also sets up the next 'too big too fail' companies, since if we get too a point where we have to cut down on carbon release to survive then we have to bail out the companies or face another contraction [although effective government spending (stimulus) may be too difficult for the US, we may be better off letting them fail when we get to that point.]
Too big too fail? "But the known fossil fuel reserves declared by energy and mining companies is equivalent to 2,795 gigatonnes of CO2. That means, if the world acts on its climate change pledges, 80% of those reserves can never be burned and are stranded assets. If you look specifically at the UK, five of the top 10 companies in the FTSE 100 are almost exclusively high-carbon and together account for a staggering 25% of the index's entire market capitalisation."
Looking to the future... "National and international responses to systemic risks in our economy and financial systems only tend to occur after massive crises. But we cannot wait for our over exposure to high-carbon and polluting sectors to result in a global economic, as well as environmental, meltdown before we act. When markets wake up to the real value of high-carbon assets, the chaos wrought and value lost could be devastating. "Horizon scanners" in international financial institutions, central banks and financial regulators, together with policy makers and politicians, must take notice and act now to manage this bubble."
Environmental Problems in Australia - Extracts from = "Australia's carbon tax is a brave start by a government still gripped by fear":
1. "Australia's Black Saturday fires of February 2009 burned over a million acres of land and killed 173 people. It happened because of record high temperatures and a 20% drop in rainfall over the previous 12 years. It was what climate experts had been predicting for some years: the megafire. A megafire is hell come to earth. The energy equivalent to 1,500 Hiroshima-sized atomic bombs was released in a fire storm that saw rivers of flame – sometimes rising 100 metres in the air – flowing through the countryside, generating winds of up to 120km an hour with new fires spotlighting 35km ahead of the main fire front."
2. "Black Saturday reminded many Australians of what they know only too well: that of all the advanced economies, Australia is perhaps the one most vulnerable to climate change. And yet support for action on climate change, which was a key factor in the ending of 11 years of conservative government in 2007, has now largely collapsed."
Australia has begun taking measures to protect itself, the opposite is happening in the States...
Plus there is real damage being done to the environment in mountain top coal mining...
Interview: Author Daniel Yergin discusses hydrofracking, alternative energy sources and America's decreasing demand for oil.
From: The Case for #RonPaul
Ron Paul: Gold Standard 1/30/09
At 1 minute and 30 seconds - 'right now the transition to the gold standard would be difficult, a transition is needed where gold currency would be competing with a paper currency then obviously gold would win'
Explanation: Gold would win because of the price stability that precious metals afford - the supply of metal (gold and silver) will balance against the demand for its own place in the market, while the dollar will have its own place in the market (kind of like junk bonds). Over time, people will 'invest' more in the gold currency and a stable monetary system (as far as inflation and trade are concerned) will emerge. Click here for a full explanation of market volatility and gold prices.
At 2 minutes 20 seconds - Ron Paul foresaw the gold bubble
At 3 minutes 50 seconds - 'Legal tender laws force people to use the dollar'
This is also covered in my explanation of Ron Paul's plan = "Overview of Ron Paul's Plan To Restore America":
Conducts a full audit of the Federal Reserve and implements competing currency legislation to strengthen the dollar and stabilize inflation.
Having competing currency legislation would bring commodity-based money to the market. As it is, the FED is privately owned (which I have long been opposed to because of the 'conflict of interest'); this would make them have to compete. As it is, bank monopolies have the same problems as media monopolies.
Currently what we have is "fiat money";
Definition of 'Fiat Money': Currency that a government has declared to be legal tender, despite the fact that it has no intrinsic value and is not backed by reserves. Historically, most currencies were based on physical commodities such as gold or silver, but fiat money is based solely on faith.
Investopedia explains 'Fiat Money': Most of the world's paper money is fiat money. Because fiat money is not linked to physical reserves, it risks becoming worthless due to hyperinflation. If people lose faith in a nation's paper currency, the money will no longer hold any value.
In addition to this, we have secret money lending and bailouts, which are being done by actually printing money; i.e. the money supply increases, thereby causing inflation and making real wages (the amount of money you make) worth less in comparison to the increase in prices. This means that you can buy less for the same amount of money. Click here to learn more about how "demand and supply" works.
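A minimal sketch, with invented numbers, of the real-wage point made above: if prices rise while nominal pay stays the same, the same paycheck buys less. Deflating a nominal wage by a price index is the standard textbook move; the specific figures are hypothetical.

```python
def real_wage(nominal_wage, price_index, base_index=100):
    """Nominal wage deflated by a price index (base period = 100)."""
    return nominal_wage * base_index / price_index

# Hypothetical: pay stays at $4,000/month while the price level rises 10%.
print(real_wage(4_000, 100))  # 4000.0 -> purchasing power in the base period
print(real_wage(4_000, 110))  # ~3636.4 -> same paycheck, roughly 9% less purchasing power
```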
The media’s inscrutable brush-off of the Government Accounting Office’s recently released audit of the Federal Reserve has raised many questions about the Fed’s goings-on since the financial crisis began in 2008.
The audit of the Fed’s emergency lending programs was scarcely reported by mainstream media – albeit the results are undoubtedly newsworthy. It is the first audit of the Fed in United States history since its beginnings in 1913. The findings verify that over $16 trillion was allocated to corporations and banks internationally, purportedly for “financial assistance” during and after the 2008 fiscal crisis.
Sen. Bernie Sanders (I-VT) amended the Wall Street Reform law to audit the Fed, pushing the GAO to step in and take a look around. Upon hearing the announcement that the first-ever audit would take place in July, the media was bowled over and nearly every broadcast network and newspaper covered the story. However, the audit’s findings were almost completely overlooked, even with a number as high as $16 trillion staring all of us in the face.
Read more about the gold standard (with some videos) on Ron Paul.com
Useful to know: | http://explorer9360.xanga.com/761681601/reprint-of-my-quick-overview-of-economics/ | 13 |
Almost as soon as we are born, we can use negation, indicating by gesture or other behavior that we reject, exclude, or disagree with something. A few months later, when infants are just learning to talk, their first ten words almost always include a negation operator (Westbury & Nicoladis, 1998). Because it is so common and so easily mastered, negation may seem to be a simple concept. However, it has bedeviled all efforts to define and understand it easily. Two researchers who have studied it extensively have described negation as "curiously difficult" (Wilden, 1980) and "far from simple and transparent" (Horn, 1989). One reason for its complexity is that negation serves a wide variety of roles. A logician uses the negation operator in the process of proving a complex logical syllogism. A pre-linguistic infant uses gestural negation to reject the broccoli being offered her. Do such disparate uses of negation have anything in common? If so, what is it? In trying to formulate an answer to these questions by defining negation, it is useful to consider two approaches to the topic: negation as a technical tool for use in logic, and negation in natural language. We begin with the former.
Negation in logic
Classical (Aristotelean) term logic is the earliest and simplest formal logic. It is limited to single-predicate propositions that are necessarily either true or false. A single-predicate proposition is one like 'Mary is beautiful' or 'Snow is red', in which one single thing is said (whether rightly or not) to have a single characteristic, or predicate.
Negation of a proposition in term logic may be defined by listing two necessary and sufficient properties of that function with respect to an object or set:
i.) X and its complement must include everything, and
ii.) The intersection of X and its negation must be empty.
In simple terms, this means that what a thing is and what it is not together make up everything. Consider, for example, the proposition 'All men are happy'. This proposition means that the set of all men that are either happy or not-happy ('X and its complement') contains all men, and that set of all men that are both happy and not-happy ('the intersection of X and its negation') contains nothing. This corresponds to what Kant would later call 'active negation', since the use of this form of negation is an active affirmation of the opposite of the negated term.
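The two properties just listed can be checked mechanically on any finite domain. Here is a minimal sketch in Python using sets; the four-person domain is entirely made up and simply stands in for 'everything' under discussion.

```python
# Hypothetical finite domain standing in for 'everything' under discussion.
domain = {"alice", "bob", "carol", "dave"}
happy = {"alice", "carol"}       # the set X
not_happy = domain - happy       # its complement (the negation of X)

# i.) X and its complement together include everything:
assert happy | not_happy == domain

# ii.) The intersection of X and its negation is empty:
assert happy & not_happy == set()
```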
The astute reader will notice already that there are complications. One complication arises because there are several ways to deny or contradict the truth value of a proposition. In Aristotle's logic, no proposition is allowed to have more than one negation operator. However, that single negation operator may be attached either to the predicate (the characteristic being ascribed) or to its subject (the entity to which the characteristic is ascribed). Thus Aristotle's term logic recognizes a second form of negation along with the one we have just considered: one can negate the subject term, as in 'Not-man is happy', meaning 'Whatever is not a man is happy'.
Aristotle also recognized that one can negate the predicate term by denying it, without thereby asserting its contrary. For example, one can state 'Man is not happy', and mean that 'Whatever is a man is not happy', but not that 'Whatever is a man is unhappy'. As a stranger noted in Plato's dialog Sophist (§257B), the assertion that something is 'not big' does not necessarily mean that it is small. This corresponds to what Kant called 'passive negation' (see Elster, 1984), since it does not actively affirm the contrary of the negated term.
Aristotle's logical definition of negation is further complicated by the fact that he recognized two other ways in which negation could vary: by quantity or by mode. The first distinction (quantity) captures the differences between universal predication ('All men are not happy'), particular predication ('Some men are not happy'), singular predication ('I am not happy'), and indefinite predication ('At least one man is not happy'). The second distinction (mode) captures differences in the force of the predication, which Aristotle defined as assertoric ('All men are [or 'are not'] happy'), apodeictic ('All men must be [needn't be] happy') or problematic ('All men may be [cannot be] happy').
As the natural language translations indicate, all of the distinctions recognized
by Aristotle can be easily (and, in most cases, are naturally) expressed in
ordinary English. Despite this ease of translation, it has long been clear that
Aristotle's logical negation has a different function than natural language
negation in English. English allows negation constructions that would be disallowed
under the definition of negation given by classical logic (see Horn, 1989; Sharpe
et al, 1996, for a detailed discussion of this matter). For example, in English
it is not considered to be contradictory or improper to say that an entity is
both X and not(X). We can perfectly well understand the sentence "I did
and didn't like my college". Such contradictions are ruled out in logic,
since they allow one to deduce anything at all.
In the propositional logic introduced after Aristotle by the Stoics, logical
negation was defined in a more powerful and more complex manner. In this propositional
logic, the negation operator need not be attached only to the subject or single
predicate of a simple proposition. Instead, it can be attached externally to
an entire proposition, which may itself contain many predicates. Moreover, in
propositional logic subjects and predicates may be quantified, by having descriptors
like 'every' and 'a little' attached to them. These complications unleash the
problem that Aristotle tried to control by definitional fiat when he limited
negation to subject and predicates in simple propositions: the problem of the
scope of negation. This is the problem of deciding which part of a proposition
is being negated by any negator.
This complication bedevils ordinary language negation. Consider the denial
of the proposition 'Everybody loves somebody a little bit sometimes'. What exactly
is denied is not absolutely clear. Is the denial intended to reflect that there
are some people who never love anyone at all? Or that there are some people
who only love a lot? Or that some people love all people a little bit all of
the time? Or that no one ever loves anyone at all?
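The ambiguity is easier to see in quantifier notation. Writing L(x, y) as an ad hoc shorthand (ours, not the original author's) for 'x loves y a little bit at some time', the sentence itself is \( \forall x \, \exists y \; L(x,y) \). One reading of its denial is the wide-scope negation

\[ \neg \, \forall x \, \exists y \; L(x,y) \;\equiv\; \exists x \, \forall y \; \neg L(x,y) , \]

on which some people never love anyone at all; another is the much stronger

\[ \forall x \, \forall y \; \neg L(x,y) , \]

on which no one ever loves anyone at all. Which reading is intended depends entirely on how much of the sentence falls under the scope of the negation.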
This problem of scope of the negation operator over quantified subjects and
predicates is "one of the most extensively studied and least understood
phenomena within the semantics of negation" (Horn, 1989). Although we cannot
hope to clear up this complication here, it is important to address one aspect
of it: the claim that the negation of this predicate logic is simply equivalent
to assertion of falsity. Many people in both the philosophical and linguistic
literatures have adopted such a view at one time. Most notably, it was adopted
by Russell and Whitehead (1910) in their Principia Mathematica (for a most explicit
statement, see Russell, 1940; others who have advocated a similar position include
Apostel, 1972a, Givón, 1979; Pea, 1980a, Strawson, 1952).
Few contemporary logicians would equate negation with the assertion of falsity,
for two reasons. One is that there is a well-defined distinction to be drawn
between the syntax of negation- how a negator may be properly used and manipulated-
and the semantics of negation- what a negator means. Logicians deal mainly with
the syntax of a logical symbol, and the specific formal semantics prescribed
by that syntax, rendering many issues of interpretation moot.
The second reason that negation cannot be associated with assertion of falsity
has to do with logical levels. Russell and Whitehead's book introduced the distinction
between logical levels. It is therefore ironic that Frege (1919), Austin (1950),
Quine (1951), and Geach (1980), among others, have all argued that Russell and
Whitehead's view of negation as applying to propositions is an error resulting
from a confusion of logical levels. Specifically, the view confuses language
with meta-language. Austin wrote that "Affirmation and negation are exactly
on a level, in this sense, that no language can exist which does not contain
conventions for both and that both refer to the world equally directly, not
to statements about the world" (Austin, 1950, pp. 128-129, emphasis added).
Statements of falsity, in contrast, are necessarily statements about statements-
that is, statements in a meta-language. A statement about the truth value of
a proposition is therefore not a form of negation at all. It is rather a meta-statement
about truth value. Negation is always an assertion about the state of the world.
It is never a statement about a proposition.
This assertion is complicated by two facts that lie at the root of the confusion
about whether negation is equivalent to an assertion of falsity of a proposition:
i.) The fact that negation may be a statement about the act of stating a proposition, since the act of stating a proposition constitutes a factual aspect of the state of the world which may be negated like any other fact about the world, and
ii.) The fact that any proposition about the act of stating a proposition admits of a simple transformation into a statement about the stated proposition itself.
For example, consider the proposition 'The former President Of The United States
did not tell his intern to lie'. That statement is a statement about what a
former President said- that is, it is a statement about the empirically-observable
physical act of a human being stating a proposition aloud. The error lies in
claiming that this sentence is semantically identical to the sentence 'The proposition
'The President told his intern to lie' is false', which is a proposition about
a proposition. The first statement is a statement about what phonemes actually
could have been heard to issue from the President's mouth. The second is a statement
about the truth value of a proposition. These cannot be semantically identical,
any more than the set of all English sentences about an elephant could be semantically
identical to the elephant. One is a bunch of ordered letters, and the other
is a heavy grey mammal.
A second argument against the position that natural-language negation simply
negates the proposition to which it applies is given by Horn (1989, p. 58).
He points out that the error in equating statements about propositions with
statements about the world is very clear when we consider nondeclarative sentences.
Consider a cornered criminal who throws down his gun, yelling 'Don't shoot!'.
It is absurd to argue that this command is identical to the meta-statement 'Let
the statement 'You shot me' be false!'.
Quine (1976) gives a third reason that a great deal of ordinary discourse could
not admit of negation as a statement of the truth-value of a proposition, in
his discussion of what would be required to 'purify' ordinary language so that
it could be considered equivalent to the formalized language of science. Quine
argued that "we may begin by banishing what are known as indicator words
(Goodman) or egocentric particulars (Russell): 'I', 'you', 'this', 'that', 'here',
'there', 'now', 'then' and the like". He explained this banishment by writing:
"It is only thus...that we come to be able to speak of sentences, i.e. certain linguistic forms, as true and false. As long as indicator words are retained, it is not the sentence but only the several events of its utterance that can be said to be true or false" (p. 222. Emphasis of the final sentence added).
A great deal of ordinary speech contains indicator words of the type Quine
was objecting to. Quine is pointing out that these common sentences cannot bear
truth values on their own, but only bear truth when they are properly placed
in their extra-logical (real world) context.
The point of this discussion is that negation, as defined as a technical tool
for logicians, is not the same as the ordinary negation as used in natural language.
Some logicians have tried to re-define logical negation in such a way as to
capture its uses in natural language. La Palme Reyes et al (1994) defined a
non-classical logical model of natural language negation. It includes two negation
functions, neither of which is in its most general form equivalent to Aristotelean
negation. Those two negation functions take into account the fact that objects
to which one might apply negation have a structure whose components may be differentially
affected by that negation. The first negation function, which La Palme Reyes
et al call 'heyting' or strong negation, is used when the negation function
applies to all components of its negated object. The second, called 'co-heyting'
or weak negation, is used when the negation function refers to only some components
of the negated object. The formal aspects of this non-classical logic have been
worked out under certain highly idealized assumptions (La Palme Reyes et al,
1994). However it is not clear if or how that formal analysis could be widely
applied to real life natural language uses of negation, in situations where
those assumptions might not or clearly do not hold.
Let us now turn our attention to the development and use of negation in natural language.
Negation in natural language
The reader who has read this far will probably not be surprised to learn that
natural language negation is also complicated. There are many apparently different
forms of negation in natural languages. Natural language negation words such
as the English word 'not' can (but need not always) function in a way that is
closely analogous to the logical 'not' discussed above. Natural language also
contains words whose assertion functions as an implicit negation of their opposite,
as well as linguistic constructions which do not contain any negation markers,
but which can nevertheless function as negations for pragmatic reasons. For
example, the positive assertion "What Joe saw was an aircraft glittering
in the moonlight" functions as a negation when uttered in response to the
claim "Joe saw a UFO!"
Such complex constructions provide new means of using negation, but add no
new meanings to negation. For this reason, in this section we will concentrate
only upon forms of natural language negation that are explicit in the lexicon.
I present six categories of natural language negation, in roughly the order
they appear developmentally. Others have proposed distinctions and commonalities
that would increase or decrease this number. No definite and universally agreed-upon taxonomy exists.
i.) Negation as rejection/ emphasis of rejection of external entities
The simplest form of negation appearing in the lexicon is the use of the word
'no' (or its equivalent in other languages) in what Peirce (Horn, 1989, p.163)
called its subjective or pre-logical sense, to reject or to signal displeasure
with an undesirable situation or object. This use of negation as 'affective-volitional
function' was identified in the earliest study of the development of negation
(Stern & Stern, 1928) as the first form to appear. It is reliably present
by the age of 10-14 months (Pea, 1980b). The production of the word 'no' plays
roughly the same role for young human infants as do the gestures that often
accompany it (and that appear even earlier developmentally; see Pea, 1980b;
Ruke-Dravina, 1972), such as pushing away or turning the head away from an undesired
object. Such a gesture, either alone or accompanied by non-linguistic verbal
productions expressing displeasure, often suffices to communicate the desired
message. For this reason, the production of the word 'no' in this situation
may not necessarily be used as a rejection in itself, but may rather play a
role in emphasizing the rejection already being communicated non-linguistically.
I will expand on this notion in the next section.
Clearly such negation is very simple. Any animal able to recognize what it
does not want- and capable of acting on that recognition- is capable of this
first form of negation as a rejection of an undesirable external entity.
ii.) Negation as a statement of refusal to stop or start action
There are two forms of negation superficially similar to rejection that,
however, function pragmatically in a markedly different way from the simple
rejection of external entities. Both necessarily involve an element of social
manipulation, which can also, but need not necessarily, play a role in object
rejection. The first form of such social negation is the use of the word 'no'
to signal a refusal to comply with a request or command for action or for a
cessation of a particular action. Such use is thereby an expression of personal
preference (Royce, 1917).
Three requirements must be satisfied for this form of negation to appear. The
first is that the negating organism must have the ability to associate a command
with a behavior or cessation of a behavior. The second is that the negating
organism's environment must provide the means by which that command is issued
in a regular manner. Although the first requirement is common enough among non-humans,
the second is not. The appearance of negation as refusal to comply with a request
or command is missing in many mammals because there is a deficit in their natural
social environment that makes it unnecessary for them to grasp it. We must therefore
include among the necessary functionality for the appearance of these forms
of negation a third requirement: the appearance in another of the ability to
regularly recognize and enforce codes of behavior in the infant who is developing
negation. For these reasons, this form of negation is intimately tied to social
organization and environmental structure. Because of its intimate interaction
with such external factors, it becomes difficult to say whether it is 'innate'.
iii.) Negation as an imperative
The second of the two forms of negation that differ pragmatically from rejection
of an external object is the use of the word 'no' as a directive to others to
act differently. As well as denying a request or a command to act or cease acting,
and refusing objects offered to them, young infants are able to use negation
to refuse to accept the actions of others. Such denial often functions pragmatically
as a command, denying one action in the hopes of producing an alternate.
iv.) Negation as a comment on one's own unsuccessful or prohibited action
Gopnik & Meltzoff (1985) identified another form of negation, as the second
stage in their three-stage model of negation leading to negation of linguistically-stated
propositions. In the first stage infants use negation as a social device to
refuse parental requests, as discussed above. In the second stage, a child uses
negation to comment on his or her failure to achieve an intended goal. According
to Gopnik and Meltzoff, the word 'no' becomes a cognitive device for the first
time when it is used in such a manner. Many researchers have also noted early
uses of negation as self-prohibition, uttered by the child when he or she is
about to do something or is doing something that is prohibited. The use of negation
in this manner is typically of brief duration (Pea, 1980b).
v.) Negation as scalar predication
Negation may also be used in natural language to compare or quantify scalar quantities.
Negation is often used for the concept of zero, or non-existence, as when we
say 'there is no way to get there from here' or an infant notes an unexpected
absence by saying 'no car'. The general case of using negation to mark non-existence
includes sub-categories that are sometimes distinguished. For example, Pea (1980b)
distinguishes between disappearance negation, which is used to note something
that has just disappeared, and unfulfilled expectation negation, which is used
to mark the non-existence of an expected entity. Although there are individual
differences in the appearance of these subtypes (Pea, 1980b), the appearance
of negation as scalar predication appears reliably as the most highly developed
(i.e. latest-appearing) form of negation prior to the appearance of negation
of linguistic propositions.
The use of negation to mark nonexistence (in the sense of a referent not being
manifest in a context where it was expected) appears very early in children's
words. In their study of sententially-expressed negation (i.e. of negation which
appears after the one-word stage) McNeill and McNeill (1968) claimed that the
first uses of negation among Japanese children were all uses which marked nonexistence.
McNeill and McNeill claim that this finding is of particular interest because
Japanese has four common forms of negation that are differentiated in the lexicon.
One form functions as an assertion of non-existence, another as a denial of
a previous predication, a third as an expression of rejection, and a fourth
as a denial of a previous predication while implying that the speaker knows
something else to be true. Note, however, that there can be no question that
these infants were already displaying behavioral forms of negation by the time
they put words together to form a sentence.
Negation is not only used to indicate the total absence of a quality, but can
also be used to indicate a quantity less or greater than another to which it
is compared. For example, to say that something is 'not bad' is not to say that
it was entirely good, but only that it was 'less than all bad'. In appropriate
circumstances, the negation term may also indicate a greater quantity. Jespersen
(1924) identified the pragmatic circumstances that allow the negation operator
to function in this way. He noted that the word following 'not' must be strongly
stressed, and a more exact statement must immediately follow the negated statement,
as in the sentence: "He earns not twenty thousand, but thirty thousand
dollars per game".
The use of negation in natural language for scalar predication has a strong
constraint on its use, which shows how intimately negation is tied to other
cognitive functions: it can only be properly used as an expression of a departure
from an expected state of affairs. Neither an infant nor an adult will use negation
as a quantifier unless the value expressed thereby is or could be unexpected.
As many commentators (e.g. Sigwart, 1895; Bergson, 1911; Baldwin, 1928; Ryle,
1929; Wood, 1933; Strawson, 1952) have pointed out, to assert the negation of
a proposition is to imply that there is something surprising or unexpected at
the proposition's negation- to imply that some (imagined or real) interlocutor
believes, or might reasonably be expected to believe, the non-negated proposition
(see Horn, 1989, §1.2 for a detailed history of this idea). To use a graphic
example suggested by Givón (1979): one cannot deny that one's wife is
pregnant without implying that one believes that one's listener has reason to
expect that she might be. The reason for this constraint is that "It is
no good knowing what something is not unless that helps to eliminate possibilities
of what it is." (Wason, 1959, p. 103). There is no use negating unless
the negation is informative. This is a specific case of the more general pragmatic
rule that utterances should be (or will be assumed to be) relevant (Grice, 1975;
Sperber & Wilson, 1986).
vi.) Negation of stated propositions
No one disputes that negation as denial of a stated utterance is the last form
of negation to appear developmentally. Indeed, since it is the only form of
negation to require sentence comprehension, it is predictable from its very
definition that it is likely to appear later in development than the other forms,
which can all be expressed with simpler components of language.
It is remarkable that children are able to negate propositions about as soon
as they can produce them. Many studies have estimated that the ability for this
form of negation appears between 1.5 years to 2.5 years (Hummer, Wimmer, &
Antes, 1993), which is about the same time that children are first able to put
two words together.
As discussed above, the ability to negate propositions should not be treated
as if it were equivalent to denial of the truth value of propositions. What
infants who are just putting together words are able to do is to deny that an
actual aspect of the world matches its linguistic description. If the child
screams 'No!' upon being told that it is bath-time, it is not to deny that the
sentence 'It is bath time' is a true sentence, nor is it to assert the proposition
'The sentence 'It is bath time' is false'. What the child is doing is denying
that it is in fact a desirable plan to submerge his body in soapy water. To
assert otherwise is to impose a post-literate interpretive framework upon a
child who is very far from being able to understand such a framework.
Because of these considerations, there are two distinct forms of negation of sentences. The form that an infant exhibits might be termed referential negation, since the child is denying a fact of the world that has been described to him using language. Truth-functional negation - true logical negation- is a learned technical tool for which there is no evidence of innate or inevitably-developing ability. Indeed, the failure rate in college introductory logic classes suggests that truth-functional negation is extremely difficult for most human beings to grasp.
Is there a common meaning to natural language terms of negation?
The plethora of uses might make it seem that natural language negation does
not admit of any simple definition that covers all cases. However, numerous
philosophers have proposed the same unifying definition, one that sidesteps many
of the logical complications discussed above. They have re-cast negation as
a positive assertion of the existence of a relevant difference- that is, they
have taken negation to mean 'other than', to use the pithy expression suggested
by Peirce (1869). This expression is similar to that put forth by Plato in Sophist
(§257B), in which he insisted that negation was not enantion (contrary)
but heteron (other). Hegel also characterized negation in a similar way (though
his heavily metaphysical views on negation are unique in other respects) when
he interpreted (or perhaps, as Horn, 1989, puts it, "stood on its head")
a dictum stated by Spinoza: Determinatio est negatio [Determination is negation].
Under Hegel's reading, Spinoza's dictum was taken as a statement of identity,
meaning that every negation is a determination or limitation, and vice versa.
The definition also appears in Brown's (1969) attempt to give a naturalized,
non-mathematical account of Boolean algebra. Brown begins by taking distinction
(defined as 'perfect continence') as his only primitive. He then proceeds to
define negation in terms of distinction. He presents this as an idea he had arrived at independently.
Wilden (1980) also defined negation as distinction, again without mentioning
any earlier proposals to do so. The fact that this principle has apparently
been repeatedly independently discovered suggests that it may accurately capture
the meaning of negation.
Wilden's formulation of the definition of negation suggested that negation
should be considered as a rule about how to make either/or distinctions. Any
expression of negation divides the world into three parts: the negated object
or set (say, X), everything else (not-X), and the system which applies the rule
for drawing the distinction between X and not-X. That system itself belongs
neither to X nor to not-X, but stands outside (at a higher logical level than)
both, in virtue of the fact that it defines the composition of those two sets.
In discussing Wilden's definition of negation, Hoffmeyer (1993) implicitly
argues that the act of negation is equivalent to the creation of a sign, as
defined by Peirce: something which stands for something to somebody in some
respect. In order to assess this claim, it is necessary to understand something
of the distinctions Peirce drew between three different forms of representation:
iconic, indexical, and symbolic.
Iconic representation, the simplest form, is representation that occurs in
virtue of a perceptual similarity between the sign and the signified, as a picture
of a bird represents a bird. Indexical representation is representation in which
the signifier is associated with what it signifies by correlation in space or
time- i.e. in virtue of the fact that the signifier has a quality that is linked
with the entity that it signifies by some cognizable relation other than perceptual
similarity. Indexical representation is commonly used in animal learning studies,
as when a light is paired with punishment. The important defining feature of
both iconic and indexical representation is that the connection between the
primary sign and the signified exists independently of the representing organism.
Simplifying Peirce's own view somewhat, we may say that the connection is objective,
in the sense that an organism or machine with access only to the appropriate
sensory and temporal information about the object could in theory learn to connect
the signifier with the signified.
This is not the case with the third form of representation, symbolic representation.
Symbolic representation is (by definition) independent of the relations that
define iconic and indexical representation - similarity, contiguity, and correlation.
This means that symbolic representation can be sustained in the absence of any
objectively discernible relation between the structure of the sign or its production,
and the signifier. Human beings with symbolic representation are able to talk
about the dark side of the planet Mercury, Santa Claus's older sister, or integrity
in politics, despite the impossibility of ever having direct sensory acquaintance
with these non-existent entities.
One major limitation of iconic and indexical reference is that it is not possible
to use them to make a statement about any entities that do not have an unambiguously
perceptible existence in space and time. Such entities have no perceptible qualities
in which their signifier could partake. In particular, therefore, there could
be no way to use iconic or indexical reference as scalar negation, to refer
to the abstract quality of a particular absence. As Wittgenstein (1953, §446)
pointed out "It would be odd to say: 'A process looks different when it
happens from when it doesn't happen.' Or 'A red patch looks different when it
is there from when it isn't there.'" (see also Russell, 1940).
This is why the complex forms of linguistic negation must be fundamentally symbolic. In the complex forms of linguistic negation, the boundary that marks the negated from the unnegated has no perceptible qualities of the kind that are necessary for reference by similarity or spatio-temporal contiguity (by iconic or indexical reference). The lack of relevant perceptible qualities is also what defines a symbol. Viewing a symbol 'as if' it stood for something requires that it be dissociated from what it actually is. There are (by definition) no hints from what a symbol is that help one decide what it stands for (cf. Premack and Premack's definition of a piece of plastic as a word for their monkeys "when the properties ascribed to it are not those of the plastic but of the object it signifies" (p. 32)). Since there can be no linguistic symbolism that is not built upon negation and since negation is itself a form of symbolism, the act of negation must be the first fundamentally linguistic symbolic act. It underlies the ability of language users to use a word to stand in for something that it in no way resembles and with which it never co-occurs.
It seems simple to 'just say no', but negation is in fact astonishingly complicated.
In logic the role of negation is so complex as to have defied complete understanding
despite over two thousand years of concerted effort. In natural language, negation
proves impossible to bound, spilling over to take in constraints at the social
and environmental levels, and to be intimately tied to deep and complex issues
of memory, expectation, general cognition, and symbolic manipulation that are
themselves still largely mysterious. Because of these intimate ties, the function
of negation as heteron may be plausibly argued to be a fundamental building
block of human language.
In Jonathan Swift's novel Gulliver's Travels, the hero reports meeting,
in the grand academy of Lagado, a group of nominalist philosophers. Those men
contended that "Since Words are only names for Things, it would be more
convenient for all Men to carry about them, such Things as were necessary to
express the particular business they are to discourse on." (Swift, 1735/1977,
p. 181). This, of course, proves to be difficult for those who have much to
say, since they are obliged to haul a huge bundle of objects everywhere they
go. If Swift's radical nominalists had thought about it a bit longer, they might
have arrived at a slightly more convenient solution that would still save their
lungs from the 'Diminution by Corrosion' that they were trying to avoid by not
speaking. Instead of carrying the individual objects themselves, they could
simply carry around the means to quickly create any object they might need.
Perhaps they might carry a block of soft clay with them. By this expedient they
could lighten the load they had to carry while greatly extending their possible
range of reference. Whoever first began to carry the clay would be capable of
astonishing feats of communication, conversing easily about matters of which
his fellow philosophers, having failed to load precisely the required object
into their sacks, were forced to remain silent.
The human ability to use symbolic reference differs from animal communication in an analogous fashion to the way that the clay language differs from the object language, and for an analogous reason. Whereas most animals are limited to distinguishing only those dimensions in the world that they are born 'carrying' or learned dimensions that have direct biological significance, human beings can construct an infinite number of dimensions. The clay that we use to construct those dimensions is negation as heteron: the ability to formulate rules about how to reliably make either/or distinctions. Although it is clear that many of the distinctions we make are made possible by language, the opposite relation holds true for some early forms of negation. Rather than being made possible by language, those forms of negation make language possible, in virtue of their role as a sine qua non of linguistic reference. Because we can carve up the world in such subtle ways, we humans have mastered our environment in ways no other animal can do. And because we can negate, we can so carve up the world.
Try the standard debate format. Includes adaptations of the format plus ten more strategies for engaging students!
Objectives: Students will
understand the debate process.
play a variety of roles in a debate.
follow the rules and procedures of a good debate.
judge their own and their peers' debate performances.
Keywords: debate, four corner, role play, Lincoln, constructive, constructor, affirmative, negative, fishbowl, cross-examine, summary, summarize, think-pair-share, inner circle, graphic organizer
copy of rules of debate (provided)
debate rubric for grading their own and/or peers' debate performances (provided)
This lesson presents several basic debate formats, including the popular Lincoln-Douglas format. In addition, it provides adaptation suggestions for using debates with whole classes and small groups. Plus, it offers ten strategies teachers can use to make the debate process more interesting to students.
In 1858, Senator Stephen A. Douglas was up for re-election to his Illinois Senate seat. His opponent was Abraham Lincoln. During the campaign, the two men faced off in the Lincoln-Douglas Debates of 1858, a series of seven debates on the issue of slavery. On Election Day, Douglas was re-elected, but Lincoln's position on the issue and his inspiring eloquence had earned him wide recognition that would aid his eventual bid for the presidency in the presidential elections of 1860.
The basic format of the Lincoln-Douglas debates has long been used as a debate format in competition and in classrooms. The Lincoln-Douglas Debate format is a one-to-one debate, in which two sides of an issue are debated. It starts with a statement of purpose/policy. (For example, School uniforms should be required in all schools.) The debater who agrees with the statement (the Affirmative) begins the debate, which is structured in this way:
Affirmative position debater presents constructive debate points. (6 minutes)
Negative position debater cross-examines affirmative points. (3 minutes)
Negative position presents constructive debate points. (7 minutes)
Affirmative position cross-examines negative points. (3 minutes)
Affirmative position offers first rebuttal (4 minutes)
Negative position offers first rebuttal (6 minutes)
Affirmative position offers second rebuttal (3 minutes)
Generally speaking, in a Lincoln-Douglas competitive debate, debaters do not know the statement of purpose/policy in advance. The purpose is proposed, and each presenter is given 3 minutes to prepare for the face-off.
In the classroom, however, the Lincoln-Douglas debate format is adapted in a wide variety of ways. Following are some of the ways that procedure might be adapted in a classroom setting to involve small groups or an entire class.
Adapt the Lincoln-Douglas Format for Classroom or Small Group Use
Arrange the class into groups of six. Each group will represent one side -- the affirmative or negative -- of a debatable question or statement. In order to involve all six individuals, each member of the team will have a specific responsibility based on the Lincoln-Douglas debate format detailed above. Each team will include students who assume the following roles:
Moderator -- calls the debate to order, poses the debatable point/question, and introduces the debaters and their roles.
Lead Debater/Constructor -- presents the main points/arguments for his or her team's stand on the topic of the debate.
Questioner/Cross-Examiner -- poses questions about the opposing team's arguments to its Question Responder.
Question Responder -- takes over the role of the Lead Debater/Constructor as he or she responds to questions posed by the opposing team's Questioner/Cross-Examiner.
Rebutter -- responds on behalf of his or her team to as many of the questions raised in the cross-examination as possible.
Summarizer -- closes the debate by summarizing the main points of his or her team's arguments, especially attempts by the opposition to shoot holes in their arguments.
Note: In the standard Lincoln-Douglas debate format, the negative position is given a lengthy rebuttal time in which to refute the affirmative rebuttal and make a final summary argument for the position. Then the affirmative position has a brief opportunity to rebut the rebuttal (offer a closing argument, if you will) -- and the debate is over. In this format, adapted for the classroom, both teams offer a closing summary/argument after the rebuttals.
The six-student team format enables you to arrange a class of 24 students into four equal teams.
If your class is smaller than 24 students, you might adapt the format described above by having the teacher serve as moderator.
If your class is larger than 24 students, you might arrange students into more and/or smaller groups and combine some roles (for example, Moderator and Summarizer or Moderator and Questioner/Cross-Examiner).
You can apply the Lincoln-Douglas classroom debate adaptations above by having pairs of teams debate the same or different issues. If this is your first experiment with debate in the classroom, it would probably be wise to have both teams debating the same issue, or you can use your most confident students to model good debate form by using the fishbowl strategy described in the Additional Strategies section below.
The following strategies can be used to extend the Lincoln-Douglas debate structure by involving the entire class in different ways:
Three-Card strategy -- This technique can be used as a pre-debate strategy to help students gather information about topics they might not know a lot about. It also can be used after students observe two groups in a debate, when the debatable question is put up for full classroom discussion. This strategy provides opportunities for all students to participate in discussions that might otherwise be monopolized by students who are frequent participators. In this strategy, the teacher provides each student with two or three cards on which are printed the words "Comment or Question." When a student wishes to make a point as part of the discussion, the student raises a card; after making a comment or asking a question pertinent to the discussion, the student turns in the card. This strategy encourages participants to think before jumping in; those who are usually frequent participants in classroom discussions must weigh whether the point they wish to make is valuable enough to turn in a card. When a student has used all the cards, he or she cannot participate in the discussion again until all students have used all their cards.
Participation Countdown strategy -- Similar to the above technique, the countdown strategy helps students monitor their participation, so they do not monopolize the discussion. In this strategy, students raise a hand when they have something to say. The second time they have something to say, they must raise their hand with one finger pointing up (to indicate they have already participated once). When they raise their hand a third time, they do so with two fingers pointing up (to indicate they have participated twice before). After a student has participated three times, he or she cannot share again as long as any other student has something to add to the discussion.
Tag Team Debate strategy -- This strategy can be used to help students learn about a topic before a debate, but it is probably better used when opening up discussion after a formal debate or as an alternative to the Lincoln-Douglas format. In a tag team debate, each team of five members represents one side of a debatable question. Each team has a set amount of time (say, 5 minutes) to present its point of view. When it's time for the team to state its point of view, one speaker from the team takes the floor. That speaker can speak for no more than 1 minute, and must "tag" another member of the team to pick up the argument before the minute is up. Team members who are eager to pick up on or add to the team's argument, can put out a hand to be tagged. That way, the current speaker knows who might be ready to pick up the argument. No member of the team can be tagged twice until all members have been tagged once.
Role Play Debate strategy -- In the Lincoln-Douglas debate format, students play the roles of Constructor, Cross-Examiner, and so on. But many debate topics lend themselves to a different form of debate -- the role play debate. In a role play debate, students examine different points of view or perspectives related to an issue. See a sample lesson: Role Play Debate.
Fishbowl strategy -- This strategy helps focus the attention of students not immediately involved in the debate; or it can be used to put your most skilled and confident debaters center stage as they model proper debate form and etiquette. As the debaters sit center-stage (in the "fishbowl"), other students observe the action from outside the fishbowl. To actively involve observers, appoint them to judge the debate; have each observer keep a running tally of new points introduced by each side as the debate progresses. Note: If you plan to use debates in the future, it might be a good idea to videotape the final student debates your current students present. Those videos can be used to help this year's students evaluate their participation, and students in the videos can serve as the "fishbowl" group when you introduce the debate structure to future students. Another alternative: Watch one of the Online Debate Videos from Debate Central.
Inner Circle/Outer Circle strategy -- This strategy, billed as a pre-writing strategy for editorial opinion pieces, helps students gather facts and ideas about an issue up for debate. It focuses students on listening carefully to their classmates. The strategy can be used as an information-gathering session prior to a debate or as the structure for the actual debate. See a sample lesson: Inner Circle/Outer Circle Debate.
Think-Pair-Share Debate strategy -- This strategy can be used during the information-gathering part of a debate or as a stand-alone strategy. Students start the activity by gathering information on their own. Give students about 10 minutes to think and make notes about their thoughts. Next, pair each student with another student; give them about 10 minutes to share their ideas, combine their notes, and think more deeply about the topic. Then pair each student pair with another pair; give them about 10 minutes to share their thoughts and gather more notes. Eventually, the entire class will come together to share information they have gathered about the topic. Then students will be ready to knowledgeably debate the issue at hand. See the Think-Pair-Share strategy in action in an Education World article, Discussion Webs in the Classroom.
Four Corners Debate strategy -- In this active debate strategy, students take one of four positions on an issue. They either strongly agree, agree, disagree, or strongly disagree. See a sample lesson: Four Corners Debate.
Graphic Organizer strategy -- A simple graphic organizer enables students to compare and contrast, to visualize, and to construct their position on any debatable question. See a sample lesson using a simple two-column comparison graphic organizer in the Education World article Discussion Webs in the Classroom.
Focus Discussions strategy -- The standard rules for a Lincoln-Douglas style debate allow students 3 minutes to prepare their arguments. The debatable question is not introduced prior to that time. If your students might benefit from some research and/or discussion before the debate, you might pose the question and then have students spend one class period (or less or more) gathering information about the issue's affirmative arguments (no negative arguments allowed) and the same amount of time on the negative arguments (no affirmative arguments allowed). See a sample lesson: Human Nature: Good or Evil?.
More Debate Resources
Click here for resources concerning debate rules, rubrics for measuring student participation, a list of debate topics for classroom use, and additional debate lesson plan ideas.
Return to this week's Lesson Planning article, It's Up for Debate!, for more debate lesson plans.
Students use one of the debate rubrics on the resources page to rate their own debate performance and those of their peers.
Lesson Plan Source: Education World
National Standards
LANGUAGE ARTS: English (Grades K - 12)
Reading for Perspective
Reading for Understanding
Developing Research Skills
Participating in Society
Applying Language Skills
SOCIAL SCIENCES: Civics
GRADES K - 4: Values and Principles of Democracy (NSS-C.K-4.2)
GRADES 5 - 8: Principles of Democracy
GRADES 9 - 12: Principles of Democracy; Roles of the Citizen
In the previous chapter we have seen one type of deductive argument, classical syllogism. Actually that is just one small part of deductive argument, so this chapter deals with another part, which is the logic of sentential connectives. In fact, you have studied some basic ideas of this part before when you were in high school doing mathematics. Remember the p and the q and how they are joined together in various ways according to truth tables? Well, that is about what we are doing in this chapter. What we want to emphasize, though, is not how to do logic mathematically, but to study how valid deduction goes, and how we can separate valid from invalid arguments of this type.
Classical syllogisms deal with the relations among logical terms. And we have seen that terms work to group things together. Thus syllogism actually is about how groups of things are related to one another. One group may belong to another group, or some part of one group may be included in another group but not all of it, or at least one member of one group is also a member of another group, and so on. However, sentential logic has none of these. Instead it deals with the relations between propositions. And here propositions are related through the work of the wonderful little things called 'logical connectives.' There are a few of these, and different logic textbooks give slightly different versions of them. However, here we are going to talk about only four, namely 'and', 'or', 'if...then', and 'not'. What these connectives do is that they take on one or a pair of propositions and then manipulate their truth values. For example, 'not' will change the truth value of any proposition it is attached to. But what are truth values? Well, it's best not to give a formal definition of them. Let us agree that there are only two of them -- the True and the False. We will have a special symbol for each of them -- T for the True, and F for the False.
In sentential logic, we employ logical symbols as a shortcut to our discussion of propositions and the connectives. Thus we use letters p, q, r, s and t to stand for propositions (or more precisely for formulae of propositions, but at this stage we need not be that precise), and the symbols, '&', 'v', '-->', and '~' for 'and', 'or', 'if...then', and 'not' respectively.
Thus, very intuitively, these symbols represent propositions:
p
~q
p v q
(p v q) --> r
[(p v q) --> r] & s
You are all familiar with truth tables, but here since we are going to discuss this issue in some detail, it's best to remind ourselves of the tables once again:
Here is the truth table for 'and':
p q | p & q
-------------
T T | T
T F | F
F T | F
F F | F
Here is the truth table for 'or':
p q | p v q
-------------
T T | T
T F | T
F T | T
F F | F
Here is the truth table for 'if...then':
p q | p --> q
-------------
T T | T
T F | F
F T | T
F F | T

Here is the truth table for 'not' (easiest):
p | ~p
-------
T | F
F | T
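These tables can also be generated mechanically. The following short Python sketch (the function names are our own, not part of any standard library) encodes the four connectives as functions on truth values and prints the table for 'p & q'; the other tables can be printed the same way.

from itertools import product

def NOT(p): return not p
def AND(p, q): return p and q
def OR(p, q): return p or q
def IMPLIES(p, q): return (not p) or q   # 'if p then q' is false only when p is T and q is F

def show(v): return 'T' if v else 'F'

print("p q | p & q")
print("-------------")
for p, q in product([True, False], repeat=2):
    print(show(p), show(q), "|", show(AND(p, q)))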
Before we get on to use these symbols to help us decide the validity of the deductive arguments of sentential connectives, some concepts and terminologies need to be spelled out. The first thing you need to know is the concept of a tautology. A tautology is a proposition which is true no matter what. It is true simply because of its propositional form. Here is the classic example of a tautology: p v ~p. Next is satisfiability. A proposition or a group of propositions are said to be satisfiable when all of them can be true together. For example, the proposition, p, is satisfiable because its form does not preclude its being true. The propositions, p and q, are satisfiable together (why?), and so are the propositions in this group -- p, q, and p v q. These three propositions can all be true together. (You can see why, can't you?) If a group of propositions is not satisfiable, then they are inconsistent. That is, there is no way to interpret them to be all true. The propositions p and ~p together form an inconsistent group because they obviously cannot be true together.
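Because each propositional letter can take only the two values T and F, these notions can be tested by brute force: simply run through every possible assignment of truth values. Here is a minimal, self-contained Python sketch (the helper names are our own):

from itertools import product

def is_tautology(formula, n_vars):
    # True when the formula comes out T under every assignment of truth values
    return all(formula(*vals) for vals in product([True, False], repeat=n_vars))

def is_satisfiable(formulas, n_vars):
    # True when some single assignment makes every formula in the group T
    return any(all(f(*vals) for f in formulas)
               for vals in product([True, False], repeat=n_vars))

print(is_tautology(lambda p: p or (not p), 1))            # p v ~p: True
print(is_satisfiable([lambda p: p, lambda p: not p], 1))  # p and ~p together: False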
Now we are in the position to examine arguments of sentential logic and find a means to test their validity. Let us pay attention to this argument closely:
If it is raining, then Samorn must be wet.
But Samorn is not wet.
Therefore it is not raining.
If this argument is valid, then if we take all the premises to be true then there must be no way at all for the conclusion to be false. So if it is true that, if it is raining, then Samorn is wet, and true that Samorn is not wet, the conclusion that can be conclusively drawn is that it is not raining. How is this argument valid? We can show that this argument is really valid by utilizing the symbols we have just seen. So let 'p' be 'It is raining' and 'q' be 'Samorn is wet'. Then the whole argument can be recast in symbolic form thus:
p --> q
~q
_______
Therefore, ~p
Now when the argument is cast in symbolic form, it is straightforward to see whether it is valid or not. Remember that an argument is valid if and only if, whenever all of its premises are true, the conclusion is true as well. That is to say, if we take all the premises and hook them up together with the symbol '&', and join that large conjunction with the conclusion with the symbol '-->', then we will have a large proposition. This large proposition must be a tautology if the argument is valid, and is not a tautology if the argument is not valid. However, even though this method is straightforward, it is very cumbersome because you can see that if you have a number of premises then the whole resulting proposition becomes very long and it is tedious to test whether it is a tautology using the truth table method. Thus we will find a way to make testing easier and less involved. We know that, if an argument is valid, it is not possible for the conclusion to be false if all the premises are true. Thus, if we can find a way for the conclusion to be false although the premises are all true, then we can show that the argument is invalid. On the other hand, if we take the negation of the conclusion (the proposition of the conclusion [enclosed in a pair of parentheses if need be] attached in front by the symbol '~') and consider it with the premises and find that the group is not satisfiable, then we have to conclude that the argument is valid. (Why?)
Back to the argument above. Its two premises are 'p --> q' and '~q'. Let the negation of the conclusion be true. That is, let 'p' be true. Now we have three propositions -- 'p --> q', '~q', and 'p'. But if it is true that if p, then q, then if 'p' is true, then 'q' must be true. But if 'q' is true it will contradict with '~q'. So we have a contradiction and indeed the argument is valid.
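The same check can be run mechanically: take the premises together with the negation of the conclusion and ask whether the group is satisfiable. A small sketch, assuming the is_satisfiable helper defined in the earlier sketch:

premises = [lambda p, q: (not p) or q,   # p --> q
            lambda p, q: not q]          # ~q
negated_conclusion = lambda p, q: p      # the negation of '~p' is just 'p'

print(is_satisfiable(premises + [negated_conclusion], 2))  # False, so the argument is valid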
Here is another example:
If Indonesia persists in defying the IMF and goes ahead with setting up a Currency Board, then the IMF will withdraw its loan. Indonesia does persist in defying the IMF and .... Therefore, the IMF will withdraw its loan.
This argument can be cast in symbolic form as follows:
p --> q
p
_______
Therefore, q
This argument is also valid. If we suppose that the conclusion is false, that is, if we suppose '~q' to be true, then we have a group consisting of 'p --> q', 'p', and '~q'. However, according to the truth table, in the propositional form 'p --> q', if 'p' is true, then 'q' is true also; otherwise the whole propositional form is false. Thus it is not possible for 'p --> q', 'p' and '~q' to be all true together, and hence the argument is valid.
Here is another example:
Indonesia will follow the IMF's directions or risk massive political instability. Indonesia is indeed at risk of massive political instability right now. Therefore, Indonesia will follow the IMF's directions.
Recast in symbolic form, this argument becomes:
p v q
q
______
Therefore, p
This is an invalid argument, as you can see. From the premises that p or q and that q, it cannot be validly inferred that p. If 'p v q' is true, 'p' can be true or false. Thus there is no guarantee that p must be true if 'p v q' and 'q' are true.
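To confirm this, it is enough to exhibit one assignment of truth values on which both premises are true while the conclusion is false -- a counterexample. With 'p' false and 'q' true, 'p v q' is true and 'q' is true, yet 'p' is false. In the brute-force style of the earlier sketches:

from itertools import product

premises = [lambda p, q: p or q, lambda p, q: q]
conclusion = lambda p, q: p

counterexamples = [(p, q) for p, q in product([True, False], repeat=2)
                   if all(f(p, q) for f in premises) and not conclusion(p, q)]
print(counterexamples)  # [(False, True)] -- one counterexample, so the argument is invalid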
Here are some rules of valid arguments, according to most logic textbooks. Note that these rules act rather as a shortcut when you try to prove arguments valid. They are not substitutes for the thinking that you have to do in order to find out whether an argument you are considering is valid or not. In other words, these rules are not given, but they themselves must be shown to be true too. Here we will focus on only a few such rules:
The Constitution For The United States
Its Sources and Its Application
A Short History of The Constitution For The United States
The Federal Convention and The Creation of the U.S. Constitution
How It All Started . . .
After the Declaration of Independence in 1776, there were thirteen little free countries in place of the thirteen colonies. Most of the animosities and jealousies of colonial times still continued. There was a political atmosphere of the Balkan states about this aggregation of small republics.
They were flimsily held together by a document known as the Articles of Confederation. It was not a constitution; it did not make a nation; it was a sort of treaty, and was called by the men of the time "a league of friendship."
In the Continental Congress, which consisted of a single legislative chamber, each state had one vote. When a question was put, the delegates from a state would get together and agree on the vote from that state, if they could. If they could not agree the state did not vote. The affirmative vote of nine states was required to carry any measure.
The Continental Congress had no power to make laws binding individuals. Its action was upon the states as political entities, and not directly upon the citizens. But, even at that, it could not compel a state to do anything. The Congress was, in fact, not a legislative but a diplomatic body. Its members were really ambassadors, and their attitude toward Congress, and toward each other, bore all the historic distrust and caution which ambassadors are supposed to have.
Many disputes between the states arose, and on numerous occasions between 1783 and 1789 the air was explosive with threats of interstate wars.
New York, for example, considered herself much aggrieved because the farmers of New Jersey brought chickens and vegetables to New York City and sold them there. This took wealth away from New York, so the New Yorkers said, and transferred it to the foreign state of New Jersey.
Connecticut people, too, were draining the life-blood from Manhattan. They brought firewood down to the city and sold it from door to door. Then, with New York's money in their pockets, they would go back to the thrifty state of Connecticut, while New Yorkers worried through the sleepless nights wondering how it would all come out in the end. There was a thorough misunderstanding of the principles of trade; and an inordinate value was set on currency.
This narrow provincial sentiment was encouraged by the farmers of New York state. They had produce to sell, and it appeared monstrous to them that the New York state government, which ought to have protected them in their rights, allowed the citizens of other states to get the city trade. They might have got the trade themselves by making their prices lower, but that is exactly what they did not want to do.
After awhile the New York legislature acted. A tariff law was passed. Every chicken that came from New Jersey, every cabbage from Connecticut, had to go through a custom house and pay duty before being allowed to enter the state of New York.
Connecticut and New Jersey applied themselves to ways of retaliation. The Connecticut merchants decided to boycott New York. The New Jersey method was different. On Sandy Hook the state of New York had put up a lighthouse. The New Jersey legislature passed an act for taxing the little scrap of land on which this lighthouse stood, and the tax was made one hundred and fifty dollars a month.
There was another dispute between Connecticut and Pennsylvania which became so bitter that Connecticut was on the verge of declaring war. Long before the Revolution settlers from Connecticut had migrated to lands in northern Pennsylvania. Although they made their homes within the borders of that state they continued to follow the Connecticut laws; and, in fact, considered their communities as a part of the state of Connecticut. After the Revolution Connecticut claimed this Wyoming Valley, notwithstanding the fact that it did not touch the state of Connecticut at any point. In the quarrel that ensued men were killed and the dormant Pennsylvania militia bloomed into uniforms and bayonets. Connecticut was about to send an army to protect the people whom she supposed to be her citizens. Finally the matter came before a special federal court, organized under the Articles of Confederation, and the disputed territory was awarded to Pennsylvania.
Under the Confederation no state could keep a standing army. That was a function of the general government, but the revenue of Congress was so insignificant that it could never afford the luxury of soldiers, and its army was a very small, disarticulated skeleton. Congress could not levy a tax. Its authority in this direction was limited to requisitions on the states, apportioned on the basis of the real estate value of each state. The payment of the requisitions was farcical in the extent of its delinquency. In 1781 it was estimated that the continental government would require nine million dollars during the next year. It was thought that four millions might be borrowed, and the remaining five millions raised through requisitions. At the end of the year only $422,000 had been paid into the treasury. Of the requisitions levied in 1783, less than one-fifth had been received in the continental treasury by 1785.
Each state maintained its own custom house and laid duties on foreign goods according to its own notions. In 1781 Congress, worn to despair on the subject of money, asked permission of the states to collect, for continental expenses, a duty of five per cent on all imports. This seemed to be a feasible way out of the money trouble, but it came to nothing. All the states but New York agreed to this five per cent impost. New York flatly declined, so nothing could be done. In the meantime five years had been consumed in wrangling. There were times when the continental treasury did not possess a single dollar of coin, though a huge depreciated volume of continental paper currency was in circulation.
In 1786 New Jersey came out with a statement that she had been badly treated; and she declared that she did not intend to pay any more requisitions or to contribute in any manner to the general scheme of things until New York stopped collecting a tariff on New Jersey goods. Nothing could be done about it. Congress was entirely lacking in executive authority. The President was merely the presiding officer of Congress. He had no more power than the chairman of a mass meeting.
To get rid of the army, quietly and without trouble, was the chief un-public question that disturbed the ruling minds in 1783. It intruded itself sadly, like a ragged and unwelcome guest, in the victory festivals of the period. The soldiers had been treated shamefully, and there was an apprehension in Congress that these patriotic protectors of the nation might kick Congress out of doors and take charge of affairs. Washington managed to disband the army, or most of it, before it was aware of being disbanded. His method was to give furloughs to batches of men, to send them away in groups or singly, and then to send them their discharges later. The idea behind these tactics was to get rid of the soldiers without paying them, though Washington himself wanted them to be paid, as his letters prove. In behalf of Congress it must be said that they had no funds to pay the soldiers; on the other hand it must be said that they made very little effort to raise money for this purpose.
The plain fact is that the commercial element of the country, which had come into authoritative prominence, was tired of the whole crew of patriots. They wanted to disperse the army as soon as possible. The country was full of business schemes and wartime fortunes were growing in arrogance.
Eighty soldiers mutinied at Lancaster, Pennsylvania, in June, 1783. They marched on Philadelphia and appeared in front of the State House where Congress was in session. Congress called on the Executive Council of Pennsylvania, meeting in the same building, for protection, but the Council was afraid to bring out the militia, as it was thought that the militia might join the mutineers. The soldiers declared that they wanted their pay and intended to take it from the treasury. They pointed their guns at the Congressional windows but did not fire them. There was a rough play-boy air about the whole proceeding. Congress sent an urgent message for help to Washington -- who was then at West Point -- and without waiting to see what the result would be, the members of Congress unheroically slipped through the back door and made their way through a golden June sunset to Princeton in New Jersey, thus abandoning the seat of government to eighty mutineers and a sergeant.
Washington was very efficient in cases of mutiny. He sent fifteen hundred troops -- best-fed and best-clothed of the army -- to Philadelphia at once. That ended the mutiny. Some of the mutineers were whipped, but nobody was shot. The fleeing Congress, wounded in the sphere of dignity, abjured Philadelphia and decided to remain at Princeton. The members were given tea and liquor by the delighted inhabitants, who assured them of protection.
Notwithstanding its grotesque adventures, the Continental Congress managed to exist. It held one large asset. The western territory -- everything from the Appalachians to the Mississippi -- had been turned over to Congress by the states. Nearly all the schemes for financing the government, and for liquidating the public debt, revolved around this western land. It was, indeed, an immense domain. It is interesting to reflect that if the government had held it to this day the entire expense of the national budget could have been met from its rentals.
The insignificance of Congress was well known in England. In negotiating the peace treaty the commissioners of the United States, representing Congress, agreed to recommend to the states that no bar or hindrance be placed against the collection of debts due to British merchants by American citizens. They also agreed to recommend that the seizure of loyalists' property be discontinued. At first, the British wanted to be paid for the effects of loyalists taken during the war. Franklin pointed out that sauce for the goose is sauce for the gander; and that, in such a case, he would expect the British government to pay for property destroyed by British troops. The argument was dropped by both sides, and the conclusion was that Congress should urge the states not to molest the loyalists any more.
Such a recommendation was made by Congress, but it had small effect. In some parts of the country, particularly in the South, loyalist property was seized right and left after the conclusion of peace. As for the collection of debts to British merchants made before the war, it was almost impossible to find either a judge or a jury who would pass favorably on these claims; and their validity drifted into the realm of legal fictions.
The British had agreed, on the signing of the treaty, to give up their forts in the Northwest Territory. Years passed, and they still held on to these posts, declaring that they would give them up when the United States had carried out its part of the compact. This attitude seems reasonable, but it did not appeal to the Americans of the 1780's.
Just at the end of the war, another blow had been given to New England commercial aspirations by a British Order in Council which closed the ports of the British West Indies to American ships. Thus, the question which began the war was lost in its successful conclusion.
But this closing of West Indian ports was a blessing in disguise. New England ships and sailors had to have some occupation, so in the 1780's we see Massachusetts ships sailing timidly to China with goods which it was thought the Chinese might want. By the year 1800 the Chinese trade was a roaring success, and Yankee merchants were cutting the ground from under the English in that faraway market. In closing the West Indian ports they effectually developed a rival for themselves in the Orient.
In the weighty troubles that grew out of the treaty Congress revealed its weakness at every turn. From the first many leading men were dissatisfied with the Confederation, and hoped eventually to replace this feeble "league of friendship" with a closely knit national government. Washington wrote to Hamilton in 1783:
"It is clearly my opinion, unless Congress have powers competent to all general purposes, that the distresses we have encountered, the expense we have incurred, and the blood we have spilt, will avail us nothing."
A few weeks later he wrote:
"My wish to see the union of these states established upon liberal and permanent principles, and inclination to contribute my mite in pointing out the defects of the present constitution, are equally great. All my private letters have teemed with these sentiments, and whenever this topic has been the subject of conversation, I have endeavored to diffuse and enforce them."
There was a body of intelligent opinion in favor of a monarchy. Strange to say, Washington was not considered, so far as we know, by these advocates of monarchy as a possible King, unless Hamilton had him in mind. He was, indeed, written to by an irresponsible colonel who hinted that he ought to be a King, though this man appears to have represented nobody but himself.
Washington was a republican at heart. What he wanted was a republic -- but an aristocratic one -- where the suffrage and the authority would be in the hands of the wellborn and the wealthy. The monarchy movement, if such a vague affair can be called a movement, is obscure. It became so unpopular in the end that its originators, among whom were Nathaniel Gorham and Rufus King, buried it as deeply as they could in their memories. These men felt, evidently, that no plebeian American, however distinguished he might be, was of sufficient prestige to occupy the throne. There is a strong probability that Prince Henry of Prussia, a brother of Frederick the Great, was approached on the subject before the matter was dropped. The spectacle of some gaudy European prince, coming over to occupy a throne in our land of raw liquor and trusty squirrel rifles, would have been interesting.
This clumsy playing with the idea of a monarchy was hardly more than a sort of moral nostalgia. Certainly any such attempt would have failed miserably, but there was nevertheless a well-grounded effort to create a permanent nobility by an organization of former army officers called the Society of the Cincinnati, in which membership was to be hereditary. Washington was the first president of the society, though he accepted the office, I think, without being aware of the society's intention to influence public affairs. He was displeased with the early conduct of the organization, and resigned from its leadership, but not until he had persuaded the society to drop its hereditary feature. The Society of the Cincinnati exists today as a purely social, and praiseworthy organization.
The Confederation was a failure, but commerce and finance were riding on the crest of the wave. The close of the war had found the small farmers, as a class, in acute poverty. By taking advantage of the economic needs of these producers the money-holding groups in the coast cities had been able to get a tight financial grip on almost the whole of the producing class. There were counties in which nearly every acre was under mortgage at high rates of interest. Usury and profit molded themselves into large fortunes. The splendor of business began to shine.
The discontent of the common people was snapping at the heels of these primitive money kings. In every legislature there were proposals to repudiate debts, to issue floods of paper money, to impair the value of contracts. In Rhode Island the debtors captured the legislative machinery of the state, and repudiated virtually everything. They made paper money a legal tender and forced merchants and mortgage-holders to take it. Capital in that little state became so unsafe that it got out as quickly as it could. The "shameful conduct" of Rhode Island was a topic at teas and in counting-houses from New Hampshire to Georgia. It was the general opinion that something ought to be done about it. The lawbooks were thumbed, and spectacles rested on learned noses. It appeared that nothing could be done under the Articles of Confederation. Rhode Island was a free state and could act as she pleased.
This was bad enough, but even worse was coming. In 1786 an armed rebellion broke out in western Massachusetts and aroused the execration of all who loved peace and profits. The farmers of that region took up arms against the gaunt destitution of their lives. Their lands could not produce enough to pay the interest on mortgages and provide the food and raiment for human necessity. In addition to the burden of debt, taxes had gone up in Massachusetts until they were fifty dollars per capita. Compared with this, the taxes which Great Britain had attempted to impose on the colonies were nothing more than trifling small change. The rebels were mostly veterans of the Revolution. They put themselves under command of Daniel Shays, who had been an officer in the war. Organized with a sort of military discipline, they constituted in fact a formidable force. Lawyers took to their heels, and the frightened judges were ousted from the courts. Debts were to be abolished; everybody was to begin over with a clean slate. On this program of extreme simplicity the rebellion throve for a brief moment. The rebels invoked the Scriptures and pointed to the ancient Jewish law under which all lands were redistributed every seven years.
Shays' army of "desperate debtors," as these men were called, created terrific excitement, not only in New England, but everywhere else. Henry Knox, the Confederation's Secretary of War, was sent to Massachusetts at once; the militia was called out; the money-lenders of Boston fluttered in agitation.
Washington wrote to Henry Lee, Oct. 31, 1786:
"You talk, my good Sir, of employing influence to appease these present tumults in Massachusetts. I know not where that influence is to be found, or, if attainable, that it would be a proper remedy for the disorders. Influence is not government. Let us have a government by which our lives, liberties, and properties will be secured, or let us know the worst at once."
Jefferson, on the contrary, does not appear to have been at all upset by the Shays episode and, in this, he stands alone among the notables. He wrote to W. S. Smith, "God forbid we should ever be twenty years without such a rebellion."
To Mrs. John Adams -- of all people -- he wrote these disturbing lines on Feb. 22, 1787:
"I like a little rebellion now and then.... The spirit of resistance to government is so valuable on certain occasions that I wish it to be always kept alive. It will often be exercised when wrong, but better so than not to be exercised at all."
The Massachusetts troops eventually drove Shays and his foodless men over the deep-snow hills into New Hampshire. In the meantime Rhode Island sank a notch lower in public estimation by inviting the entire Shays outfit to come to Rhode Island and live. On a wintry day in the New Hampshire hills Shays' forlorn little army petered out. All the forces of society, represented by a swarm of bayonets, surrounded these rebels as they shivered in the snow. When they were asked by General Lincoln, in command of the Massachusetts troops, what they wanted to do, they said that they wanted to go home. He allowed them to go, and Shays' Rebellion came to an end.
There was some feeble effort to punish the ring-leaders, but nothing came of it. It seems quite clear that the money ring which ruled Massachusetts was afraid to proceed with prosecutions. After this occurrence the moneyed classes were convinced that affairs were in a sad plight. There was no protection anywhere for capital or investments. A strong, centralized government was urgently needed; the stronger the better. Now, for the first time in American history we see finance lifting its head above land as an object of attention. It had its origin in the public debt. Let us consider, for a moment, the status of the government's financial obligations in the later 1780's. First, there was the foreign debt -- that is, money borrowed abroad by the commissioners of the Continental Congress. This amounted, in 1789, to approximately ten million dollars, with arrears of interest of nearly two millions. Second: The domestic continental debt, which ran up to a principal sum of twenty-seven millions; to which must be added unpaid interest amounting to thirteen millions. Thus, the continental obligations, in all, were a little more than fifty millions of dollars. There is absolutely nothing different now except two hundred and ten years have gone by and zeros have been added.
The precise total of the combined state debts is unknown, owing to the slipshod character of the records, but it was around twenty millions of dollars... say, seventy millions for both continental and state debts. This sum of seventy millions represents only the funded obligations, represented by certificates, or bonds. Besides, there was the enormous volume of continental paper currency, which went down and down until it became entirely worthless and passed out of sight. More than two hundred million dollars of it had been issued. Very little of it was ever redeemed, on any terms.
The holders of the certificates, or interest-bearing obligations, considered them of small worth; they might be bought readily at prices ranging from one-sixth to one-twentieth of their face value. But, suppose a powerful national government could be put in place of the Confederation... a government in complete control of tariffs and indirect taxation. Let us suppose further that it could be done so quietly and so secretly that men with money would be able to buy up this whole mass of depreciated paper before its holders, principally ex-soldiers and very ordinary people, realized the import of the new authority. Then the next step would be a large fiscal operation by which the new and strong government would assume the entire volume of obligations, both state and continental.
In a short time these depreciated certificates would rise to par. Golden dream! A dream it was... but, as the virile, go-getting magazines tell us, there are men who make their dreams come true. The men behind the Constitution made theirs come true, to their great profit.
The first difficulty was how to begin. The Confederation had to be abolished. If a public campaign were started to do away with the government and put a new one in its place the substantial citizens of the country would have the whole pack of the debt-ridden and improvident clawing at them; and every little landless theorizer would put forth his plan. Perhaps there were more Shays than one in the country. Some of them might support their fatuous democratic ideas with armies. Besides, a general public knowledge of what they intended to do would vitiate the scheme from the outset. Even the most ignorant holder of certificates would hear of it eventually and keep his government paper, or sell it only at a high price. Into this circle of ideas there came the shrewd notion to call a conference of the states at Annapolis to consider commercial regulations, to devise a uniform system of duties, and so on. After commercial matters had been discussed, then other important matters might be taken up... or the convention might simply drift into a discussion of the general welfare. In this way it might be possible to devise a stronger government and one financially able to assume the debts -- under the guise of a commercial conference.
The Annapolis convention, which met in September, 1786, did nothing of any consequence. Only five states were represented. Alexander Hamilton was there... five states and Hamilton, who had been personal secretary to Washington during the Revolution and who would become our first Secretary of the Treasury. However, there was a sufficient interchange of views to give the delegates a sense of confidence, so before adjourning they passed a resolution in which they asked Congress to call a convention at Philadelphia, the following May, "to devise such further provisions as shall appear to them necessary to render the constitution of the federal government adequate to the exigencies of the Union, and to report to Congress such an act, as, when agreed to by them, and confirmed by the legislature of every state, would effectually provide for the same." This is how the Federal Convention, which created the Constitution for the United States, came into being.
There was something about the Federal Convention that makes one think of a meeting of the board of directors of a large and secretive corporation. Fifty-five sleek, well-to-do gentlemen sitting carelessly in a closed room. Gentlemen who know one another very well. Gentlemen of good manners who apologize for reading their letters in public. Esoteric jokes pass around with snuff-boxes of engraved silver. Mild and polite attendants. Tip-toeing doorkeepers, and keys that crunch loudly in their locks. In the chair of authority sits the impressive Washington... grey inside, but majestic outside. Boredom flickers in his eyes; he is grave, serious and bored. But he has the consciousness of doing his duty, the spiritual uplift of meeting expectations. Posterity has its eye on that assembly, and he knows it. No matter how others may act, posterity shall never say - with truth -- that George Washington failed in his part.
At Washington's side sits little Secretary Jackson, fumbling with his bewildered notes. He left them to posterity, too; but posterity has never been able to make head or tail of them. The delegates pledged themselves to secrecy, like a jury in a murder trial, and for four months they met behind locked doors with hardly a whisper of their proceedings reaching the open air. But some of the delegates took notes for their own use. Among the note-takers was James Madison. It is to him that we owe the most complete report of what occurred. During his lifetime he kept his notes in inviolable secrecy. They were published in 1840... fifty-three years after the Convention.
The general impression among the people of the country was that the Federal convention's powers were limited to a revision of the Articles of Confederation. This impression did not extend to the men in the Convention, nor to the well-informed elsewhere.
Dr. Charles A. Beard, outspoken and scholarly historian, has given us in his Economic Interpretation of the Constitution of the United States, a complete picture of the economic affiliations of each member of the Convention. Dr. Beard's book is based on first-hand research; it is ably documented and is streaming with footnotes and citations; one marvels at the patience of its author with fly-blown records. He has shown that at least five-sixths of the members of the Convention were holders of public securities or in some other economic sense were directly and personally interested in commercial affairs which would benefit by their labors. His data also prove that most of the members were lawyers by profession and that "not one member represented in his immediate personal economic interests the small farming or mechanic classes."
Among the members was Robert Morris, friend of Washington, and by far the most important financial figure of the nation. At that time the position of Robert Morris was something like the position of the elder J. P. Morgan one hundred and twenty-five years later. He was the acknowledged arbiter of American business.
Gouverneur Morris -- of the same name but not a relative of Robert Morris -- was also a member. This is the Morris, it will be remembered, who complained at the beginning of the Revolution that the people "begin to think and to reason." Slaveholders and Southern aristocrats were in evidence in numbers. Among them were the two Pinckneys and John Rutledge, of South Carolina. George Mason, Washington's former political mentor, sat with the Virginia delegates. He and Washington were not so friendly as in the old days. Mason was a states' rights man, jealous of crystallized authority and looking with a suspicious eye on all kinds of shrewd manipulation.
Land speculation and money-lending were among the economic interests of at least thirty-eight of the delegates, including Washington, Franklin, Gerry, Gorham and Wilson. Of the fifty-five delegates Dr. Beard says that the names of forty appear on the records of the Treasury Department as holders of public certificates.
The Convention was singularly lacking in doctrinaires, in idealists. Jefferson, the great idealist and humanitarian of the epoch, was in France as the official representative of the government. It was also lacking in a spirit of inquiry. One would naturally think that a body of men engaged in a constructive work of such immense possibilities would summon before them, day after day, citizens of all degrees and from all sections, in an effort to find out what was wrong and what was required to set it right. But they did not do this; they never summoned anybody. To have done so might have revealed the purport of the Convention, and they could not risk that contingency. The Constitution was planned like a coup d'etat; and that was its effect, in truth.
As soon as the Convention was organized Edmund Randolph rose and proposed an entirely new plan of government. It had the backing of the Virginia delegates -- including Washington, presumably -- and was supposed to have been the work of James Madison.
According to the Virginia plan the national legislature was to consist of two houses. The members of the lower house were to be chosen directly by male citizens entitled to vote. The members of the Senate were to be elected by the members of the lower house from a list of persons suggested by the legislatures of the various states. In both the upper and lower houses the votes were to be by individuals, and not by states.
The number of representatives from each state was to be in proportion to its wealth, or to its free inhabitants. An interesting feature was the power of Congress to veto state laws. This would have reduced the state legislatures to rather absurd nonentities.
There was to be a national executive, chosen by Congress for a short term, and ineligible for a second term. The Supreme Court, according to the Virginia plan, was to be chosen by Congress, and was to hold office during good behavior.
The Virginia plan was, in reality, the basis of the Constitution as it was finally shaped, though it was twisted and turned about so completely in the four months' discussion that it was all but forgotten.
The first dissension occurred over the relative representation of the states. Small states like Delaware and New Jersey saw themselves completely overshadowed and outvoted by the large delegations from the more populous states, if representation on the basis of population or wealth were conceded. Rhode Island would have probably taken this side, too, if that little state had been represented at the Convention. Rhode Island was invited, but declined to send delegates.
A compromise was reached eventually by giving two Senators to each state, irrespective of size. Alexander Hamilton thought the Virginia plan too liberal. On June 18th, he made a long and interesting speech, in which he set forth his theory of government. His remarks are too extensive for quotation in full, but I shall give a few pertinent extracts:
"My situation is disagreeable, but it would be criminal not to come forward on a question of such magnitude . . . I am at a loss to know what must be done -- I despair that a republican form of government can remove the difficulties . . . All communities divide themselves into the few and the many. The first are the rich and well-born, the other the mass of the people. The voice of the people has been said to be the voice of God; and however generally this maxim has been quoted and believed, it is not true in fact . . . It is admitted that you cannot have a good executive upon a democratic plan. See the excellency of the British executive -- he is placed above temptation -- he can have no distinct interests from the public welfare . . ."
He went on to say that he was in favor of having an Executive, or President, elected for life on good behavior. The lower house to be elected for three years by the voters of the states; and the Senate to be elected by electors who should be chosen for that purpose by the people. The Senators to remain in office during life. But all that is mere introduction. Here is the milk in this coconut:
"The Executive to have the power of negativing all laws -- to make war or peace with the advice of the Senate -- to make treaties with their advice, but to have the sole direction of all military operations, and to send ambassadors and appoint all military officers, and to pardon all offenders, treason excepted, unless by advice of the Senate. On his death or removal, the President of the Senate to officiate with the same powers, until another is elected. Supreme judicial officers to be appointed by the Executive and the Senate. The Legislature to appoint courts in each State, so as to make the State governments unnecessary to it.
All State laws to be absolutely void which contravene the general laws. An officer to be appointed in each State to have a negative on all State laws."
If we look into these ideas we observe that what he proposes is a monarchy with an elected king. The Executive would have an absolute veto power over the acts of Congress; and this veto would extend even to the state legislatures, as he suggests "an officer to be appointed in each state to have a negative on all state laws." A little too much, this was, for the gentlemen of the Convention. Hamilton never understood the American people, but most of the members of the Convention knew them very well.
However attractive Hamilton's plan may have been to some of the delegates, all realized that it could never hope for adoption in its raw form. Something much more evasive and soft-spoken would have to be framed. Hamilton's plan was received so coldly that his feelings were hurt, and he left the Convention and went home, where he stayed until the Convention had nearly finished its work.
Washington felt Hamilton's absence, and wanted him to return. On July 10th Washington wrote to him:
"When I refer you to the state of the counsels which prevailed at the period you left this city, and add that they are now, if possible, in a worse train than ever, you will find but little ground on which the hope of a good establishment can be formed. In a word, I almost despair of seeing a favorable issue to the proceedings of our convention, and do therefore repent having had any agency in the business."
Very characteristic of Washington. He was the most pessimistic great man in all history. Does not this letter remind you of his complaining epistles written during the Revolution, when he wishes that he had never accepted his position, and how much happier he would be as a private soldier? The letter files of America today are full of communications of this type, written by millionaires and heads of large enterprises. No accomplishment is quite good enough to satisfy them, the country is going to the dogs, profits are falling, salaries are high, labor is getting out of bounds, affairs are beginning to look panicky . . . The true executive mind is not a mind of optimism. Its tendency is to undervalue, to depreciate. Existing on the achievements of others, on the constructive work of subordinates, it spurs those around it to action by taking a gloomy view of their functions and their future.
On August 7th the Convention considered the qualifications for suffrage under the new Constitution. The wisdom of Mr. Gouverneur Morris spread around the room. He said, "The time is not distant when this Country will abound with mechanics and manufacturers [he means factory hands] who will receive their bread from their employers. Will such men be the secure and faithful Guardians of liberty?"
He concluded that they would not. Dr. Franklin thought that, "It is of great consequence that we should not depress the virtue and public spirit of our common people, of which they displayed a great deal during the war." There was a good deal of talk about "the dangers of the leveling spirit." Most of the members were of the opinion that there should be a property qualification, but the subject was such a delicate one that they decided to leave the matter to the states.
Charles Pinckney thought there ought to be a property qualification for members of the national legislature, for the president and the judges -- enough property, he said, to make them "independent and respectable." The trouble with this theory is that experience has shown that wealth does not necessarily make a man either independent or respectable. Mr. Pinckney's assertion inspired the aged Franklin to struggle to his feet and say that some of the worst rogues he ever knew were the richest rogues.
There was a bitter North-and-South contest over the question as to whether slaves should be counted in apportioning the number of representatives to the states. This discussion eventually broadened out into a general argument on the slavery question. All the states, except South Carolina, wanted to put "an end to the importation of slaves." If the slave trade was stopped the South Carolina delegates declared that their state would not enter the Union.
That moment was the opportunity of Washington's life, and he missed it. He had declared on many occasions that he was opposed to slavery -- and his influence was incalculably great. What would have happened if he had turned over the chair to some one else for a moment and had said from the floor of the Convention that as far as he was concerned South Carolina could stay out of the Union if she wanted to continue in the slave trade?
Certainly the Union could have existed without South Carolina... and it is equally certain that South Carolina could not have existed long without the Union. Washington said nothing; he did not express an opinion at the Convention but once, and that was on a trifling question of no weight.
What would Jefferson have done in Washington's place? We do not know, we can only guess... and guessing is not the province of the historian. Jefferson wrote in opposition to the slave trade before the Revolution. In writing the Declaration of Independence he inserted a clause against the slave trade which was stricken out by Congress. In 1784 he succeeded in putting the Northwest Ordinance through Congress in the face of bitter opposition. The Northwest Ordinance prohibited slavery forever in the territory north of the Ohio. Whether this would have been his attitude in the Federal Convention -- had he been there -- is a question without a solution.
In the end it was decided that, in determining the population basis for representation, a slave should be counted as three-fifths of a person. The New England delegates were not satisfied. They maintained that, if such a provision was adopted, they wanted every horse in New England to be counted as three-fifths of a person. Their argument seems reasonable, for slaves had no more to say about the government than horses.
In the matter of slave importation a compromise was made to the effect that slave-catching should continue until 1808. This satisfied South Carolina. The Constitution evolved through a series of compromises; everybody was willing to concede something, provided that the main object of a strong government -- and one able to cash in the depreciated paper -- should be the result.
The Constitution is a remarkably able production. The intention of its framers was to create an economic document which would protect established and acquired rights; and this intention has been carried out successfully under the smooth flow of phrases. It is highly self-protective, and is skillfully designed to break up and dissipate radical attacks on any of its fundamental axioms, while at the same time it permits a large freedom of movement to those who are entrenched behind it. It is full of curious subtleties which are revealed only after an earnest study. The keystone of the conservatism that it embodies is in the judiciary and the Senate. The judiciary -- the Supreme Court -- possesses a veto power over all acts of Congress which do not fall within the Constitution's narrow limits; and the Supreme Court is a body which maintains an unchanging existence during the lives of its members. But, even before the Supreme Court is encountered, a radical measure must run the gauntlet of the Senate... and the Senate is elected for six years. In the Senate the smaller states have an enormous preponderance, owing to the constitutional principle of equal representation. At the present time (1996) the Senators from twenty-six small states, having an aggregate population of less than one-fifth of the total number of inhabitants of the country, can negative any proposition, although it may have passed the House and represents the desire of four-fifths of the nation's citizens.
The Senate is always subject, more or less, to indirect manipulation through the ease with which strong financial interests may control the small states. Gladstone, who represented commercial interests all his life, said that the American Constitution is "the most wonderful work ever struck off at a given time by the brain and purpose of man."
Much of the credit for this wonderful work must be given to the brain and purpose of Gouverneur Morris. He was a member of the Committee of Style and Arrangement, which had charge of the actual drafting of the Constitution. Channing makes an interesting remark about the part played by Morris.
The actual phrasing seems to have been left to Morris; but he sometimes followed suggestions made by persons who were not members of the committee. The draft of the Constitution when it reappeared in the Convention was widely different in many respects from the project that had been committed to it. By changes in phraseology and arrangement and by the introduction here and there of phrases like "impair the obligation of contracts" the friends of strong government accomplished a large part of the purpose that had brought them to Philadelphia.
Washington's part in shaping the Constitution was negligible. He was the presiding officer of the Convention, and, as such, he refrained from taking part in the debates. Lodge and other adulators hazard the opinion that he must have exercised great influence through private conversation, and unofficially, but this is pure assumption. The records of the time do not mention any such conversations. Washington's Diaries contain no comment on the work of the Convention, but are given up chiefly to memoranda as to teas and dinners. On one occasion he went on a fishing trip with Gouverneur Morris to Valley Forge. While Morris was fishing, Washington rode around the old Valley Forge encampment, then silent and haunted by memories. What his emotions were -- if any -- he does not say.
There was considerable difficulty in getting some of the states to adopt the Constitution. Some of the legislatures were unruly, and its adoption wavered in the balance. In this crisis Hamilton, Madison and Jay came forward with a series of masterly essays which were published under the title of The Federalist.
We have seen what Hamilton's ideas of government were, but now he was all for the Constitution, though he said in private letters that the plan was as remote from his own ideas as anything could be.
When the question of adoption was in doubt the opponents of the Constitution circulated a report that Washington was not in favor of it. Upon hearing this rumor he came out with a statement -- expressed in various letters -- that he was for the Constitution, and urged its adoption.
One of the most significant facts about Washington's long and distinguished career is that he never formulated any coherent theory of government. Hamilton and Jefferson both worked out distinctly articulated systems of politics. Each stood for a definite, cogent set of ideas of social structure. But there is nothing in the body of American political thought that we can call Washingtonism.
At first impression his political character appears utterly nebulous. His writings are a vast Milky Way of hazy thoughts. We turn their thousands of pages, marking sentences and paragraphs here and there, hoping to assemble them and build up a substantial theory of the common weal. Can it be that this huge aggregation of words has no impressive import? We are about to think so; however, when we study them in detail we find that his observations are sensible, sane and practical. Yet, somehow, they do not coalesce; they lack a fundamental idea, a spirit that binds them all together. That was our first conclusion, but then we were thinking in terms of the great philosophies... of Rousseau, of Locke, of Adam Smith, of Voltaire, of Ricardo. Later, one day, we thought of the mind of the large city banker, and we saw Washington's political personality in a flash of revelation. Washington thought as almost any able banker who might find himself in the eighteenth century would think. The banker stands for stability, and Washington was for that. The banker stands for law and order, for land and mortgages, for substantial assets -- and Washington believed in them, too. The banker wants the nation to be prosperous; by that he means that he wants poor people to have plenty of work and wealthy people to have plenty of profits. That was Washington's ideal.
The banker does not want the under-dog to come on top; not that he hates the under-dog, but he is convinced that people who have not accumulated money lack the brains to carry on large affairs, and he is afraid they will disturb values. The banker is not without human sympathy; but he is for property first, and humanity second. He is a well-wisher of mankind, though in a struggle between men and property, he sympathizes with property. In this we see Washington's mind. A coherent political philosophy is not an impelling necessity to this type of intellect.
Taken from George Washington, the Image and the Man, by W.E. Woodward.
Pennsylvania State House - Independence Hall
And so it began...
May 25, 1787. Freshly spread dirt covered the cobblestone street in front of the Pennsylvania State House, protecting the men inside from the sound of passing carriages and carts. Guards stood at the entrances to ensure that the curious were kept at a distance. Robert Morris of Pennsylvania, the "financier" of the Revolution, opened the proceedings with a nomination--Gen. George Washington for the presidency of the Constitutional Convention. The vote was unanimous. With characteristic ceremonial modesty, the general expressed his embarrassment at his lack of qualifications to preside over such an august body and apologized for any errors into which he might fall in the course of its deliberations.
To many of those assembled, especially to the small, boyish-looking, 36-year-old delegate from Virginia, James Madison, who had arrived May 3rd, one of the first delegates on the scene, the general's mere presence boded well for the convention, for the illustrious Washington gave to the gathering an air of importance and legitimacy. But his decision to attend the convention had been an agonizing one. The Father of the Country had almost remained at home.
Suffering from rheumatism, despondent over the loss of a brother, absorbed in the management of Mount Vernon, and doubting that the convention would accomplish very much or that many men of stature would attend, Washington delayed accepting the invitation to attend for several months. Torn between the hazards of lending his reputation to a gathering perhaps doomed to failure and the chance that the public would view his reluctance to attend with a critical eye, the general finally agreed to make the trip. James Madison was pleased.
The determined Madison had for several years insatiably studied history and political theory searching for a solution to the political and economic dilemmas he saw plaguing America. The Virginian's labors convinced him of the futility and weakness of confederacies of independent states. America's own government under the Articles of Confederation, Madison was convinced, had to be replaced.
In force since 1781, established as a "league of friendship" and a constitution for the 13 sovereign and independent states after the Revolution, the articles seemed to Madison woefully inadequate. With the states retaining considerable power, the central government, he believed, had insufficient power to regulate commerce. It could not tax and was generally impotent in setting commercial policy. It could not effectively support a war effort. It had little power to settle quarrels between states. Saddled with this weak government, the states were on the brink of economic disaster. The evidence was overwhelming. Congress was attempting to function with a depleted treasury; paper money was flooding the country, creating extraordinary inflation--a pound of tea in some areas could be purchased for a tidy $100; and the depressed condition of business was taking its toll on many small farmers. Some of them were being thrown in jail for debt, and numerous farms were being confiscated and sold for taxes.
In 1786 some of the farmers had fought back. Led by Daniel Shays, a former captain in the Continental army, a group of armed men, sporting evergreen twigs in their hats, prevented the circuit court from sitting at Northampton, MA, and threatened to seize muskets stored in the arsenal at Springfield. Although the insurrection was put down by state troops, the incident confirmed the fears of many wealthy men that anarchy was just around the corner. Embellished day after day in the press, the uprising made upper-class Americans shudder as they imagined hordes of vicious outlaws descending upon innocent citizens. From his idyllic Mount Vernon setting, Washington wrote to Madison:
"Wisdom and good examples are necessary at this time to rescue the political machine from the impending storm."
Madison thought he had the answer. He wanted a strong central government to provide order and stability. "Let it be tried then," he wrote, "whether any middle ground can be taken which will at once support a due supremacy of the national authority," while maintaining state power only when "subordinately useful." The resolute Virginian looked to the Constitutional Convention to forge a new government in this mold.
The convention had its specific origins in a proposal offered by Madison and John Tyler in the Virginia assembly that the Continental Congress be given power to regulate commerce throughout the Confederation. Through their efforts in the assembly a plan was devised inviting the several states to attend a convention at Annapolis, MD, in September 1786 to discuss commercial problems. Madison and a young lawyer from New York named Alexander Hamilton issued a report on the meeting in Annapolis, calling upon Congress to summon delegates of all of the states to meet for the purpose of revising the Articles of Confederation. Although the report was widely viewed as a usurpation of congressional authority, the Congress did issue a formal call to the states for a convention. To Madison it represented the supreme chance to reverse the country's trend. And as the delegations gathered in Philadelphia, its importance was not lost on others. The squire of Gunston Hall, George Mason, wrote to his son, "The Eyes of the United States are turned upon this Assembly and their Expectations raised to a very anxious Degree. May God Grant that we may be able to gratify them, by establishing a wise and just Government."
Seventy-four delegates were appointed to the convention, of whom 55 actually attended sessions. Rhode Island was the only state that refused to send delegates. Dominated by men wedded to paper currency, low taxes, and popular government, Rhode Island's leaders refused to participate in what they saw as a conspiracy to overthrow the established government. Other Americans also had their suspicions.
Patrick Henry, of the flowing red Glasgow cloak and the magnetic oratory, refused to attend, declaring he "smelt a rat." He suspected, correctly, that Madison had in mind the creation of a powerful central government and the subversion of the authority of the state legislatures. Henry, along with many other political leaders, believed that the state governments offered the chief protection for personal liberties. He was determined not to lend a hand to any proceeding that seemed to pose a threat to that protection.
With Henry absent, with such towering figures as Jefferson and Adams abroad on foreign missions, and with John Jay in New York at the Foreign Office, the convention was without some of the country's major political leaders. It was, nevertheless, an impressive assemblage. In addition to Madison and Washington, there were Benjamin Franklin of Pennsylvania -- crippled by gout, the 81-year-old Franklin was a man of many dimensions: printer, storekeeper, publisher, scientist, public official, philosopher, diplomat, and ladies' man; James Wilson of Pennsylvania -- a distinguished lawyer with a penchant for ill-advised land-jobbing schemes, which would force him late in life to flee from state to state avoiding prosecution for debt, the Scotsman brought a profound mind steeped in constitutional theory and law; Alexander Hamilton of New York -- a brilliant, ambitious former aide-de-camp and secretary to Washington during the Revolution who had, after his marriage into the Schuyler family of New York, become a powerful political figure; George Mason of Virginia -- the author of the Virginia Bill of Rights whom Jefferson later called "the Cato of his country without the avarice of the Roman"; John Dickinson of Delaware -- the quiet, reserved author of the "Farmers' Letters" and chairman of the congressional committee that framed the articles; and Gouverneur Morris of Pennsylvania -- well versed in French literature and language, with a flair and bravado to match his keen intellect, who had helped draft the New York State Constitution and had worked with Robert Morris in the Finance Office.
There were others who played major roles - Oliver Ellsworth of Connecticut; Edmund Randolph of Virginia; William Paterson of New Jersey; John Rutledge of South Carolina; Elbridge Gerry of Massachusetts; Roger Sherman of Connecticut; Luther Martin of Maryland; and the Pinckneys, Charles and Charles Cotesworth, of South Carolina. Franklin was the oldest member, and Jonathan Dayton, the 27-year-old delegate from New Jersey, was the youngest. The average age was 42. Most of the delegates had studied law, had served in colonial or state legislatures, or had been in the Congress. Well versed in philosophical theories of government advanced by such philosophers as James Harrington, John Locke, and Montesquieu, profiting from experience gained in state politics, the delegates composed an exceptional body, one that left a remarkably learned record of debate.
Fortunately we have a relatively complete record of the proceedings, thanks to the indefatigable James Madison. Day after day, the Virginian sat in front of the presiding officer, compiling notes of the debates, not missing a single day or a single major speech. He later remarked that his self-confinement in the hall, which was often oppressively hot in the Philadelphia summer, almost killed him.
The sessions of the convention were held in secret--no reporters or visitors were permitted. Although many of the naturally loquacious members were prodded in the pubs and on the streets, most remained surprisingly discreet. To those suspicious of the convention, the curtain of secrecy only served to confirm their anxieties. Luther Martin of Maryland later charged that the conspiracy in Philadelphia needed a quiet breeding ground. Thomas Jefferson wrote John Adams from Paris, "I am sorry they began their deliberations by so abominable a precedent as that of tying up the tongues of their members."
On Tuesday morning, May 29, Edmund Randolph, the tall, 34-year-old governor of Virginia, opened the debate with a long speech decrying the evils that had befallen the country under the Articles of Confederation and stressing the need for creating a strong national government. Randolph then outlined a broad plan that he and his Virginia compatriots had, through long sessions at the Indian Queen tavern, put together in the days preceding the convention. James Madison had had such a plan in mind for years. The proposed government had three branches--legislative, executive, and judicial--each branch structured to check the other. Highly centralized, the government would have veto power over laws enacted by state legislatures. The plan, Randolph confessed, "meant a strong consolidated union in which the idea of states should be nearly annihilated." This was, indeed, the rat so offensive to Patrick Henry.
The introduction of the so-called Virginia Plan at the beginning of the convention was a tactical coup. The Virginians had forced the debate into their own frame of reference and in their own terms.
For 10 days the members of the convention discussed the sweeping and, to many delegates, startling Virginia resolutions. The critical issue, described succinctly by Gouverneur Morris on May 30, was the distinction between a federation and a national government, the "former being a mere compact resting on the good faith of the parties; the latter having a compleat and compulsive operation." Morris favored the latter, a "supreme power" capable of exercising necessary authority, not merely a shadow government, fragmented and hopelessly ineffective.
This nationalist position revolted many delegates who cringed at the vision of a central government swallowing state sovereignty. On June 13 delegates from smaller states rallied around proposals offered by New Jersey delegate William Paterson. Railing against efforts to throw the states into "hotchpot," Paterson proposed a "union of the States merely federal." The "New Jersey resolutions" called only for a revision of the articles to enable the Congress more easily to raise revenues and regulate commerce. It also provided that acts of Congress and ratified treaties be "the supreme law of the States."
For 3 days the convention debated Paterson's plan, finally voting for rejection. With the defeat of the New Jersey resolutions, the convention was moving toward creation of a new government, much to the dismay of many small-state delegates. The nationalists, led by Madison, appeared to have the proceedings in their grip. In addition, they were able to persuade the members that any new constitution should be ratified through conventions of the people and not by the Congress and the state legislatures -- another tactical coup. Madison and his allies believed that the constitution they had in mind would likely be scuttled in the legislatures, where many state political leaders stood to lose power. The nationalists wanted to bring the issue before "the people," where ratification was more likely.
On June 18 Alexander Hamilton presented his own ideal plan of government. Erudite and polished, the speech, nevertheless, failed to win a following. It went too far. Calling the British government "the best in the world," Hamilton proposed a model strikingly similar: an executive to serve during good behavior or life with veto power over all laws; a senate with members serving during good behavior; the legislature to have power to pass "all laws whatsoever." Hamilton later wrote to Washington that the people were now willing to accept "something not very remote from that which they have lately quitted." What the people had "lately quitted," of course, was monarchy.
Some members of the convention fully expected the country to turn in this direction. Hugh Williamson of North Carolina, a wealthy physician, declared that it was "pretty certain . . . that we should at some time or other have a king." Newspaper accounts appeared in the summer of 1787 alleging that a plot was under way to invite the second son of George III, Frederick, Duke of York, the secular bishop of Osnaburgh in Prussia, to become "king of the United States."
Strongly militating against any serious attempt to establish monarchy was the enmity so prevalent in the revolutionary period toward royalty and the privileged classes. Some state constitutions had even prohibited titles of nobility. In the same year as the Philadelphia convention, Royall Tyler, a revolutionary war veteran, in his play The Contrast, gave his own jaundiced view of the upper classes:
Exult each patriot heart! this night is shewn
A piece, which we may fairly call our own;
Where the proud titles of "My Lord!" "Your Grace!"
To humble Mr. and plain Sir give place.
Most delegates were well aware that there were too many Royall Tylers in the country, with too many memories of British rule and too many ties to a recent bloody war, to accept a king. As the debate moved into the specifics of the new government, Alexander Hamilton and others of his persuasion would have to accept something less.
By the end of June, debate between the large and small states over the issue of representation in the first chamber of the legislature was becoming increasingly acrimonious. Delegates from Virginia and other large states demanded that voting in Congress be according to population; representatives of smaller states insisted upon the equality they had enjoyed under the articles. With the oratory degenerating into threats and accusations, Benjamin Franklin appealed for daily prayers. Dressed in his customary gray homespun, the aged philosopher pleaded that "the Father of lights . . . illuminate our understandings." Franklin's appeal for prayers was never fulfilled; the convention, as Hugh Williamson noted, had no funds to pay a preacher.
On June 29 the delegates from the small states lost the first battle. The convention approved a resolution establishing population as the basis for representation in the House of Representatives, thus favoring the larger states. On a subsequent small-state proposal that the states have equal representation in the Senate, the vote resulted in a tie. With large-state delegates unwilling to compromise on this issue, one member thought that the convention "was on the verge of dissolution, scarce held together by the strength of an hair."
By July 10 George Washington was so frustrated over the deadlock that he bemoaned "having had any agency" in the proceedings and called the opponents of a strong central government "narrow minded politicians . . . under the influence of local views." Luther Martin of Maryland, perhaps one whom Washington saw as "narrow minded," thought otherwise. A tiger in debate, not content merely to parry an opponent's argument but determined to bludgeon it into eternal rest, Martin had become perhaps the small states' most effective, if irascible, orator. The Marylander leaped eagerly into the battle on the representation issue declaring, "The States have a right to an equality of representation. This is secured to us by our present articles of confederation; we are in possession of this privilege."
Also crowding into this complicated and divisive discussion over representation was the North-South division over the method by which slaves were to be counted for purposes of taxation and representation. On July 12 Oliver Ellsworth proposed that representation for the lower house be based on the number of free persons and three-fifths of "all other persons," a euphemism for slaves. In the following week the members finally compromised, agreeing that direct taxation be according to representation and that the representation of the lower house be based on the white inhabitants and three-fifths of the "other people." With this compromise and with the growing realization that such compromise was necessary to avoid a complete breakdown of the Convention, the members then approved Senate equality. Roger Sherman had remarked that it was the wish of the delegates "that some general government should be established." With the crisis over representation now settled, it began to look again as if this wish might be fulfilled.
For the next few days the air in the City of Brotherly Love, although insufferably muggy and swarming with blue-bottle flies, had the clean scent of conciliation. In this period of welcome calm, the members decided to appoint a Committee of Detail to draw up a draft constitution. The convention would now at last have something on paper. As Nathaniel Gorham of Massachusetts, John Rutledge, Edmund Randolph, James Wilson, and Oliver Ellsworth went to work, the other delegates voted themselves a much needed 10-day vacation.
During the adjournment, Gouverneur Morris and George Washington rode out along a creek that ran through land that had been part of the Valley Forge encampment 10 years earlier. While Morris cast for trout, Washington pensively looked over the now lush ground where his freezing troops had suffered, at a time when it had seemed as if the American Revolution had reached its end. The country had come a long way.
On Monday August 6, 1787, the convention accepted the first draft of the Constitution. Here was the article-by-article model from which the final document would result some 5 weeks later. As the members began to consider the various sections, the willingness to compromise of the previous days quickly evaporated. The most serious controversy erupted over the question of regulation of commerce. The southern states, exporters of raw materials, rice, indigo, and tobacco, were fearful that a New England-dominated Congress might, through export taxes, severely damage the South's economic life. C. C. Pinckney declared that if Congress had the power to regulate trade, the southern states would be "nothing more than overseers for the Northern States."
On August 21 the debate over the issue of commerce became very closely linked to another explosive issue--slavery. When Martin of Maryland proposed a tax on slave importation, the convention was thrust into a strident discussion of the institution of slavery and its moral and economic relationship to the new government. Rutledge of South Carolina, asserting that slavery had nothing at all to do with morality, declared, "Interest alone is the governing principle with nations." Sherman of Connecticut was for dropping the tender issue altogether before it jeopardized the convention. Mason of Virginia expressed concern over unlimited importation of slaves but later indicated that he also favored federal protection of slave property already held. This nagging issue of possible federal intervention in slave traffic, which Sherman and others feared could irrevocably split northern and southern delegates, was settled by, in Mason's words, "a bargain." Mason later wrote that delegates from South Carolina and Georgia, who most feared federal meddling in the slave trade, made a deal with delegates from the New England states. In exchange for the New Englanders' support for continuing slave importation for 20 years, the southerners accepted a clause that required only a simple majority vote on navigation laws, a crippling blow to southern economic interests.
The bargain was also a crippling blow to those working to abolish slavery. Congregationalist minister and abolitionist Samuel Hopkins of Connecticut charged that the convention had sold out:
"How does it appear . . . that these States, who have been fighting for liberty and consider themselves as the highest and most noble example of zeal for it, cannot agree in any political Constitution, unless it indulge and authorize them to enslave their fellow men . . . Ah! these unclean spirits, like frogs, they, like the Furies of the poets are spreading discord, and exciting men to contention and war." Hopkins considered the Constitution a document fit for the flames.
On August 31 a weary George Mason, who had 3 months earlier written so expectantly to his son about the "great Business now before us," bitterly exclaimed that he "would sooner chop off his right hand than put it to the Constitution as it now stands." Mason despaired that the convention was rushing to saddle the country with an ill-advised, potentially ruinous central authority. He was concerned that a "bill of rights," ensuring individual liberties, had not been made part of the Constitution. Mason called for a new convention to reconsider the whole question of the formation of a new government. Although Mason's motion was overwhelmingly voted down, opponents of the Constitution did not abandon the idea of a new convention. It was futilely suggested again and again for over 2 years.
One of the last major unresolved problems was the method of electing the executive. A number of proposals, including direct election by the people, by state legislatures, by state governors, and by the national legislature, were considered. The result was the electoral college, a master stroke of compromise, quaint and curious but politically expedient. The large states got proportional strength in the number of delegates, the state legislatures got the right of selecting delegates, and the House the right to choose the president in the event no candidate received a majority of electoral votes. Mason later predicted that the House would probably choose the president 19 times out of 20.
In the early days of September, with the exhausted delegates anxious to return home, compromise came easily. On September 8 the convention was ready to turn the Constitution over to a Committee of Style and Arrangement. Gouverneur Morris was the chief architect. Years later he wrote to Timothy Pickering: "That Instrument was written by the Fingers which wrote this letter." The Constitution was presented to the convention on September 12, and the delegates methodically began to consider each section. Although close votes followed on several articles, it was clear that the grueling work of the convention in the historic summer of 1787 was reaching its end.
Before the final vote on the Constitution on September 15, Edmund Randolph proposed that amendments be made by the state conventions and then turned over to another general convention for consideration. He was joined by George Mason and Elbridge Gerry. The three lonely allies were soundly rebuffed. Late in the afternoon the roll of the states was called on the Constitution, and from every delegation the word was "Aye."
Printers John Dunlap and David Claypoole had been hired at the beginning of the work of the convention, and had printed successive drafts for study and correction. They, and the delegates, were sworn to secrecy while the work was in progress, and that secrecy was never breached. Finally, on September 15, 1787, the final printed draft was reviewed, and the Convention finished its work.
George Washington ordered 500 copies printed and distributed. Then he ordered that the Constitution should be engrossed, rendered into handwriting as a final legal formality.
On September 17 the members met for the last time, and the engrossed copy, prepared by Jacob Shallus, assistant clerk of the Pennsylvania Assembly, was read in Convention.
The venerable Franklin had written a speech that was delivered by his colleague James Wilson. Appealing for unity behind the Constitution, Franklin declared, "I think it will astonish our enemies, who are waiting with confidence to hear that our councils are confounded like those of the builders of Babel; and that our States are on the point of separation, only to meet hereafter for the purpose of cutting one another's throats. Thus I consent, Sir, to this Constitution because I expect no better, and because I am not sure, that it is not the best."
With Mason, Gerry, and Randolph withstanding appeals to attach their signatures, the other delegates in the hall formally signed the Constitution, and the convention adjourned at 4 o'clock in the afternoon.
Weary from weeks of intense pressure but generally satisfied with their work, the delegates shared a farewell dinner at City Tavern. Two blocks away on Market Street, Dunlap and Claypoole worked into the night on the final imprint of the six-page Constitution, 500 copies of which would leave Philadelphia on the morning stage. The debate over the nation's form of government was now set for the larger arena.
As the members of the convention returned home in the following days, Alexander Hamilton privately assessed the chances of the Constitution for ratification. In its favor were the support of Washington, commercial interests, men of property, creditors, and the belief among many Americans that the Articles of Confederation were inadequate. Against it were the opposition of a few influential men in the convention and state politicians fearful of losing power, the general revulsion against taxation, the suspicion that a centralized government would be insensitive to local interests, and the fear among debtors that a new government would "restrain the means of cheating Creditors."
Because of its size, wealth, and influence and because it was the first state to call a ratifying convention, Pennsylvania was the focus of national attention. The positions of the Federalists, those who supported the Constitution, and the anti-Federalists, those who opposed it, were printed and reprinted by scores of newspapers across the country. And passions in the state were most warm. When the Federalist-dominated Pennsylvania assembly lacked a quorum on September 29 to call a state ratifying convention, a Philadelphia mob, in order to provide the necessary numbers, dragged two anti-Federalist members from their lodgings through the streets to the State House where the bedraggled representatives were forced to stay while the assembly voted. It was a curious example of participatory democracy.
On October 5 anti-Federalist Samuel Bryan published the first of his "Centinel" essays in Philadelphia's Independent Gazetteer. Republished in newspapers in various states, the essays assailed the sweeping power of the central government, the usurpation of state sovereignty, and the absence of a bill of rights guaranteeing individual liberties such as freedom of speech and freedom of religion. "The United States are to be melted down," Bryan declared, into a despotic empire dominated by "well-born" aristocrats. Bryan was echoing the fear of many anti-Federalists that the new government would become one controlled by the wealthy established families and the culturally refined. The common working people, Bryan believed, were in danger of being subjugated to the will of an all-powerful authority remote and inaccessible to the people. It was this kind of authority, he believed, that Americans had fought a war against only a few years earlier.
The next day James Wilson, delivering a stirring defense of the Constitution to a large crowd gathered in the yard of the State House, praised the new government as the best "which has ever been offered to the world." The Scotsman's view prevailed. Led by Wilson, Federalists dominated in the Pennsylvania convention, carrying the vote on December 12 by a healthy 46 to 23.
The vote for ratification in Pennsylvania did not end the rancor and bitterness. Franklin declared that scurrilous articles in the press were giving the impression that Pennsylvania was "peopled by a set of the most unprincipled, wicked, rascally and quarrelsome scoundrels upon the face of the globe." And in Carlisle, on December 26, anti-Federalist rioters broke up a Federalist celebration and hung Wilson and the Federalist chief justice of Pennsylvania, Thomas McKean, in effigy; put the torch to a copy of the Constitution; and busted a few Federalist heads.
In New York the Constitution was under siege in the press by a series of essays signed "Cato." Mounting a counterattack, Alexander Hamilton and John Jay enlisted help from Madison and, in late 1787, they published the first of a series of essays now known as the Federalist Papers. The 85 essays, most of which were penned by Hamilton himself, probed the weaknesses of the Articles of Confederation and the need for an energetic national government. Thomas Jefferson later called the Federalist Papers the "best commentary on the principles of government ever written."
Against this kind of Federalist leadership and determination, the opposition in most states was disorganized and generally inert. The leading spokesmen were largely state-centered men with regional and local interests and loyalties. Madison wrote of the Massachusetts anti-Federalists, "There was not a single character capable of uniting their wills or directing their measures . . . They had no plan whatever." The anti-Federalists attacked wildly on several fronts: the lack of a bill of rights, discrimination against southern states in navigation legislation, direct taxation, the loss of state sovereignty. Many charged that the Constitution represented the work of aristocratic politicians bent on protecting their own class interests. At the Massachusetts convention one delegate declared, "These lawyers, and men of learning and moneyed men, that . . . make us poor illiterate people swallow down the pill . . . they will swallow up all us little folks like the great Leviathan; yes, just as the whale swallowed up Jonah!" Some newspaper articles, presumably written by anti-Federalists, resorted to fanciful predictions of the horrors that might emerge under the new Constitution; pagans and deists could control the government; the use of Inquisition-like torture could be instituted as punishment for federal crimes; even the pope could be elected president.
One anti-Federalist argument gave opponents some genuine difficulty -- the claim that the territory of the 13 states was too extensive for a representative government. In a republic embracing a large area, anti-Federalists argued, government would be impersonal, unrepresentative, dominated by men of wealth, and oppressive of the poor and working classes. Had not the illustrious Montesquieu himself ridiculed the notion that an extensive territory composed of varying climates and people, could be a single republican state? James Madison, always ready with the Federalist volley, turned the argument completely around and insisted that the vastness of the country would itself be a strong argument in favor of a republic. Claiming that a large republic would counterbalance various political interest groups vying for power, Madison wrote, "The smaller the society the fewer probably will be the distinct parties and interests composing it; the fewer the distinct parties and interests, the more frequently will a majority be found of the same party and the more easily will they concert and execute their plans of oppression." Extend the size of the republic, Madison argued, and the country would be less vulnerable to separate factions within it.
By January 9, 1788, five states of the nine necessary for ratification had approved the Constitution -- Delaware, Pennsylvania, New Jersey, Georgia, and Connecticut. But the eventual outcome remained uncertain in pivotal states such as Massachusetts, New York, and Virginia. On February 6, with Federalists agreeing to recommend a list of amendments amounting to a bill of rights, Massachusetts ratified by a vote of 187 to 168. The revolutionary leader, John Hancock, elected to preside over the Massachusetts ratifying convention but unable to make up his mind on the Constitution, took to his bed with a convenient case of gout. Later seduced by the Federalists with visions of the vice presidency and possibly the presidency, Hancock, whom Madison noted as "an idolater of popularity," suddenly experienced a miraculous cure and delivered a critical block of votes. Although Massachusetts was now safely in the Federalist column, the recommendation of a bill of rights was a significant victory for the anti-Federalists. Six of the remaining states later appended similar recommendations.
When the New Hampshire convention was adjourned by Federalists who sensed imminent defeat and when Rhode Island on March 24 turned down the Constitution in a popular referendum by an overwhelming vote of 10 to 1, Federalist leaders were apprehensive. Looking ahead to the Maryland convention, Madison wrote to Washington, "The difference between even a postponement and adoption in Maryland may . . . possibly give a fatal advantage to that which opposes the constitution." Madison had little reason to worry. The final vote on April 28 was 63 for, 11 against. In Baltimore, a huge parade celebrating the Federalist victory rolled through the downtown streets, highlighted by a 15-foot float called "Ship Federalist." The symbolically seaworthy craft was later launched in the waters off Baltimore and sailed down the Potomac to Mount Vernon.
On July 2, 1788, the Confederation Congress, meeting in New York, received word that a reconvened New Hampshire ratifying convention had approved the Constitution. With South Carolina's acceptance of the Constitution in May, New Hampshire thus became the ninth state to ratify. The Congress appointed a committee "for putting the said Constitution into operation."
In the next 2 months, thanks largely to the efforts of Madison and Hamilton in their own states, Virginia and New York both ratified while adding their own amendments. The margin for the Federalists in both states, however, was extremely close. Hamilton figured that the majority of the people in New York actually opposed the Constitution, and it is probable that a majority of people in the entire country opposed it. Only the promise of amendments had ensured a Federalist victory.
The call for a bill of rights had been the anti-Federalists' most powerful weapon. Attacking the proposed Constitution for its vagueness and lack of specific protection against tyranny, Patrick Henry asked the Virginia convention, "What can avail your specious, imaginary balances, your rope-dancing, chain-rattling, ridiculous ideal checks and contrivances." The anti-Federalists, demanding a more concise, unequivocal Constitution, one that laid out for all to see the rights of the people as declaratory and restrictive limitations of the power of government, claimed that the brevity of the document only revealed its inferior nature. Richard Henry Lee despaired at the lack of provisions to protect "those essential rights of mankind without which liberty cannot exist." Trading the old government for the new without such a bill of rights, Lee argued, would be trading Scylla for Charybdis.
A bill of rights had been barely mentioned in the Philadelphia convention, most delegates holding that the fundamental rights of individuals had been secured in the state constitutions. James Wilson maintained that a bill of rights was superfluous because all power not expressly delegated to the new government was reserved to the people. It was clear, however, that in this argument the anti-Federalists held the upper hand.
Thomas Jefferson, on the scene in Europe at the dawn of the French Revolution, and generally in favor of the new government, saw things in a different light. The people were always entitled to the most sovereign guarantees of their personal liberties. He wrote to Madison that whatever the deficiencies of a "parchment barrier" they were better than no barriers. "The inconveniences of the Declaration [of Rights] are that it may cramp government in its useful exertions. But the evil of this is shortlived, moderate and reparable. The inconveniences of the want of a Declaration are permanent, afflicting and irreparable; they are in constant progression from bad to worse." A bill of rights was "what the people are entitled to against every government on earth."
By the fall of 1788, aided by his own reflection and Jefferson's letter, Madison became convinced that not only was a bill of rights necessary to ensure acceptance of the Constitution but that it would have positive effects. He wrote, on October 17, that such "fundamental maxims of free Government" would be "a good ground for an appeal to the sense of community" against potential oppression and would "counteract the impulses of interest and passion."
Madison's support of the bill of rights was of critical significance. In the face of formidable apathy, even among those who had paraded in this cause, Madison almost single-handedly framed and pushed through the First Congress those amendments to the Constitution that compose the "Bill of Rights." As one of the new representatives from Virginia to the First Federal Congress, as established by the new Constitution, he worked tirelessly to persuade the House to enact amendments. Defusing the anti-Federalists' objections to the Constitution, Madison was able to shepherd through 17 amendments in the early months of the Congress, a list that was later trimmed to 12 in the Senate. On October 2, 1789, President Washington sent to each of the states a true copy of the resolve regarding the 12 amendments as adopted by the Congress in September. By December 15, 1791, three-fourths of the states had ratified the 10 amendments now so familiar to Americans as the "Bill of Rights."
Benjamin Franklin told a French correspondent in 1788 that the formation of the new government had been like a game of dice, with many players of diverse prejudices and interests unable to make any uncontested moves. Madison wrote to Jefferson that the welding of these clashing interests was "a task more difficult than can be well conceived by those who were not concerned in the execution of it." When the delegates left Philadelphia after the convention, few, if any, were convinced that the Constitution they had approved outlined the ideal form of government for the country. But late in his life James Madison scrawled out another letter, one never addressed. In it he declared that no government can be perfect, and "that which is the least imperfect is therefore the best government."
The fate of the United States Constitution after its signing on September 17, 1787, can be contrasted sharply to the travels and physical abuse of America's other great parchment, the Declaration of Independence. As the Continental Congress, during the years of the revolutionary war, scurried from town to town, the rolled-up Declaration was carried along. After the formation of the new government under the Constitution, the one-page Declaration, eminently suited for display purposes, graced the walls of various government buildings in Washington, exposing it to prolonged damaging sunlight. It was also subjected to the work of early calligraphers responding to a demand for reproductions of the revered document. As any visitor to the National Archives can readily observe, the early treatment of the now barely legible Declaration took a disastrous toll.
The Constitution, in excellent physical condition after more than 200 years, has enjoyed a more serene existence. By 1796 the Constitution was in the custody of the Department of State along with the Declaration and traveled with the federal government from New York to Philadelphia to Washington. Both documents were secretly moved to Leesburg, VA, before the imminent attack by the British on Washington in 1814.
Following the war, the Constitution remained in the State Department while the Declaration continued its travels--to the Patent Office Building from 1841 to 1876, to Independence Hall in Philadelphia during the Centennial celebration, and back to Washington in 1877. On September 29, 1921, President Warren Harding issued an Executive order transferring the Constitution and the Declaration to the Library of Congress for preservation and exhibition.
The next day the Librarian of Congress, Herbert Putnam, acting on authority of Secretary of State Charles Evans Hughes, carried the Constitution and the Declaration in a Model-T Ford truck to the library and placed them in his office safe until an appropriate exhibit area could be constructed. The documents were officially put on display at a ceremony in the library on February 28, 1924. On February 20, 1933, at the laying of the cornerstone of the future National Archives Building, President Herbert Hoover remarked, "There will be aggregated here the most sacred documents of our history--the originals of the Declaration of Independence and of the Constitution of the United States."
The two documents, however, were not immediately transferred to the Archives. During World War II both were moved from the library to Fort Knox for protection and returned to the library in 1944. It was not until successful negotiations were completed between Librarian of Congress Luther Evans and Archivist of the United States Wayne Grover that the transfer to the National Archives was finally accomplished by special direction of the Joint Congressional Committee on the Library.
On December 13, 1952, the Constitution and the Declaration were placed in helium-filled cases, enclosed in wooden crates, laid on mattresses in an armored Marine Corps personnel carrier, and escorted by ceremonial troops, two tanks, and four servicemen carrying submachine guns down Pennsylvania and Constitution avenues to the National Archives. Two days later, President Harry Truman declared at a formal ceremony in the Archives Exhibition Hall:
"We are engaged here today in a symbolic act. We are enshrining these documents for future ages. This magnificent hall has been constructed to exhibit them, and the vault beneath, that we have built to protect them, is as safe from destruction as anything that the wit of modern man can devise. All this is an honorable effort, based upon reverence for the great past, and our generation can take just pride in it."
Taken from the U.S. Archives
The Documents Themselves are Preserved for Posterity,
But Are Liberty and Freedom?
ONLY TRUE PATRIOTS CAN ANSWER THAT QUESTION.
In Honor and Loving Memory of My Teacher on the Constitution
Dean Lewis Hardison, "Pop"
August 2nd, 1910 - September 18th, 1982
Three mighty important things, Pardn'r, LOVE And PEACE and FREEDOM | http://www.barefootsworld.net/consti15.html | 13 |
59 | Critical Thinking Strategies Guide
Bloom (Bloom, Englehart, Furst, Hill & Krathwohl, 1956) developed a classification of levels of intellectual behavior in learning. This taxonomy contained three domains: the cognitive, psychomotor, and affective. Within the cognitive domain, Bloom identified six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. This domain and all levels are still useful today in developing the critical thinking skills of students. Teaching critical thinking skills is one of the greatest challenges facing teachers in the classroom today. The most widely used model for the development of higher level thinking skills is Bloom’s Taxonomy.
Students must be guided to become producers of knowledge. An essential instructional task of the teacher is to design activities or to create an environment that allows students opportunities to engage in higher-order thinking (Queensland Department of Education, 2002). With the Critical Thinking Strategies Guide, the teacher can incorporate all levels of the taxonomy to plan questions and learning activities in every subject area. This resource allows a teacher to individualize learning according to the interests, abilities, and specific learning needs present in the differentiated classroom, from special needs students to students in gifted education. There are a number of independent and collaborative activities where students can become active participants while acquiring and applying critical thinking.
The Critical Thinking Strategies Guide also supports the differing learning styles within a classroom. Students learn and excel when provided multiple, varied opportunities. A classroom that offers an array of learning experiences increases the likelihood of success for more students (Gardner, 1983; Dunn and Dunn, 1978). Studies involving multi-sensory teaching experiences show students achieve more gains in learning than when taught with a single approach, whether it is a visual or an auditory approach (Farkas, 2003; Maal, 2004). Multi-sensory instruction or a combination of approaches appears to create the optimal learning setting, even for students with disabilities (Clark and Uhry, 1995). The variety in formats for students to demonstrate their learning has the potential to improve student interest, increase student interaction, and extend classroom learning. This educational tool contributes to the creation of a powerful learning environment by allowing students to be active participants and take more responsibility in their own learning.
Critical thinking is cited as an important issue in education today. Attention is focused on good thinking as an important element of life success (Huitt, 1998; Thomas and Smoot, 1994). “Perhaps most importantly in today’s information age, thinking skills are viewed as crucial for educated persons to cope with a rapidly changing world. Many educators believe that specific knowledge will not be as important to tomorrow’s workers and citizens as the ability to learn and make sense of new information”(Gough, 1991).
The ability to engage in careful, reflective thought is viewed in education as paramount. Teaching students to become skilled thinkers is a goal of education. Students must be able to acquire and process information since the world is changing so quickly. Some studies purport that students exhibit an insufficient level of skill in critical or creative thinking. In his review of research on critical thinking, Norris (1985) surmised that students’ critical thinking abilities are not widespread. Most students do not score well on tests that measure ability to recognize assumptions, evaluate controversy, and scrutinize inferences.
Thus, students’ performances on measures of higher-order thinking ability reveal a critical need for students to develop the skills and attitudes of effective thinking. Furthermore, another reason that supports the need for incorporating thinking skills activities is the fact that educators appear to be in general agreement that it is possible to increase students' creative and critical thinking capacities through instruction and practice. Presseisen (1986) asserts the basic premise is that students can learn to think better if schools teach them how to think. Adu-Febiri (2002) agrees that thinking can be learned. The Critical Thinking Strategies Guide encourages teachers to actually teach students how to think rather than provide them with content knowledge alone.
Research indicates that thinking skills instruction makes a positive difference in the achievement levels of students. Studies that reflect achievement over time show that learning gains can be accelerated. These results indicate that the teaching of thinking skills can enhance the academic achievement of participating students (Bass and Perkins, 1984; Bransford, 1986; Freseman, 1990; Kagan, 1988; Matthews, 1989; Nickerson, 1984). Critical thinking is a complex activity and we should not expect one method of instruction to prove sufficient for developing each of its component parts. Carr (1990) acknowledges that while it is possible to teach critical thinking and its components as separate skills, they are developed and used best when learned in connection with content knowledge. To develop competency in critical thinking, students must use these skills across the disciplines or the skills could simply decline and disappear. Teachers should expect students to use these skills in every class and evaluate their skills accordingly. Hummel and Huitt (1994) stated, "What you measure is what you get." The assessment section in the Critical Thinking Strategies Guide suggests varied assessment tools to measure the progress of critical thinking in students.
Students are not likely to develop these complex skills or to improve their critical thinking if educators fail to establish definite expectations and measure those expectations with some type of assessment. Assessments (e.g., tests, demonstrations, exercises, panel discussions) that target higher-level thinking skills could lead teachers to teach content at those levels, and students, according to Redfield and Rousseau (1981), to perform at those levels. Students not only need to know an enormous amount of facts, concepts, and principles, they also must be able to effectively process knowledge in a variety of increasingly complex ways. The Critical Thinking Strategies Guide suggests numerous strategies and activities that engage the learner in processing knowledge at each level of thinking. The questioning stems and strategies in this valuable teacher resource can be used to plan daily instruction as students explore content and gather knowledge; they can be used as periodic checkpoints for understanding; they can be used as a practice review; or they could be used as ongoing assessment tools as teachers gather formative and summative data.
Teachers play a key role in promoting critical thinking between and among students. Questioning stems in the content areas act as communication tools. Four forms of communication are affected in critical thinking: speaking, listening, reading, and writing. The Critical Thinking Strategies Guide contains a wide range of stems and strategies to encourage students to think critically which contributes to their intellectual growth. This educational resource relates to any content that is presented to students and saves teachers activity preparation time. A teacher must examine what he/she fully intends to achieve from the lesson and then select the appropriate critical thinking stem(s) to complement the instructional purpose or the cognitive level of thinking. The questioning stem itself influences the level of thinking or determines the depth of thinking that occurs.
Solving problems in the real world and making worthwhile decisions is valued in our rapidly changing environment today. Paul (1985) points out that “thinking is not driven by answers but by questions.” The driving forces in the thinking process are the questions. When a student needs to think through an idea or issue or to rethink anything, questions must be asked to stimulate thought. When answers are given, sometimes thinking stops completely. When an answer generates another question then thought continues.
Teachers need to ask questions and design learning experiences to turn on students’ intellectual thinking engines. Students can generate questions from teachers’ questions to get their thinking to move forward. Thinking is of no use unless it goes somewhere, and again, the questions asked or the activities selected to engage students in learning determine the direction of their thinking. While students are learning, the teacher could ask questions to draw meaning from the content. The higher-order stems contained in the Critical Thinking Strategies Guide (analysis, synthesis, and evaluation) drive students’ thinking to a deeper level and lead students to deal with complexity, rather than just search through text to find an answer.
Questions lead to understanding. Many students typically have no questions. They might sit in silence with their minds inactive as well. Sometimes the questions students have tend to be shallow and nebulous which might demonstrate that they are not thinking through the content they are expected to be learning. If we, as educators, want students to think, we must stimulate and cultivate thinking with questions (Paul, 1990). By engaging students in a variety of questioning that relates to the idea or content being studied, students develop and apply critical thinking skills. Consequently, by using the analysis, synthesis, and evaluation levels, students are challenged to work at tasks that are more demanding and thought-provoking. These kinds of tasks result in students making real-life connections.
Teachers need to plan for the type of cognitive processing they wish to foster and then design learning environments and experiences accordingly. Studies suggest that the classroom environment can be arranged to be conducive to high-level thinking. The findings include the following: an environment free from threats, multi-level materials, acceptance of diversity, flexible grouping, the teacher as a co-learner, and a nurturing atmosphere. A climate which promotes psychological safety and one in which students respect each other and their ideas appears to be the most beneficial (Klenz, 1987; Marzano, Brandt, Hughes, Jones, Presseisen, Rankin, and Suhor, 1988). Sometimes it is necessary to lecture. Other times, the teacher balances methods of instruction by providing opportunities for the students to take some ownership of their learning. Lovelace (2005) concluded that matching a student's learning style with the instruction can improve academic achievement and student attitudes toward learning. Various learning styles or activities that focus on the strengths of how students best learn need to be addressed in the classroom. The Critical Thinking Strategies Guide offers suggested strategies to establish a thinking-centered environment. In addition, there are stems identified that allow students to demonstrate learning and thinking using visual, auditory, or tactile/kinesthetic modes. The range of activities or tasks runs the gamut from creative opportunities (writing a poem, composing a song, designing an advertisement, constructing a model) to participating in a panel discussion, presenting a speech, conducting a survey, holding an interview, using a graphic organizer, or simply compiling a list.
“Multiple forms of student engagement exist when high-level thinking is fostered. Examples of engagement include: collaborative group activities, problem-solving experiences, open-ended questions that encourage divergent thinking, activities that promote the multiple intelligences and recognize learning styles, and activities in which both genders participate freely. Brain researchers suggest teachers use a variety of higher-order questions in a supportive environment to strengthen the brain” (Cardellichio and Field, 1997). “Meaningful learning requires teachers to change their role from sage to guide, from giver to collaborator, from instructor to instigator” (Ó Murchú, 2003). “Since students learn from thinking about what they are doing, the teacher’s role becomes one who stimulates and supports activities that engage learners in critical thinking” (Bhattacharya, 2002). The Critical Thinking Strategies Guide represents such findings as the above.
All teachers can develop questions and learning activities at various times that span the levels of Bloom’s Taxonomy. The difficult part is to address each level in the same lesson, although it is not necessary to do this in every lesson. The main point is that teachers help students advance beyond simple repetition to self-regulated learning. Students are not empty vessels waiting to be filled with information. The intent of the Critical Thinking Strategies Guide is for students to take an active role in learning as they locate, organize, synthesize, evaluate, and present information, transforming it into knowledge in the process. Students can work independently or collaboratively with classmates to explore a problem. This makes it possible for each student to come to his or her own understanding of a particular topic as he or she constructs knowledge. This type of environment is focused on learning and is more student-centered than the traditional classroom. Strategies in this critical thinking guide assist teachers in developing a climate conducive to critical and creative thinking.
If the classroom becomes more student-centered, then what does this mean for the teacher? Is he or she no longer necessary? The role of the teacher is just as important as it has always been, perhaps more so. With an understanding of learning styles and of Bloom’s Taxonomy, the teacher works with the students. Teachers scaffold learning so that students can assume a more participatory role in their own learning. This means that lessons are in fact more carefully constructed to guide students through the exploration of content using Bloom’s Taxonomy. Attention to Bloom’s Taxonomy does not mean that every class period must be optimally designed to place students in inquiry-based roles. Teaching requires that we constantly assess where students are and how best to address their needs.
“Recognizing that there are different levels of thinking behaviors important to learning, Benjamin Bloom and his colleagues developed Bloom’s Taxonomy, a common structure for categorizing questions and designing instruction. The taxonomy is divided into six levels, from basic factual recall, or Knowledge, to the highest order, Evaluation, which assesses value or asks the teacher or learner to make judgments among ideas. In the 1950s, Bloom found that 95% of the test questions developed to assess student learning required them only to think at the lowest level of learning, the recall of information” (Hobgood, Thibault and Walbert, 2005). Today, a considerable amount of attention is given to students’ abilities to think critically about what they do. Leaders in various businesses, medical fields, and other professions have voiced their concern that schools are not preparing students to be critical thinkers. Having knowledge of the procedure for CPR, how to estimate expenses, or being able to calculate elapsed time is no longer enough.
These skills have little value without the ability to know how, when, and where to apply them. The utilization of the Critical Thinking Strategies Guide provides direction to teachers as they apply the levels of Bloom’s Taxonomy and strengthen the abilities of students to think at higher levels. By using this critical thinking guide as a planning tool for high-quality instruction, the teacher can structure learning experiences to promote complexity of thought as well as teaching students how to learn as opposed to simply what to learn.
The No Child Left Behind Act of 2001 emphasizes the need for evidence-based materials. The Mentoring Minds Product Development team sought to develop a tool that teachers could use to develop students who value knowledge and learning. The development of the Critical Thinking Strategies Guide incorporates evidence-based findings about teaching and learning. Critical and creative questioning stems are identified in the content areas of Mathematics, Reading/Writing, Science, and Social Studies. Such stems guide students in finding solutions and answers rather than simply using memorization. This flip chart provides ideas for establishing a thinking-centered classroom and contains multiple strategies to encourage, develop, extend, and assess critical and creative thinking in students. This critical thinking resource will help foster critical thinking skills that lead to greater comprehension for all students using the original and revised Bloom’s Taxonomy (Anderson et al., 2001).
Mentoring Minds’ Critical Thinking Strategies Guide is based on the six levels of Bloom’s Taxonomy. Studies over the last 40 years have confirmed Bloom’s Taxonomy of the Cognitive Domain as a framework to establish intellectual and educational outcomes. The conclusions reached by researchers substantiate the fact that students achieve more when they manipulate topics at the higher levels of Bloom’s Taxonomy. Our goal at Mentoring Minds is to support educators in their endeavors to help students acquire life-long skills of becoming independent thinkers and problem solvers.
Bibliography for Critical Thinking Strategies Guide
Adu-Febiri, F. (2002). Thinking skills in education: ideal and real academic cultures. CDTL Brief, 5, Singapore: National University of Singapore.
Anderson, L., et al. (2001). A taxonomy for learning, teaching, and assessing – A revision of Bloom’s Taxonomy of educational objectives. New York: Addison Wesley Longman, Inc.
Bass, G., Jr. & Perkins, H. (1984). Teaching critical thinking skills with CAI. Electronic Learning, 14, 32, 34, 96.
Bhattacharya, M. (2002). Creating a meaningful learning environment using ICT. CDTL Brief, 5, Singapore: National University of Singapore. Retrieved March 2007, from http://www.cdtl.nus.edu.sg/brief/v5n3/sec3.htm
Bloom, B., Englehart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive Domain. New York: Longmans Green.
Bransford, J.D., Burns, M., Delclos, V. & Vye, N. (1986). Teaching thinking: evaluating evaluations and broadening the data base. Educational Leadership, 44, 68-70.
Carr, K. (1990). How can we teach critical thinking? ERIC Digest. ERIC NO. : ED326304.
Cardellichio, T. & Field, W. (1997). Seven strategies to enhance neural branching. Educational Leadership, 54, (6).
Clark, D., & Uhry, J. (1995). Dyslexia: Theory and Practice of Remedial Instruction (2nd ed.). Baltimore: York.
Dunn, R. & Dunn, K. (1978). Teaching students through their individual learning styles: A practical approach. Englewood Cliffs, NJ: Prentice Hall.
Education Queensland. (2002). What is higher-order thinking? A guide to Productive Pedagogies: Classroom reflection manual. Queensland: Department of Education.
Farkas, R.D. (2003). Effects of traditional versus learning-styles instructional methods on middle school students. Journal of Educational Research, 97, 43-81.
Freseman, R. (1990). Improving Higher Order Thinking of Middle School Geography Students By Teaching Skills Directly. Fort Lauderdale, FL: Nova University.
Gardner, H. (1983). Frames of Mind: The Theory of Multiple Intelligences. New York: NY, BasicBooks.
Gough, D. (1991). Thinking about Thinking. Alexandria, VA: National Association of Elementary School Principals.
Hobgood, B., Thibault, M., & Walbert, D. (2005). Kinetic connections: Bloom’s taxonomy in action. University of North Carolina at Chapel Hill: Learn NC.
Huitt, W. (1998). Critical thinking: An overview. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved May 5, 2007, from http://chiron.valdosta.edu/whuitt/col/cogsys/critthnk.html. [Revision of paper presented at the Critical Thinking Conference sponsored by Gordon College, Barnesville, GA, March, 1993.]
Hummel, J., & Huitt, W. (1994). What you measure is what you get. GaASCD Newsletter: The Reporter, 10-11.
Kagan, D. (1988). Evaluating a language arts program designed to teach higher level thinking skills. Reading Improvement (25), 29-33.
Klenz, S. (1987). Creative and Critical Thinking, Saskatchewan Education Understanding the Common Essential Learnings, Regina, SK: Saskatchewan Education.
Lovelace, M. (2005). Meta-analysis of experimental research based on the Dunn and Dunn model. Journal of Educational Research, 98: 176-183.
Maal, N. (2004). Learning via multisensory engagement. Association Management. Washington, D.C.: American Society of Association Executives.
Marzano, R., Brandt, R., Hughes, C., Jones, B., Presseisen, B., Rankin, S. & Suhor, C. (1988). Dimensions of Thinking: A Framework for Curriculum and Instruction. Alexandria, VA: Association for Supervision and Curriculum Development.
Matthews, D. (1989). The effect of a thinking-skills program on the cognitive abilities of middle school students. Clearing House, 62, 202-204.
Nickerson, R. (1984). Research on the Training of Higher Cognitive Learning and Thinking Skills. Final Report # 5560. Cambridge, MA: Bolt, Beranek and Newman, Inc.
Norris, S.P. (1985). Synthesis of research on critical thinking. Educational Leadership, 42, 40-45.
Ó Murchú, D. (2003). Mentoring, Technology and the 21st Century’s New Perspectives, Challenges and Possibilities for Educators. Second Global Conference, Virtual Learning & Higher Education, Oxford, UK.
Paul, R.W. (1985). Bloom’s taxonomy and critical thinking instruction. Educational Leadership, 42, 36-39.
Paul, R. (1990). Critical Thinking: What Every Person Needs to Survive in a Rapidly Changing World. Rohnert Park, CA: Center for Critical Thinking and Moral Critique.
Presseisen, B.Z. (1986). Critical Thinking and Thinking Skills: State of the Art Definitions and Practice in Public Schools. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.
Redfield, D. L. , & Rousseau, E. W. (1981). A meta-analysis of experimental research on teacher questioning behavior. Review of Educational Research, 51, 181-193.
Tama, C. (1989). Critical thinking has a place in every classroom. Journal of Reading, 33, 64-65.
Thomas, G., & Smoot, G. (1994, February/March). Critical thinking: A vital work skill. Trust for Educational Leadership, 23, 34-38.
| http://www.mentoringminds.com/research/critical-thinking-strategies-guide | 13
15 | - slide 1 of 5
Before we look at examples of Scatter Plots, let’s first answer some basic questions.
- What is a Scatter Plot?: A scatter plot is a graphical representation of two variables taken from a data set.
- Why are They Used?: Scatter plots are used to detect correlation between the two variables being examined. This is crucial for determining Cause and Effect relationships, somewhat similar to the use of a Fishbone Diagram.
- Who uses Scatter Plots?: Anyone interested in examining the relationship between two variables to detect correlation. Therefore, a broad array of professionals would use them, including Six-Sigma Professionals, Researchers, Project Quality Management professionals, and Project Managers.
Note: For project management certifications, such as PMP, it is critical for you to know how to interpret the correlation of Scatter Plots. This skill is a part of the Project Quality Management knowledge area.
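Before moving on, here is a minimal sketch of how such a plot might be produced programmatically rather than in Excel. The language (Python), the plotting library (matplotlib), and the paired data are all assumptions chosen for illustration, not part of the original article; the data simply echoes the article's later testing-versus-defects example.
```python
import matplotlib.pyplot as plt

# Hypothetical paired observations of two variables,
# e.g. testing effort vs. number of defects found.
hours_of_testing = [1, 2, 2, 3, 4, 5, 5, 6, 7, 8]
defects_found = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]

plt.scatter(hours_of_testing, defects_found)  # one dot per (x, y) pair
plt.xlabel("Hours of testing")
plt.ylabel("Defects found")
plt.title("Scatter plot of two variables")
plt.show()
```
Each point is one paired observation; the overall pattern formed by the points is what the following slides interpret.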
Now that you know What, Why and Who, let’s look at the types of Scatter Plots.
- slide 2 of 5
Zero Degrees of Correlation in Scatter Plots
The look of a scatter plot is dictated by the degree of correlation between the two variables. For example, in the diagram shown below, there is no correlation. This is evident from the fact that the data points of the variables are spread across the entire scatter chart. Hence, you cannot draw a straight line through the scatter plot shown below.
The Scatter Plot (Scatter Diagram) shown above could be of project team motivation and the car driven by team members. There is no correlation, hence the data points are scattered. Or consider the cost performance index and giving project team members training in mathematics: the training will not improve the project's cost performance index, so again no correlation would appear.
- slide 3 of 5
Low Degrees of Correlation in Scatter Plot
In the diagram shown below, the data points form some sort of line. Therefore, there is a vague correlation between the two variables. Other variables may contribute to the effect.
The Scatter Plot (Scatter Diagram) shown above could be of project team motivation and compensation. Project team motivation depends on several other factors. For some project team members, compensation may be the only motivating factor, while for others, training, management style, and even the project management methodology probably play a role. In this case, it is highly improbable that an increase in compensation alone would yield significantly higher project team motivation.
- slide 4 of 5
High Degrees of Correlation in Scatter Plot
In the diagram shown below, the data points are grouped more closely together. The line is more evident here. Therefore, there is a strong correlation between the two variables.
The Scatter Plot (Scatter Diagram) shown above could be that of two related variables, such as the probability of getting a contract and the skill-set within the project team, i.e. if the contract requires Prince2 certified professionals, then getting such professionals in the team is probably going to help you get the contract. Another example could be the relationship between managing project risks and delivering on-time, on-budget.
In this article, I’ve been using the term “probable” to describe the correlation. When all data points lie on a straight line, the correlation between the two variables is not probable, but certain!
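The visual degrees of correlation in slides 2 through 4 can also be quantified with Pearson's correlation coefficient r, which ranges from -1 to +1. The sketch below is a minimal illustration in Python with NumPy, using synthetic data invented for the example rather than real project figures: it builds three data sets (unrelated, weakly related, and strongly related) and prints r for each. An r near 0 corresponds to the shapeless cloud of slide 2, while an r near +1 or -1 corresponds to points hugging a straight line as in slide 4.
```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
x = rng.uniform(0, 10, n)                 # first variable

y_none = rng.uniform(0, 10, n)            # no relationship to x at all
y_low = 0.3 * x + rng.normal(0, 3, n)     # weak trend buried in noise
y_high = 2.0 * x + rng.normal(0, 1, n)    # strong trend, little noise

for label, y in [("none", y_none), ("low", y_low), ("high", y_high)]:
    r = np.corrcoef(x, y)[0, 1]           # Pearson's correlation coefficient
    print(f"{label:>4} correlation: r = {r:+.2f}")
```
Plotting each pairing as a scatter chart would reproduce the three pictures described above; only when every point falls exactly on a line does r reach +1 or -1, matching the "certain" case mentioned in the previous paragraph.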
- slide 5 of 5
Type of Correlation in Scatter Plots
Scatter plots can have either a positive or a negative slant, as shown below. A positive slant indicates a positive correlation. For example, the more a person studies, the higher the score on an exam; or, the more testing you do, the more defects you will probably find. For a negative slant, an example could be the decrease in project profits as the cost of project resources increases.
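To make the different degrees of correlation concrete, here is a small illustrative sketch (not part of the original slides; the variable names and noise levels are arbitrary choices) that builds three synthetic data sets and computes the Pearson correlation coefficient for each using Python and NumPy:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.uniform(0, 10, size=200)           # e.g. "hours of testing"

y_none = rng.uniform(0, 10, size=200)      # no relationship at all
y_low  = 0.3 * x + rng.normal(0, 3.0, size=200)   # weak trend, lots of noise
y_high = 2.0 * x + rng.normal(0, 0.5, size=200)   # strong trend, little noise

for label, y in [("none", y_none), ("low", y_low), ("high", y_high)]:
    r = np.corrcoef(x, y)[0, 1]            # Pearson correlation coefficient
    print(f"{label:>4} correlation: r = {r:+.2f}")
```

A coefficient near 0 corresponds to the scattered cloud in slide 2, while values approaching +1 or -1 correspond to the tightly grouped positive or negative slants described above.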
To learn to create a Scatter Plot in Excel, read Microsoft Excel 2007: Create a Scatter Plot.
In linguistics, an argument is an expression that helps complete the meaning of a predicate. Most predicates take one, two, or three arguments. A predicate and its arguments form a predicate-argument structure. The discussion of predicates and arguments is associated most with (content) verbs and noun phrases (NPs), although other syntactic categories can also be construed as predicates and as arguments. Arguments must be distinguished from adjuncts. While a predicate needs its arguments to complete its meaning, the adjuncts that appear with a predicate are optional; they are not necessary to complete the meaning of the predicate. Most theories of syntax and semantics acknowledge arguments and adjuncts, although the terminology varies, and the distinction certainly exists in all languages. In syntax, the terms argument and complement overlap in meaning and use to a large extent. Dependency grammars sometimes call arguments actants, following Tesnière (1959).
The area of grammar that explores the nature of predicates, their arguments, and adjuncts is called valency theory. Predicates have a valence; they determine the number and type of arguments that can or must appear in their environment. The valence of predicates is also investigated in terms of subcategorization.
Arguments and adjuncts
The basic analysis of the syntax and semantics of clauses relies heavily on the distinction between arguments and adjuncts. The clause predicate, which is often a content verb, demands certain arguments. That is, the arguments are necessary in order to complete the meaning of the verb. The adjuncts that appear, in contrast, are not necessary in this sense. The subject phrase and object phrase are the two most frequently occurring arguments of verbal predicates. For instance:
- Jill likes Jack.
- Sam fried the meat.
- The old man helped the young man.
Each of these sentences contains two arguments (in bold), the first noun (phrase) being the subject argument, and the second the object argument. Jill, for example, is the subject argument of the predicate likes, and Jack is its object argument. Verbal predicates that demand just a subject argument (e.g. sleep, work, relax) are intransitive; verbal predicates that demand an object argument as well (e.g. like, fry, help) are transitive; and verbal predicates that demand two object arguments (e.g. give, loan, send) are ditransitive.
When additional information is added to our three example sentences, one is dealing with adjuncts, e.g.
- Jill really likes Jack.
- Jill likes Jack most of the time.
- Jill likes Jack when the sun shines.
- Jill likes Jack because he's friendly.
The added phrases (in bold) are adjuncts; they provide additional information that is not centrally necessary to complete the meaning of the predicate likes. One key difference between arguments and adjuncts is that the appearance of a given argument is often obligatory, whereas adjuncts appear optionally. While typical verb arguments are subject or object nouns or noun phrases as in the examples above, they can also be prepositional phrases (PPs) (or even other categories). The PPs in bold in the following sentences are arguments:
- Sam put the pen on the chair.
- Larry does not put up with that.
- Bill is getting on my case.
We know that these PPs are (or contain) arguments because when we attempt to omit them, the result is unacceptable:
- *Sam put the pen.
- *Larry does not put up.
- *Bill is getting.
Subject and object arguments are known as core arguments; core arguments can be suppressed, added, or exchanged in different ways, using voice operations like passivization, antipassivization, application, incorporation, etc. Prepositional arguments, which are also called oblique arguments, however, do not tend to undergo the same processes.
Syntactic vs. semantic arguments
An important distinction acknowledges both syntactic and semantic arguments. Content verbs determine the number and type of syntactic arguments that can or must appear in their environment; they impose specific syntactic functions (e.g. subject, object, oblique, specific preposition, possessor, etc.) onto their arguments. These syntactic functions will vary as the form of the predicate varies (e.g. active verb, passive participle, gerund, nominal, etc.). In languages that have morphological case, the arguments of a predicate must appear with the correct case markings (e.g. nominative, accusative, dative, genitive, etc.) imposed on them by their predicate. The semantic arguments of the predicate, in contrast, remain consistent, e.g.
- Jack is liked by Jill.
- Jill's liking Jack
- Jack's being liked by Jill
- the liking of Jack by Jill
- Jill's like for Jack
The predicate 'like' appears in various forms in these examples, which means that the syntactic functions of the arguments associated with Jack and Jill vary. The object of the active sentence, for instance, becomes the subject of the passive sentence. Despite this variation in syntactic functions, the arguments remain semantically consistent. In each case, Jill is the experiencer (= the one doing the liking) and Jack is the one being experienced (= the one being liked). In other words, the syntactic arguments are subject to syntactic variation in terms of syntactic functions, whereas the thematic roles of the arguments of the given predicate remain consistent as the form of that predicate changes.
The syntactic arguments of a given verb can also vary across languages. For example, the verb put in English requires three syntactic arguments: subject, object, locative (e. g. He put the book into the box). These syntactic arguments correspond to the three semantic arguments agent, theme, and goal. The Japanese verb oku 'put', in contrast, has the same three semantic arguments, but the syntactic arguments differ, since Japanese does not require three syntactic arguments, so it is correct to say Kare ga hon o oita ("He put the book"). The equivalent sentence in English is ungrammatical without the required locative argument, as the examples involving put above demonstrate.
Distinguishing between arguments and adjuncts
Arguments vs. adjuncts
A large body of literature has been devoted to distinguishing arguments from adjuncts. Numerous syntactic tests have been devised for this purpose. One such test is the relative clause diagnostic. If the test constituent can appear after the combination which occurred/happened in a relative clause, it is an adjunct, not an argument, e.g.
- Bill left on Tuesday. → Bill left, which happened on Tuesday. - on Tuesday is an adjunct.
- Susan stopped due to the weather. → Susan stopped, which occurred due to the weather. - due to the weather is an adjunct.
- Fred tried to say something twice. → Fred tried to say something, which occurred twice. - twice is an adjunct.
The same diagnostic results in unacceptable relative clauses (and sentences) when the test constituent is an argument, e.g.
- Bill left home. → *Bill left, which happened home. - home is an argument.
- Susan stopped her objections. → *Susan stopped, which occurred her objections. - her objections is an argument.
- Fred tried to say something. → *Fred tried to say, which happened something. - something is an argument.
This test succeeds at identifying prepositional arguments as well:
- We are waiting for Susan. → *We are waiting, which is happening for Susan. - for Susan is an argument.
- Tom put the knife in the drawer. → *Tom put the knife, which occurred in the drawer. - in the drawer is an argument.
- We laughed at you. → *We laughed, which occurred at you. - at you is an argument.
The utility of the relative clause test is, however, limited. It incorrectly suggests, for instance, that modal adverbs (e.g. probably, certainly, maybe) and manner expressions (e.g. quickly, carefully, totally) are arguments. If a constituent passes the relative clause test, however, one can be sure that it is NOT an argument.
Obligatory vs. optional arguments
A further division blurs the line between arguments and adjuncts. Many arguments behave like adjuncts with respect to another diagnostic, the omission diagnostic. Adjuncts can always be omitted from the phrase, clause, or sentence in which they appear without rendering the resulting expression unacceptable. Some arguments (obligatory ones), in contrast, cannot be omitted. There are many other arguments, however, that are identified as arguments by the relative clause diagnostic but that can nevertheless be omitted, e.g.
- a. She cleaned the kitchen.
- b. She cleaned. - the kitchen is an optional argument.
- a. We are waiting for Larry.
- b. We are waiting. - for Larry is an optional argument.
- a. Susan was working on the model.
- b. Susan was working. - on the model is an optional argument.
The relative clause diagnostic would identify the constituents in bold as arguments. The omission diagnostic here, however, demonstrates that they are not obligatory arguments. They are, rather, optional. The insight, then, is that a three-way division is needed. On the one hand, one distinguishes between arguments and adjuncts, and on the other hand, one allows for a further division between obligatory and optional arguments.
Arguments and adjuncts in noun phrases
Most work on the distinction between arguments and adjuncts has been conducted at the clause level and has focused on arguments and adjuncts to verbal predicates. The distinction is crucial for the analysis of noun phrases as well, however. If it is altered somewhat, the relative clause diagnostic can also be used to distinguish arguments from adjuncts in noun phrases, e.g.
- Bill's bold reading of the poem after lunch
- *bold reading of the poem after lunch that was Bill's - Bill's is an argument.
- Bill's reading of the poem after lunch that was bold - bold is an adjunct
- *Bill's bold reading after lunch that was of the poem - of the poem is an argument
- Bill's bold reading of the poem that was after lunch - after lunch is an adjunct
The diagnostic identifies Bill's and of the poem as arguments, and bold and after lunch as adjuncts.
Representing arguments and adjuncts
The distinction between arguments and adjuncts is often indicated in the tree structures used to represent syntactic structure. In phrase structure grammars, an adjunct is "adjoined" to a projection of its head predicate in such a manner that distinguishes it from the arguments of that predicate. The distinction is quite visible in theories that employ the X-bar schema, e.g.
The complement argument appears as a sister of the head X, and the specifier argument appears as a daughter of XP. The optional adjuncts appear in one of a number of positions adjoined to a bar-projection of X or to XP.
Theories of syntax that acknowledge n-ary branching structures and hence construe syntactic structure as being flatter than the layered structures associated with the X-bar schema must employ some other means to distinguish between arguments and adjuncts. In this regard, some dependency grammars employ an arrow convention. Arguments receive a "normal" dependency edge, whereas adjuncts receive an arrow edge. In the following tree, an arrow points away from an adjunct toward the governor of that adjunct:
The arrow edges in the tree identify four constituents (= complete subtrees) as adjuncts: At one time, actually, in congress, and for fun. The normal dependency edges (= non-arrows) identify the other constituents as arguments of their heads. Thus Sam, a duck, and to his representative in congress are identified as arguments of the verbal predicate wanted to send.
The distinction between arguments and adjuncts is crucial to most theories of syntax and grammar. Arguments behave differently from adjuncts in numerous ways. Theories of binding, coordination, discontinuities, ellipsis, etc. must acknowledge and build on the distinction. When one examines these areas of syntax, what one finds is that arguments consistently behave differently from adjuncts and that without the distinction, our ability to investigate and understand these phenomena would be seriously hindered.
See also
- Dependency grammar
- Phrase structure grammar
- Predicate (grammar)
- Subcategorization frame
- Theta criterion
- Theta role
- Concerning the completion of a predicate's meaning via its arguments, see for instance Kroeger (2004:9ff.).
- Geeraerts, Dirk; Cuyckens, Hubert (2007). The Oxford Handbook of Cognitive Linguistics. Oxford University Press US. ISBN 0-19-514378-7.
- For instance, see the essays on valency theory in Ágel et al. (2003/6).
- See Eroms (2000) and Osborne and Groß (2012) in this regard.
- Ágel, V., L. Eichinger, H.-W. Eroms, P. Hellwig, H. Heringer, and H. Lobin (eds.) 2003/6. Dependency and valency: An international handbook of contemporary research. Berlin: Walter de Gruyter.
- Eroms, H.-W. 2000. Syntax der deutschen Sprache. Berlin: de Gruyter.
- Kroeger, P. 2004. Analyzing syntax: A lexical-functional approach. Cambridge, UK: Cambridge University Press.
- Osborne, T. and T. Groß 2012. Constructions are catenae: Construction Grammar meets dependency grammar. Cognitive Linguistics 23, 1, 163-214.
- Tesnière, L. 1959. Éléments de syntaxe structurale. Paris: Klincksieck.
Lossless data compression is a class of data compression algorithms that allows the exact original data to be reconstructed from the compressed data. The term lossless is in contrast to lossy data compression, which only allows constructing an approximation of the original data, in exchange for better compression rates.
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the Unix tool gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by the LAME MP3 encoder and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data could be deleterious. Typical examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Lossless compression techniques
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by DEFLATE) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
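As a concrete illustration of the entropy-coding step described above, the following minimal sketch builds a Huffman prefix code from symbol frequencies. It is an illustrative toy, not taken from any particular codec: real encoders such as DEFLATE additionally use canonical code ordering, length limits, and bit-level output.

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a prefix code (symbol -> bit string) from the symbol frequencies in data."""
    freq = Counter(data)
    # Heap entries are (frequency, tie-breaker, tree); a tree is either a symbol
    # (a leaf) or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate input with one distinct symbol
        return {heap[0][2]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # the two least frequent subtrees ...
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))  # ... are merged
        counter += 1
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2])
    return codes

table = huffman_code(b"abracadabra")
print({chr(sym): code for sym, code in table.items()})
```

Frequent symbols receive short codes and rare symbols long ones, which is exactly the "probable data produce shorter output" behaviour described above.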
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.
Text and image
Statistical modeling algorithms for text (or text-like binary data such as executables) include:
- Context tree weighting method (CTW)
- Burrows–Wheeler transform (block sorting preprocessing that makes compression more efficient)
- LZ77 (used by DEFLATE)
Techniques that take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones. Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values. This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
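A minimal sketch of the left-neighbor differencing just described (purely illustrative; the sample row below is made up) shows how the transform concentrates values near zero while remaining perfectly invertible:

```python
def delta_encode(row):
    """Replace every sample but the first with the difference to its left neighbour."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def delta_decode(deltas):
    """Invert the transform with a running sum."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 102, 103, 103, 104, 106, 105, 105]   # a smooth image row or audio snippet
deltas = delta_encode(row)
print(deltas)                        # [100, 2, 1, 0, 1, 2, -1, 0]: small, peaked values
assert delta_decode(deltas) == row   # lossless: the original row is recovered exactly
```

The small, frequently repeated difference values are exactly what the subsequent entropy coder compresses well.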
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called discrete wavelet transform. JPEG2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but hopefully the distribution of values is more peaked.
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the USA and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialised area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using autoregressive models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of successive images within a sequence). This is called delta encoding (from the Greek letter Δ, which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
Lossless compression methods
By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
General purpose
- Run-length encoding (RLE) – a simple scheme that provides good compression of data containing lots of runs of the same value.
- Lempel-Ziv 1978 (LZ78), Lempel-Ziv-Welch (LZW) – used by GIF images and compress among many other applications
- DEFLATE – used by gzip, ZIP (since version 2.0), and as part of the compression process of Portable Network Graphics (PNG), Point-to-Point Protocol (PPP), HTTP, SSH
- bzip2 – using the Burrows–Wheeler transform, this provides slower but higher compression than DEFLATE
- Lempel–Ziv–Markov chain algorithm (LZMA) – used by 7zip, xz, and other programs; higher compression than bzip2 as well as much faster decompression.
- Lempel–Ziv–Oberhumer (LZO) – designed for compression/decompression speed at the expense of compression ratios
- Statistical Lempel Ziv – a combination of statistical method and dictionary-based method; better compression ratio than using single method.
- Apple Lossless (ALAC - Apple Lossless Audio Codec)
- Adaptive Transform Acoustic Coding (ATRAC)
- apt-X Lossless
- Audio Lossless Coding (also known as MPEG-4 ALS)
- Direct Stream Transfer (DST)
- Dolby TrueHD
- DTS-HD Master Audio
- Free Lossless Audio Codec (FLAC)
- Meridian Lossless Packing (MLP)
- Monkey's Audio (Monkey's Audio APE)
- MPEG-4 SLS (also known as HD-AAC)
- Original Sound Quality (OSQ)
- RealPlayer (RealAudio Lossless)
- Shorten (SHN)
- TTA (True Audio Lossless)
- WavPack (WavPack lossless)
- WMA Lossless (Windows Media Lossless)
- ILBM – (lossless RLE compression of Amiga IFF images)
- JBIG2 – (lossless or lossy compression of B&W images)
- WebP – (high-density lossless or lossy compression of RGB and RGBA images)
- JPEG-LS – (lossless/near-lossless compression standard)
- JPEG 2000 – (includes lossless compression method, as proven by Sunil Kumar, Prof San Diego State University)
- JPEG XR – formerly WMPhoto and HD Photo, includes a lossless compression method
- PGF – Progressive Graphics File (lossless or lossy compression)
- PNG – Portable Network Graphics
- TIFF – Tagged Image File Format
- Gifsicle (GPL) – Optimize gif files
- Jpegoptim (GPL) – Optimize jpeg files
3D Graphics
- OpenCTM – Lossless compression of 3D triangle meshes
See this list of lossless video codecs.
Cryptosystems often compress data before encryption for added security; compression prior to encryption helps remove redundancies and patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms introduce predictable patterns (such as headers, wrappers, and tables) into the compressed data that may actually make cryptanalysis easier. One possible solution to this problem is to use Bijective Compression that has no headers or additional information. Also using bijective whole file transforms such as bijective BWT greatly increase the Unicity Distance. Therefore, cryptosystems often incorporate specialized compression algorithms specific to the cryptosystem—or at least demonstrated or widely held to be cryptographically secure—rather than standard compression algorithms that are efficient but provide potential opportunities for cryptanalysis.
Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression and in much faster time than the leading general-purpose compression utilities.
Lossless compression benchmarks
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the compression ratio, so winners in these benchmark may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
The benchmarks listed in the 5th edition of the Handbook of Data Compression (Springer, 2009) are:
- The Maximum Compression benchmark, started in 2003 and frequently updated, includes over 150 programs. Maintained by Werner Bergmans, it tests on a variety of data sets, including text, images, and executable code. Two types of results are reported: single file compression (SFC) and multiple file compression (MFC). Not surprisingly, context mixing programs often win here; programs from the PAQ series and WinRK often are in the top. The site also has a list of pointers to other benchmarks.
- UCLC (the ultimate command-line compressors) benchmark by Johan de Bock is another actively maintained benchmark including over 100 programs. The winners in most tests usually are PAQ programs and WinRK, with the exception of lossless audio encoding and grayscale image compression where some specialized algorithms shine.
- Squeeze Chart by Stephan Busch is another frequently updated site.
- The EmilCont benchmarks by Berto Destasio are somewhat outdated having been most recently updated in 2004. A distinctive feature is that the data set is not public, to prevent optimizations targeting it specifically. Nevertheless, the best ratio winners are again the PAQ family, SLIM and WinRK.
- The Archive Comparison Test (ACT) by Jeff Gilchrist included 162 DOS/Windows and 8 Macintosh lossless compression programs, but it was last updated in 2002.
- The Art Of Lossless Data Compression by Alexander Ratushnyak provides a similar test performed in 2003.
- The Calgary Corpus dating back to 1987 is no longer widely used due to its small size, although Leonid A. Broukhis still maintains The Calgary Corpus Compression Challenge, which started in 1996.
- The Large Text Compression Benchmark and the similar Hutter Prize both use a trimmed Wikipedia XML UTF-8 data set.
- The Generic Compression Benchmark, maintained by Mahoney himself, tests compression on random data.
- Sami Runsas (author of NanoZip) maintains Compression Ratings, a benchmark similar to Maximum Compression multiple file test, but with minimum speed requirements. It also offers a calculator that allows the user to weight the importance of speed and compression ratio. The top programs here are fairly different due to speed requirement. In January 2010, the top programs were NanoZip followed by FreeArc, CCM, flashzip, and 7-Zip.
- The Monster of Compression benchmark by N. F. Antonio tests compression on 1Gb of public data with a 40 minute time limit. As of Dec. 20, 2009 the top ranked archiver is NanoZip 0.07a and the top ranked single file compressor is ccmx 1.30c, both context mixing.
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument, as follows:
- Assume that each file is represented as a string of bits of some arbitrary length.
- Suppose that there is a compression algorithm that transforms every file into an output file that is no longer than the original file, and that at least one file will be compressed into an output file that is shorter than the original file.
- Let M be the least number such that there is a file F with length M bits that compresses to something shorter. Let N be the length (in bits) of the compressed version of F.
- Because N<M, every file of length N keeps its size during compression. There are 2^N such files. Together with F, this makes 2^N + 1 files that all compress into one of the 2^N files of length N.
- But 2^N is smaller than 2^N + 1, so by the pigeonhole principle there must be some file of length N that is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
- We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
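The counting argument can be checked by brute force for very small lengths. The sketch below (added purely as an illustration) counts all bit strings of length at most 3 against all possible outputs of length at most 2, and shows the inevitable collision for one naive "compressor" that simply drops the last bit:

```python
from itertools import product

def all_strings(max_len):
    """Every bit string of length 0..max_len, the empty string included."""
    return ["".join(bits) for n in range(max_len + 1) for bits in product("01", repeat=n)]

inputs = all_strings(3)    # 1 + 2 + 4 + 8 = 15 strings
outputs = all_strings(2)   # 1 + 2 + 4     =  7 strings
print(len(inputs), "possible inputs, but only", len(outputs), "possible outputs")

compress = lambda s: s[:-1]          # naive "compressor": drop the last bit
seen = {}
for s in inputs:
    c = compress(s)
    if c in seen:
        print(f"collision: {seen[c]!r} and {s!r} both map to {c!r}")
        break
    seen[c] = s
```

Because there are more inputs than outputs, every conceivable mapping collides somewhere, so at least one of the colliding inputs cannot be reconstructed, which is exactly the contradiction derived above.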
Any lossless compression algorithm that makes some files shorter must necessarily make some files longer, but it is not necessary that those files become very much longer. Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, DEFLATE compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm: indeed, this result is used to define the concept of randomness in algorithmic complexity theory.
It's provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 0. Allegedly "perfect" compression algorithms are usually called derisively "magic" compression algorithms.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it's possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
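These limits are easy to observe with an off-the-shelf codec. The following short sketch (illustrative only) uses Python's standard zlib module to compress highly redundant text and then an equal amount of random bytes; the redundant input shrinks dramatically, while the random input typically comes back slightly larger than it went in because of the small container overhead mentioned earlier:

```python
import os
import zlib

redundant = b"the quick brown fox " * 3276   # about 64 KiB of repetitive text
random_data = os.urandom(len(redundant))     # the same amount of incompressible bytes

for label, payload in [("redundant text", redundant), ("random bytes", random_data)]:
    packed = zlib.compress(payload, 9)       # 9 = maximum compression effort
    print(f"{label:>15}: {len(payload):6d} -> {len(packed):6d} bytes")
```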
Mathematical background
Any compression algorithm can be viewed as a function that maps sequences of units (normally octets) into other sequences of the same units. Compression is successful if the resulting sequence is shorter than the original sequence plus the map needed to decompress it. For a compression algorithm to be lossless, there must be a reverse mapping from compressed bit sequences to original bit sequences. That is to say, the compression method must encapsulate a bijection between "plain" and "compressed" bit sequences.
The sequences of length N or less are clearly a strict superset of the sequences of length N − 1 or less. It follows that there are more sequences of length N or less than there are sequences of length N − 1 or less. It therefore follows from the pigeonhole principle that it is not possible to map every sequence of length N or less to a unique sequence of length N − 1 or less. Therefore it is not possible to produce an algorithm that reduces the size of every possible input sequence.
Psychological background
Most everyday files are relatively 'sparse' in an information entropy sense, and thus, most lossless algorithms a layperson is likely to apply on regular files compress them relatively well. This may, through misapplication of intuition, lead some individuals to conclude that a well-designed compression algorithm can compress any input, thus, constituting a magic compression algorithm.
Points of application in real compression theory
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly, include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input in the output, minimising the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
The Million Random Number Challenge
Mark Nelson, frustrated over many cranks trying to claim having invented a magic compression algorithm appearing in comp.compression, has constructed a 415,241 byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute ("decompress") it without error.
The FAQ for the comp.compression newsgroup contains a challenge by Mike Goldman offering $5,000 for a program that can compress random data. Patrick Craig took up the challenge, but rather than compressing the data, he split it up into separate files all of which ended in the number 5, which was not stored as part of the file. Omitting this character allowed the resulting files (plus, in accordance with the rules, the size of the program that reassembled them) to be smaller than the original file. However, no actual compression took place, and the information stored in the names of the files was necessary to reassemble them in the correct order in the original file, and this information was not taken into account in the file size comparison. The files themselves are thus not sufficient to reconstitute the original file; the file names are also necessary. A full history of the event, including discussion on whether or not the challenge was technically met, is on Patrick Craig's web site.
See also
- Audio compression (data)
- Comparison of file archivers
- David A. Huffman
- Entropy (information theory)
- Kolmogorov complexity
- Data compression
- Lossy compression
- Lossless Transform Audio Compression (LTAC)
- List of codecs
- Information theory
- Universal code (data compression)
- Grammar induction
- Unisys | LZW Patent and Software Information
- Chanda, Elhaik, and Bader (2012). "HapZipper: sharing HapMap populations just got easier". Nucleic Acids Res: 1–7. doi:10.1093/nar/gks709. PMID 22844100.
- David Salomon, Giovanni Motta, (with contributions by David Bryant), Handbook of Data Compression, 5th edition, Springer, 2009, ISBN 1-84882-902-7, pp. 16–18.
- Lossless Data Compression Benchmarks (links and spreadsheets)
- http://nishi.dreamhosters.com/u/dce2010-02-26.pdf, pp. 3–5
- Visualization of compression ratio and time
- comp.compression FAQ list entry #9: Compression of random data (WEB, Gilbert and others)
- ZIP file format specification by PKWARE, chapter V, section J
- Nelson, Mark (2006-06-20). "The Million Random Digit Challenge Revisited".
- Craig, Patrick. "The $5000 Compression Challenge". Retrieved 2009-06-08.
- Comparison of Lossless Audio Compressors at Hydrogenaudio Wiki
- Comparing lossless and lossy audio formats for music archiving
- — data-compression.com's overview of data compression and its fundamentals limitations
- — comp.compression's FAQ item 73, What is the theoretical compression limit?
- — c10n.info's overview of US patent #7,096,360, "[a]n "Frequency-Time Based Data Compression Method" supporting the compression, encryption, decompression, and decryption and persistence of many binary digits through frequencies where each frequency represents many bits."
- "LZF compression format" | http://en.wikipedia.org/wiki/Lossless | 13 |
All economic questions and problems arise from scarcity. Economics assumes people do not have the resources to satisfy all of their wants. Therefore, we must make choices about how to allocate those resources. We make decisions about how to spend our money and use our time. This EconomicsMinute will focus on the central idea of economics: every choice involves a cost.
- Define opportunity cost.
- Determine the opportunity cost and trade-offs for certain decisions.
- Select criteria important for decision making.
1. How many sodas can you buy instead of one movie ticket?
[You can buy five sodas instead of one movie ticket. Students should divide the price of a movie ticket by the price of a soda]
2. How many pieces of gum can you buy instead of one soda?
[You can buy two pieces of gum instead of one soda. Students should divide the price of a soda by the price of a piece of gum.]
3. If you buy four pieces of gum, how many sodas could you have bought?
[If you buy four pieces of gum, you could have bought two sodas. Students should multiply the price of a piece of gum by four to find the total spent, then divide that amount by the price of a soda, which gives two.]
In order to buy a movie ticket, you need to give up a certain amount of gum and soda. If you buy ten pieces of gum, you give up going to the movie or buying soda. Decisions involve trade-offs. When you make a choice, you give up an opportunity to do something else. The highest-valued alternative you give up is the opportunity cost of your decision. Opportunity cost is the highest-valued forgone activity; it is not all the possible things you have given up.
For example, if you go to the movies you have to give up a certain amount of gum and soda. If you are a sodaholic, you have to give up five sodas. If you are a gum fanatic, you surrender ten packs of gum. But the opportunity cost of a movie is not five sodas and ten packs of gum; it is five sodas or ten packs of gum.
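For readers who want the arithmetic spelled out, here is a tiny sketch of the trade-off calculation. The prices are not printed in this excerpt, so the values used below (gum $0.50, soda $1.00, movie ticket $5.00) are simply the ones implied by the worked answers above and serve only as an illustration:

```python
# Prices implied by the worked answers above (assumed for illustration only).
prices = {"gum": 0.50, "soda": 1.00, "movie ticket": 5.00}

def forgone(choice, alternative):
    """How many units of the alternative you give up by buying one unit of the choice."""
    return prices[choice] / prices[alternative]

print(forgone("movie ticket", "soda"))   # 5.0  -> one movie ticket costs five sodas
print(forgone("soda", "gum"))            # 2.0  -> one soda costs two pieces of gum
print(forgone("movie ticket", "gum"))    # 10.0 -> one movie ticket costs ten pieces of gum
```

The opportunity cost of the movie is whichever single forgone alternative you value most, not the sum of all of them.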
Pokemon Guide: Readers can learn about Pokemon cards and their values at this website.
Calvin and Hobbes: This site displays Calvin and Hobbes comic strip cartoons.
Go to the Pokemon Guide and answer the following questions.
1. What is the opportunity cost of an unlimited edition Mewtwo in terms of other Pokemon cards? [The answers will vary. Any combination of Pokemon cards that equals the price of the unlimited edition Mewtwo works as an answer.]
2. Go back to the gum, soda, movie ticket example. What is the opportunity cost of an unlimited edition Mewtwo in terms of movie tickets? [You would have to give up two movie tickets for one unlimited edition Mewtwo.]
All kinds of decisions involve opportunity costs, not just ones about how to spend your money. For example, if you have soccer practice when your favorite television show is on, part of the opportunity cost of soccer practice is missing that television show. When you made the choice to join a soccer team, you traded off watching that television show for becoming a better player.
Go to this Calvin and Hobbes website .
1. In the first panel, what does Calvin want to do with the ball? [Calvin wants to play football with the ball.]
2. Using the ideas of trade-offs and opportunity cost, explain why Calvin gives Hobbes the ball. [Calvin sees a trade-off between losing the game and having an ambulatory adulthood. The opportunity cost of not being able to walk is greater than losing the game. Calvin surrenders the ball to Hobbes.]
All economic questions and problems arise from scarcity. Economics assumes people do not have the resources to satisfy all of their wants. Therefore, we must make choices about how to allocate those resources. We make decisions about how to spend our money and use our time. This EconomicsMinute will focus on the central idea of economics: every choice involves a cost.
Let's say you have five dollars. What would you like to spend it on? There are a million things you would love to spend five bucks on, but let's imagine there are only three things out there you really want to buy: gum, soda, and movie tickets. Look at the price chart below and answer the questions.
- What would you spend your five dollars on?
- What would you be willing to give up?
You won a radio station contest and you are now $300 richer. You can finally start looking for a new stereo. Determine what criteria you think are important for choosing a stereo and identify the trade-offs made when selecting one stereo over another.
“I thought the game was fun and interesting.”
“This is a terrific lesson to use for trade-offs.”
“Thank you for your help. This site is great.”
“I appreciate you giving such great examples and explanations on how to start teaching young kids economics. It's great.”
“I am taking a college course and your explanations really helped me grasp the concept!”
“Thank you. I am trying to teach the concept of opportunity cost to government officials in Indiana particularly with regard to the value of time for a poor person carrying water. It is a very difficult concept to introduce and I believe that ideas from your lesson plans will be helpful.”
“What a nice page! Opportunity cost helped me to deal in my economics subject!”
“I reviewed this lesson for possible use and thought it was great.”
“I like this site. I'm a 5th grader doing this for school.”
“It's my first year doing economics and your explanations are really opening my eyes. Now I can understand economics better!”
“I love this page, I am home schooled and in 5th Grade.”
“This lesson is just what I was looking for! Yesterday, I covered Wants v. Needs and had my students make collages of them. Today, I covered Scarcity and, again, used the students collages. Since many of my students put autos in their collages, I can use this lesson and expand it to include the autos in the collages, once again for my Opportunity Cost lesson tomorrow! THANK YOU!”
“I always have a tough time trying to relate economics to my students. This lesson was perfect to explain opportunity cost and trade off. Using the cartoon really helped my students think outside the box!”
Review from EconEdReviews.org
“This lesson really needs to be aged at grades 3-5. The concepts and presentation of the lesson were very easy for my middle school students. The lesson itself took very little time and the pokeman link has been blocked by my school district. I like the extension activity though because it fit their interests better and you could use it on the computer using the internet or without the computer using Sunday ads.”
In Logic, any categorical statement is termed a Proposition.
A Proposition (or categorical statement) is a statement that asserts that either a part of, or the whole of, one set of objects (the set identified by the subject term in the sentence expressing that statement) either is included in, or is excluded from, another set (the set identified by the predicate term in that sentence).
The standard form of a proposition is:
Quantifier + Subject + Copula + Predicate
Thus, the proposition consists of four parts :
1. Quantifier: The words 'all', 'no' and 'some' are called quantifiers because they specify a quantity. 'All' and 'no' are universal quantifiers because they refer to every object in a certain set, while the quantifier 'some' is a particular quantifier because it refers to at least one existing object in a certain set.
2. Subject (denoted by 'S'): The subject is that about which something is said.
3. Predicate (denoted by 'P'): The predicate is the part of the proposition denoting that which is affirmed or denied about the subject.
4. Copula : The copula is that part of the proposition which denotes the relation between the subject and the predicate.
Four-Fold Classification of Propositions :
A proposition is said to have a universal quantity if it begins with a universal quantifier, and a particular quantity if it begins with a particular quantifier. Besides, propositions which assert something about the inclusion of the whole or a part of one set in the other are said to have affirmative quality, while those which deny the inclusion of the whole or a part of one set in the other are said to have a negative quality. Also, a term is distributed in a proposition if it refers to all members of the set of objects denoted by that term. Otherwise, it is said to be undistributed. Based on the above facts, propositions can be classified into four types :
1. Universal Affirmative Proposition (denoted by A): It distributes only the subject i.e. the predicate is not interchangeable with the subject while maintaining the validity of the proposition.
e.g., All snakes are reptiles. This is proposition A since we cannot say 'All reptiles are snakes'.
2. Universal Negative Proposition (denoted by E): It distributes both the subject and the predicate i.e. an entire class of predicate term is denied to the entire class of the subject term, as in the proposition.
e.g., No boy is intelligent.
3.Particular Affirmative Proposition (denoted by I): It distributes neither the subject nor the predicate.
e.g., Some men are foolish. Here, the subject term 'men' is used not for all but only for some men, and similarly the predicate term 'foolish' is affirmed for only a part of the subject class. So, both are undistributed.
4. Particular Negative Proposition (denoted by O): It distributes only the predicate. e.g., Some animals are not wild. Here, the subject term 'animals' is used only for a part of its class and hence is undistributed, while the predicate term 'wild' is denied in entirety to the subject term and hence is distributed. These facts can be summarized as follows:
| Statement Form | Quantity | Quality | Distributed |
| --- | --- | --- | --- |
| (A): All S is P. | Universal | Affirmative | S only |
| (E): No S is P. | Universal | Negative | Both S and P |
| (I): Some S is P. | Particular | Affirmative | Neither S nor P |
| (O): Some S is not P. | Particular | Negative | P only |
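The four-fold classification lends itself to a small lookup structure. The sketch below (added for illustration; the field names are my own, not part of the original text) encodes the table above so that the quantity, quality, and distributed terms of each proposition type can be read off programmatically:

```python
from collections import namedtuple

PropositionType = namedtuple("PropositionType", ["form", "quantity", "quality", "distributes"])

CLASSIFICATION = {
    "A": PropositionType("All S is P.",      "Universal",  "Affirmative", {"S"}),
    "E": PropositionType("No S is P.",       "Universal",  "Negative",    {"S", "P"}),
    "I": PropositionType("Some S is P.",     "Particular", "Affirmative", set()),
    "O": PropositionType("Some S is not P.", "Particular", "Negative",    {"P"}),
}

for letter, info in CLASSIFICATION.items():
    dist = ", ".join(sorted(info.distributes)) or "neither term"
    print(f"({letter}) {info.form:<18} {info.quantity:<10} {info.quality:<11} distributes: {dist}")
```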
A syllogism is a deduction. It is a kind of logical argument in which one proposition (the conclusion) is inferred from two or more others (the premises). The idea is an invention of Aristotle.
In the Prior Analytics, Aristotle defines the syllogism as "a discourse in which, certain things having been supposed, something different from the things supposed results of necessity because these things are so". (24b18–20)
Each proposition must have some form of the verb 'to be' in it. A categorical syllogism is like a little machine built of three parts: the major premise, the minor premise and the conclusion. Each of these parts is a proposition and, from the first two, the "truth value" of the third part is decided.
- Major premise: All men are mortal.
- Minor premise: All Greeks are men.
- Conclusion: All Greeks are mortal.
Each of the three distinct terms represents a category. In the above example, "men," "mortal," and "Greeks." "Mortal" is the major term; "Greeks", the minor term. The premises also have one term in common with each other, which is known as the middle term; in this example, "man." Both of the premises are universal, as is the conclusion.
- Major premise: All mortals die.
- Minor premise: Some men are mortals.
- Conclusion: Some men die.
Here, the major term is "die", the minor term is "men," and the middle term is "mortals". The major premise is universal; the minor premise and the conclusion are particular. Aristotle studied different syllogisms and identified as valid those whose conclusion must be true whenever both premises are true. The examples above are valid syllogisms.
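One way to see why such syllogisms are valid is to model each category as a set and check that the premises force the conclusion. The following sketch (purely illustrative; the little universe of names is made up) tests the first example:

```python
# A tiny made-up universe of individuals.
mortals = {"Socrates", "Plato", "Bucephalus"}   # everything in this universe that is mortal
men     = {"Socrates", "Plato"}                 # the men
greeks  = {"Socrates", "Plato"}                 # the Greeks

assert men <= mortals      # major premise: All men are mortal.
assert greeks <= men       # minor premise: All Greeks are men.
assert greeks <= mortals   # conclusion: All Greeks are mortal.
print("In this universe both premises hold, and so does the conclusion.")
```

Because the subset relation is transitive, the conclusion holds in every universe in which both premises hold; that is what it means for the syllogism to be valid.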
A sorites is a form of argument in which a series of incomplete syllogisms is so arranged that the predicate of each premise forms the subject of the next until the subject of the first is joined with the predicate of the last in the conclusion. For example, if one argues that a given number of grains of sand does not make a heap and that an additional grain does not either, then to conclude that no additional amount of sand will make a heap is to construct a sorites argument.
Logic today
The syllogism was replaced by first-order logic after the work of Gottlob Frege, published in 1879. This logic is suitable for mathematics, computers, linguistics and other subjects because it uses quantified variables rather than whole sentences as its basic units.
- Greek: συλλογισμός – syllogismos – "conclusion," "inference"
- Frede, Michael 1975. Stoic vs. Peripatetic syllogistic. Archive for the History of Philosophy 56, 99-124.
- Jaeger, Werner 1934. Aristotle: fundamentals of the history of his development. Oxford University Press. p370
- Frege, Gottlob 1879. Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in Jean Van Heijenoort, ed., 1967. From Frege to Gödel: a source book in mathematical logic, 1879–1931. Harvard University Press.
Posted by roba on Saturday, November 14, 2009 at 4:58pm.
The recursive algorithm of the 4th Section of the chapter "Computational Geometry" employs a trick of presorting, in which we maintain two arrays X and Y of the input points P sorted on coordinate x and y, respectively. The algorithm starts by sorting all the input points in Q in time O(n log n). Assuming that a subset P ⊆ Q of input points together with arrays X and Y are given, set P is partitioned into PL and PR and the corresponding arrays XL, XR, YL, and YR are all obtained in time O(|P|). To see this, observe that the median xm of the x-coordinates of the points in P is the x-coordinate of the point in the middle of X. To obtain YL and YR, scan array Y and move a point (x, y) with x < xm to YL and a point (x, y) with x ≥ xm to YR.
Consider a modification of the recursive algorithm in which presorting is not applied. Instead, we sort in each recursive call, applying an algorithm that sorts by comparisons. Each time a given subset P needs to be partitioned into PL and PR, the points in P are sorted on the x-coordinate. In the "combine" part, the set of points in the vertical strip of width 2δ is sorted on the y-coordinates.
Find a tight asymptotic estimate on the running time of this algorithm as a function of the size n of the input set Q.
Hints: Find a recurrence for the running time. It is different from the recurrence T(n) = 2T(n/2) + O(n) describing the version with presorting. Solve the recurrence. To this end, you might apply the approach used to prove the "master theorem".
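For what it is worth, one plausible reading of the hint (treat this as a sketch to verify, not a definitive answer) is that sorting by comparisons at every level of the recursion replaces the O(n) combine cost with an O(n log n) one:

```latex
% Sorting in each recursive call instead of presorting once:
T(n) = 2\,T(n/2) + \Theta(n \log n)

% Unrolling over the \log_2 n levels, level i contributes about n \log(n/2^i) work:
T(n) = \Theta\!\left( \sum_{i=0}^{\log_2 n} n \log \frac{n}{2^{i}} \right) = \Theta\!\left( n \log^{2} n \right)
```

So, under this reading, dropping the presorting trick costs roughly an extra logarithmic factor compared with the O(n log n) presorted version.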
Diameter of a convex polygon.
A convex polygon P is given, represented as a sequence of consecutive points (p0, p1, ..., pn-1), in the sense that the polygon P consists of the segments (pi, pi+1), where the addition of subscripts is modulo n.
1) Give an efficient algorithm to find a pair of points in P at the maximum distance from each other.
A readable description of the underlying idea of the algorithm in words, possibly illustrated with simple drawings, will be better than a tight pseudocode.
2) Argue why the algorithm is correct.
The correctness argument should rely on the convexity of the polygon. Point out where in your correctness argument you resort to convexity.
3) Estimate the running time of the algorithm.
The goal is to design an algorithm with asymptotically optimal running time.
We've been talking about various ways of helping writers to structure their arguments so that they are clear, persuasive, and logical. What we haven't yet discussed are ways that you might use instruction in logic in the tutoring session to help a writer analyze her ideas.
What follows are some steps you might take in analyzing the logic of a paper:
1) Ask the writer if she's familiar with syllogisms. If not, explain to her that a syllogism consists of two premises - a major premise and a minor premise - and a conclusion. Here is one example:
- All men are mortal.
- Socrates is a man.
- Socrates is mortal.
Here is another:
- All women are brilliant.
- I am a woman.
- I am brilliant.
You'll want to take the opportunity to explain to the writer that an argument can be logical without being true (as in the second example).
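For tutors who like to see the machinery laid bare, here is a hedged sketch in Lean of the Socrates syllogism; the names (Person, isMan, isMortal, socrates) are invented for the example. It mirrors the caution above: the deduction is valid for any premises of this shape, whether or not the premises are true.

```lean
-- Major premise: every man is mortal.  Minor premise: Socrates is a man.
-- Conclusion: Socrates is mortal.  The proof simply applies the major
-- premise to the minor one; nothing here checks whether the premises are true.
example (Person : Type) (isMan isMortal : Person → Prop) (socrates : Person)
    (major : ∀ p, isMan p → isMortal p) (minor : isMan socrates) :
    isMortal socrates :=
  major socrates minor
```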
2) Establish the argument's logic by breaking her argument down into its component syllogisms. Deductive arguments rely on syllogisms in order to make sense. Go through the paper with the writer and try to find out what she is really arguing by unearthing the syllogisms buried in her prose.
3) Establish the argument's truth by testing the validity of its premises. Consider, for example, the following syllogism:
- Murder is wrong.
- Abortion is murder.
- Abortion is wrong.
This argument may be logical, but is it true? If you accept the second premise - that "abortion is murder" - then you have no choice but to accept the conclusion drawn here. But is this second premise necessarily true? A writer will need to consider, in this case, the terms he is using. For example, murder is defined as "the unlawful killing of another human being with malice aforethought" (Random House Dictionary of the English Language). We have two problems applying this definition to abortion: first, abortion is legal, and second, we can debate whether or not a fetus is indeed a human being. Both matters must be addressed by the writer if he wishes to make an argument along the lines of the one suggested in the syllogism above.
4) Look for suppressed assumptions. Let's say that the writer of the abortion argument understood that a controversy exists as to whether a fetus can be considered a human being - and he neglected to expose what he knew. In this case, the writer is suppressing assumptions. Writers suppress assumptions all the time and for all kinds of reasons - some legitimate, some suspect. One legitimate reason for suppressing an assumption is that your readers can be counted on to agree with what you have to say. For example, if the writer of the abortion argument is writing a pamphlet for a Right to Life group, he will not need to prove that abortion is murder because it is an assumption that members of this group already share. Still, you will want to be on the lookout for suppressed assumptions, and to question writers as to why their argument wasn't more explicitly made.
5) Consider each premise and the evidence that supports it. Is that evidence adequate? Current? Reputable? If not, the reader will remain unconvinced of the validity of the premises, and the argument will fail.
Logical fallacies are mistakes in reasoning. They may be intentional or unintentional, but in either case they undermine the strength of an argument. Some common fallacies are defined below. Please familiarize yourselves with them so that you can help writers to avoid them.
1) Hasty Generalization: A generalization based on too little evidence, or on evidence that is biased. Example: All men are testosterone-driven idiots. Or: After being in New York for a week, I can tell you: all New Yorkers are rude.
2) Either/Or Fallacy: Only two possibilities are presented when in fact several exist. Example: America: love it or leave it. Or: Shut down all nuclear power plants, or watch your children and grandchildren die from radiation poisoning.
3) Non Sequitur: The conclusion does not follow logically from the premise. Example: My teacher is pretty; I'll learn a lot from her. Or: George Bush was a war hero; he'll be willing to stand tough for America.
4) Ad Hominem: Arguing against the man instead of against the issue. Example: We can't elect him mayor. He cheats on his wife! Or: He doesn't really believe in the first amendment. He just wants to defend his right to see porno flicks.
5) Red Herring: Distracting the audience by drawing attention to an irrelevant issue. Example: How can he be expected to manage the company? Look at how he manages his wife! Or: Why worry about nuclear war when we're all going to die anyway?
6) Circular Reasoning: Asserting a point that has just been made. Sometimes called "begging the question." Example: She is ignorant because she was never educated. Or: We sin because we're sinners.
7) False Analogy: Wrongly assuming that because two things are alike in some ways, they must be alike in all ways. Example: An old grandmother's advice to her granddaughter, who is contemplating living with her boyfriend: "Why should he buy the cow when he can get the milk for free?"
8) Post Hoc, Ergo Propter Hoc: The mistake of assuming that, because event a is followed by event b, event a caused event b. Example: It rained today because I washed my car. Or: The stock market fell because the Japanese are considering implementing an import tax.
9) Equivocation: Equates two meanings of the same word falsely. Example: The end of a thing is its perfection; hence, death is the perfection of life. (The argument is fallacious because there are two different definitions of the word "end" involved in the argument.)
| http://www.dartmouth.edu/~writing/materials/tutor/problems/logic.shtml | 13
21 | Glossary of Useful Terms
Abstract nouns, such as truth or beauty, are words that are neither specific nor definite in meaning; they refer to general concepts, qualities, and conditions that summarize an entire category of experience. Conversely, concrete terms, such as apple, crabgrass, computer, and French horn, make precise appeals to our senses. The word abstract refers to the logical process of abstraction, through which our minds are able to group together and describe similar objects, ideas, or attitudes. Most good writers use abstract terms sparingly in their essays, preferring instead the vividness and clarity of concrete words and phrases.
Allusion is a reference to a well-known person, place, or event from life or literature. In “Opera Night in Canada”, for example, Michael McKinley alludes to characters in the famous opera Madame Butterfly and the performance of the Toronto Maple Leafs hockey team when he says, “. . . the idea of these two art forms being united after all this time is as shocking as Pinkerton returning to marry Madame Butterfly or the Leafs being united with the Stanley Cup.”
Analogy is an extended comparison of two dissimilar objects or ideas.
Analysis is examining and evaluating a topic by separating it into its basic parts and elements and studying it systematically.
Anecdote is a brief account of a single incident.
Argumentation is an appeal predominantly to logic and reason. It deals with complex issues that can be debated.
Attitude describes the narrator's personal feelings about a particular subject. In "There's a Better Environmental Way to Farm," Juanita Polegi expresses frustration and dismay at the idea that farmers who don't use pesticides and fertilizers are not harming the environment. Attitude is one component of point of view.
Audience refers to the person or group of people for whom an essay is written.
Cause and effect
Cause and effect is a form of analysis that examines the causes and consequences of events and ideas.
Characterization is the creation of imaginary yet realistic persons in fiction, drama, and narrative poetry.
Chronological order is a sequence of events arranged in the order in which they occurred. Stanley Coren follows this natural time sequence in his process essay "Dogs and Monsters."
Classification is the analytical process of grouping together similar subjects into a single category or class; division works in the opposite fashion, breaking down a subject into many different subgroups. In "Why We Crave Hot Stuff," Trina McQueen gives several examples of news stories that she classifies as "hot stuff" or tabloid news items.
Clichés are words or expressions that have lost their freshness and originality through continual use. For example, "busy as a bee," "pretty as a picture," and "hotter than hell" have become trite and dull because of overuse. Good writers avoid clichés through vivid and original phrasing.
Climactic order refers to the organization of ideas from one extreme to another-for example, from least important to most important, from most destructive to least destructive, or from least promising to most promising.
Cognitive skills are mental abilities that help us process external stimuli.
Coherence is the manner in which an essay "holds together" its main ideas. A coherent theme will demonstrate such a clear relationship between its thesis and its logical structure that readers can easily follow the argument.
Colloquial expressions are informal words, phrases, and sentences that are generally more appropriate for spoken conversations than for written essays.
Comparison is an expository writing technique that examines the similarities between objects or ideas, whereas contrast focuses on differences.
Conclusions bring essays to a natural close by summarizing the argument, restating the thesis, calling for some specific action, or explaining the significance of the topic just discussed. If the introduction states your thesis in the form of a question to be answered or a problem to be solved, then your conclusion will be the final "answer" or "solution" provided in your paper. The conclusion should be approximately the same length as your introduction and should leave your reader satisfied that you have actually "concluded" your discussion rather than simply run out of ideas to discuss.
Concrete: See abstract.
Conflict is the struggle resulting from the opposition of two strong forces in the plot of a play, novel, or short story.
Connotation and Denotation
Connotation and Denotation are two principal methods of describing the meanings of words. Connotation refers to the wide array of positive and negative associations that most words naturally carry with them, whereas denotation is the precise, literal definition of a word that might be found in a dictionary. Anita Rau Badami's essay, "My Canada," uses words with strong implied meanings (connotation) that extend their literal definitions (denotation). When Badami's husband leaves his job in a "vast, faceless corporation" in India and the family goes to Canada, Badami is first greeted by "a blast of freezing air" in a "barren city where the sky covered everything like blue glass, where [she] could hear [her] own footsteps echoing on an empty street. . . ." Over time she comes to love "the crisp winter mornings" and "the long silent streets and canola fields shimmering yellow under an endless blue sky."
Content and Form
Content and Form are the two main components of an essay. Content refers to the subject matter of an essay, whereas its form consists of the graphic symbols that communicate the subject matter (word choice, spelling, punctuation, paragraphing, etc.).
Contrast: See comparison.
Deduction is a form of logical reasoning that begins with a general assertion and then presents specific details and examples in support of that generalization. Induction works in reverse by offering a number of examples and then concluding with a general truth or principle.
Definition is a process whereby the meaning of a term is explained. Formal definitions require two distinct operations: (1) finding the general class to which the object belongs and (2) isolating the object within that class by describing how it differs from other elements in the same category.
Denotation: See connotation.
Description is a mode of writing or speaking that relates the sights, sounds, tastes, smells, or feelings of a particular experience to its readers or listeners. Good descriptive writers, such as those featured in Chapter 1, are particularly adept at receiving, selecting, and expressing sensory details from the world around them. Along with persuasion, exposition, and narration, description is one of the four dominant types of writing.
Development concerns the manner in which a paragraph of an essay expands on its topic.
Dialect is a speech pattern typical of a certain regional location, race, or social group that exhibits itself through unique word choice, pronunciation, and/or grammatical usage.
Dialogue is a conversation between two or more people, particularly within a novel, play, poem, short story, or other literary work. See Evelyn Lau's "More and More" or Matt Cohen's "Zada's Hanuukkah Legacy" for examples of essays that incorporate dialogue.
Diction is word choice. If a vocabulary is a list of words available for use, then good diction is the careful selection of those words to communicate a particular subject to a specific audience. Different types of diction include formal (scholarly books and articles), informal (essays in popular magazines), colloquial (conversations between friends, including newly coined words and expressions), slang (language shared by certain social groups), dialect (language typical of a certain region, race, or social group), technical (words that make up the basic vocabulary of a specific area of study, such as medicine or law), and obsolete (words no longer in use). Diction can also refer to the quality of one's pronunciation of words.
Division: See classification.
Documented essay is a research or library paper that integrates paraphrases, summaries, and quotations from secondary sources with the writer's own insights and conclusions. Such essays normally include references within the paper and, at the end, a list of the books and articles cited.
Dominant impression in descriptive writing is the principal effect the author wishes to create for the audience.
Editing is an important part of the rewriting process of an essay that requires writers to make certain their work observes the conventions of standard written English.
Effect: See cause and effect.
Emphasis is the stress given to certain words, phrases, sentences, and/or paragraphs within an essay by such methods as repeating important ideas; positioning thesis and topic sentences effectively; supplying additional details or examples; allocating more space to certain sections of an essay; choosing words carefully; selecting and arranging details judiciously; and using certain mechanical devices, such as italics, underlining, capitalization, and different colours of ink.
Essay is a relatively short prose composition on a limited topic. Most essays are 500 to 1,000 words long and focus on a clearly definable question to be answered or problem to be solved. Formal essays, such as Janice Gross Stein's "Developing a National Voice," are generally characterized by seriousness of purpose, logical organization, and dignity of language; informal essays, such as Drew Hayden Taylor's "Pretty Like a White Boy: The Adventures of a Blue-Eyed Ojibway," are generally brief, humorous, and more loosely structured. Essays in this textbook have been divided into nine traditional rhetorical types, each of which is discussed at length in its chapter introduction.
Etymology is the study of the origin and development of words.
Evidence is any material used to help support an argument, including details, facts, examples, opinions, and expert testimony. Just as a lawyer's case is won or lost in a court of law because of the strength of the evidence presented, so, too, does the effectiveness of a writer's essay depend on the evidence offered in support of its thesis statement.
Example is an illustration of a general principle or thesis statement. Jennifer Cowan's "TV Me Alone," for instance, gives several different examples of public places where television can be found.
Exposition is one of the four main rhetorical categories of writing (the others are persuasion, narration, and description). The principal purpose of expository prose is to "expose" ideas to your readers, and to explain, define, and interpret information through one or more of the following modes of exposition: example, process analysis, division/classification, comparison/contrast, definition, and cause/effect.
Figurative language is writing or speaking that purposefully departs from the literal meanings of words to achieve a particularly vivid, expressive, and/or imaginative image. In Steven Heighton's description of the park entrance at Vimy Ridge, for example, he uses figurative language. "In the blue-green stained-glass light of the forest, the near-silence was eerie, solemn, as in the cathedral at Arras." Some principal figures of speech include metaphor, simile, hyperbole, allusion, and personification.
Flashback is a technique used mainly in narrative writing that enables the author to present scenes or conversations that took place prior to the beginning of the story.
Focus is the concentration of a topic on one central point or issue.
Form: See content.
Formal essay: See essay.
Free association is a process of generating ideas for writing through which one thought leads randomly to another.
General words are those that employ expansive categories, such as animals, sports, occupations, and clothing; specific words are more limiting and restrictive, such as koala, lacrosse, computer programmer, and bow tie. Whether a word is general or specific depends at least somewhat on its context: Bow tie is more specific than clothing, yet less specific than "the pink and green striped bow tie Aunt Martha gave me last Christmas." See also abstract.
Generalization is a broad statement or belief based on a limited number of facts, examples, or statistics. A product of inductive reasoning, generalizations should be used carefully and sparingly in essays.
Hyperbole, the opposite of understatement, is a type of figurative language that uses deliberate exaggeration for the sake of emphasis or comic effect (e.g., "hungry enough to eat 20 chocolate éclairs").
Hypothesis is a tentative theory that can be proved or disproved through further investigation and analysis.
Idiom refers to a grammatical construction unique to a certain people, region, or class that cannot be translated literally into another language (e.g., "To be on thin ice," "To pull someone's leg").
Illustration is the use of examples to support an idea or generalization.
Imagery is description that appeals to one or more of our five senses. See, for example, Will Ferguson's description of Sudbury's cliffs in "The Sudbury Syndrome": ". . . Sudbury's slag pile glaciers, the scorched tailings of the city's infamous nickel mines. Rail cars roll up to the edge, then pause, tilt and pour out the molten slag, casting an orange echo against the sky, like the castle defenses of a medieval siege. The slag cools into a crust, then blackens, and is in turn covered." Imagery is used to help bring clarity and vividness to descriptive writing.
Induction: See deduction.
Inference is a deduction or conclusion derived from specific information.
Informal essay: See essay.
Introduction refers to the beginning of an essay. It should identify the subject to be discussed, set the limits of that discussion, and clearly state the thesis or general purpose of the paper. In a brief (five-paragraph) essay, your introduction should be only one paragraph; for longer papers, you may want to provide longer introductory sections. A good introduction will generally catch the audience's attention by beginning with a quotation, a provocative statement, a personal anecdote, or a stimulating question that somehow involves its readers in the topic under consideration. See also conclusion.
Irony is a figure of speech in which the literal, denotative meaning is the opposite of what is stated.
Jargon is the special language of a certain group or profession, such as psychological jargon, legal jargon, or medical jargon. When jargon is excerpted from its proper subject area, it generally becomes confusing or meaningless, as in "I have a latency problem with my backhand" or "I hope we can interface tomorrow night after the dance."
Levels of thought
Levels of thought is a phrase that describes the three sequential stages at which people think, read, and write: literal, interpretive, and analytical.
Logic is the science of correct reasoning. Based principally on inductive or deductive processes, logic establishes a method by which we can examine premises and conclusions, construct syllogisms, and avoid faulty reasoning.
Logical fallacy is an incorrect conclusion derived from faulty reasoning. See also post hoc, ergo propter hoc and non sequitur.
Metaphor is an implied comparison that brings together two dissimilar objects, persons, or ideas. Unlike a simile, which uses the words like or as, a metaphor directly identifies an obscure or difficult subject with another that is easier to understand. In Maureen Littlejohn's "You Are a Contract Painkiller," for example, the author uses the image of a contract killer to describe the medication ASA.
Mood refers to the atmosphere or tone created in a piece of writing. The mood of Cecil Foster's "Why Blacks Get Mad," for example, is intense and serious; of Susan Swan's "Nine Ways of Looking at a Critic," mildly sarcastic; and of Allen Abel's "A Home at the End of the Journey," good-humoured and sympathetic.
Narration is storytelling (i.e., the recounting of a series of events) arranged in a particular order and delivered by a narrator to a specific audience with a clear purpose in mind. Along with persuasion, exposition, and description, it is one of the four principal types of writing.
Non sequitur, from a Latin phrase meaning "it does not follow," refers to a conclusion that does not logically derive from its premises.
Objective writing is detached, impersonal, and factual; subjective writing reveals the author's personal feelings and attitudes. David Foot's "Boomers Dance to a New Beat" and Stanley Coren's "Dogs and Monsters" are examples of objective prose, whereas Alison Wearing's "Last Snowstorm" is essentially subjective in nature. Most good college-level essays are a careful mix of both approaches, with lab reports and technical writing toward the objective end of the scale, and personal essays in composition courses at the subjective end.
Organization refers to the order in which a writer chooses to present his or her ideas to the reader. Five main types of organization may be used to develop paragraphs or essays: (1) deductive (moving from general to specific); (2) inductive (from specific to general); (3) chronological (according to time sequence); (4) spatial (according to physical relationship in space); and (5) climactic (from one extreme to another, such as least important to most important).
Paradox is a seemingly self-contradictory statement that contains an element of truth.
Paragraphs are groups of interrelated sentences that develop a central topic. Generally governed by a topic sentence, a paragraph has its own unity and coherence and is an integral part of the logical development of an essay.
Parallelism is a structural arrangement within sentences, paragraphs, or entire essays through which two or more separate elements are similarly phrased and developed. Look, for example, at Evan Solomon's "The Babar Factor," in which the first two paragraphs follow the same pattern. See also Naheed Mustafa's "My Body Is My Own Business": ". . . waifish is good, waifish is bad, athletic is good-sorry, athletic is bad. Narrow hips? Great. Narrow hips? Too bad."
Paraphrase is a restatement in your own words of someone else's ideas or observations. When paraphrasing, it is important to acknowledge the original source in order to avoid plagiarism.
Parody is making fun of a person, an event, or a work of literature through exaggerated imitation.
Person is a grammatical distinction identifying the speaker or writer in a particular context: first person (I or we), second person (you), and third person (he, she, it, or they). The person of an essay refers to the voice of the narrator. See also point of view.
Personification is figurative language that ascribes human characteristics to an abstraction, animal, idea, or inanimate object. Consider, for example, Tomson Highway's description in "What a Certain Visionary Once Said" of the earth that breathes and "whisper[s] things that simple men, who never suspected they were mad, can hear."
Persuasion is one of the four chief forms of rhetoric. Its main purpose is to convince a reader (or listener) to think, act, or feel a certain way. It involves appealing to reason, to emotion, and/or to a sense of ethics. The other three main rhetorical categories are exposition, narration, and description.
Point of view
Point of view is the perspective from which a writer tells a story, including person, vantage point, and attitude. Principal narrative voices are first-person, in which the writer relates the story from his or her own vantage point ("As a high school student in Saskatoon, Saskatchewan, I never planned. I didn't worry about anything. I just coasted along letting things happen to me."); omniscient, a third-person technique in which the narrator knows everything and can even see into the minds of the various characters ("As a high school student in Saskatoon, Saskatchewan, she never planned. She didn't worry about anything. She just coasted along letting things happen to her."); and concealed, a third-person method in which the narrator can see and hear events but cannot look into the minds of the other characters ("As a high school student in Saskatoon, Saskatchewan, she never planned. She seemed to just coast along letting things happen to her.").
Post hoc, ergo propter hoc
Post hoc, ergo propter hoc, a Latin phrase meaning "after this, therefore because of this," is a logical fallacy confusing cause and effect with chronology. Just because Irving wakes up every morning before the sun rises doesn't mean that the sun rises because Irving wakes up.
Premise is a proposition or statement that forms the foundation of an argument and helps support a conclusion. See also logic and syllogism.
Prereading is thoughtful concentration on a topic before reading an essay. Just as athletes warm up their physical muscles before competition, so, too, should students activate their "mental muscles" before reading or writing essays.
Prewriting, which is similar to prereading, is the initial stage in the composing process during which writers consider their topics, generate ideas, narrow and refine their thesis statements, organize their ideas, pursue any necessary research, and identify their audiences. Although prewriting occurs principally, as the name suggests, "before" an essay is started, writers usually return to this "invention" stage again and again during the course of the writing process.
Process analysis, one of the seven primary modes of exposition, either gives directions about how to do something (directive) or provides information on how something happened (informative).
Proofreading, an essential part of rewriting, is a thorough, careful review of the final draft of an essay that ensures that all errors have been eliminated.
Purpose in an essay refers to its overall aim or intention: to entertain, inform, or persuade a particular audience with reference to a specific topic. For example, Janice Gross Stein argues in "Developing a National Voice" that Canada must have a strong independent voice in global politics. See also dominant impression.
Refutation is the process of discrediting the arguments that run counter to your thesis statement.
Revision, meaning "to see again," takes place during the entire writing process as you change words, rewrite sentences, and shift paragraphs from one location to another in your essay. It plays an especially vital role in the rewriting stage of the composing process.
Rewriting is a stage of the composing process that includes revision, editing, and proofreading.
Rhetoric is the art of using language effectively.
Rhetorical questions are intended to provoke thought rather than bring forth an answer. See, for example, Judy Rebick's rhetorical question in "The Culture of Overwork": "If working long hours makes us unhappy and unhealthy, why do we do it?"
Rhetorical strategy or mode
Rhetorical strategy or mode is the primary plan or method whereby an essay is organized. Most writers choose from methods discussed in this book, such as narration, example, comparison/contrast, definition, and cause/effect.
Sarcasm is a form of irony that attacks a person or belief through harsh and bitter remarks that often mean the opposite of what they say. See, for example, Dave Bidini's sarcastic description of arena names in "Kris King Looks Terrible": ". . . these days, arena names make little sense. For instance, not only does the National Car Rental Center, home of the Florida Panthers, promise little in the way of aesthetics, you can't even rent a car there. Same with the horseless Saddledome in Calgary. And despite the nation's affection for the old Maple Leaf Gardens, there's probably more foliage growing on the Hoover Dam." See also satire.
Satire is a literary technique that attacks foolishness by making fun of it. Most good satires work through a "fiction" that is clearly transparent. Will Ferguson presents Canada's "Black Cliffs of Sudbury" as being equivalent or even preferable to England's White Cliffs of Dover when they obviously are not; he simply uses this satiric approach to highlight the ugliness of Sudbury.
Setting refers to the immediate environment of a narrative or descriptive piece of writing: the place, time, and background established by the author.
Simile is a comparison of two dissimilar objects that uses the words like or as. See, for example, Karen Connelly's description of herself in "Touch the Dragon": "As the country pulls out from under me, I overturn like a glass of water on a yanked tablecloth, I spill." See also Sharon Butala's description of the Prairies in "The Myth: The Prairies Are Flat": ". . . all those 'flat and boring' philosophers who fail to see, whether level as a tabletop or not, the exquisite and singular beauty of the Prairies." See also metaphor.
Slang is casual conversation among friends; as such, it is inappropriate for use in formal and informal writing, unless it is placed in quotation marks and introduced for a specific rhetorical purpose: "Hey dude, ya know what I mean?" See also colloquial expressions.
Spatial order is a method of description that begins at one geographical point and moves onward in an orderly fashion. See, for example, Lesley Choyce's description of Wedge Island that first describes the whole island and then moves from the grassy peninsula at the top out to the tip.
Specific: See general.
Style is the unique, individual way in which each author expresses his or her ideas. Often referred to as the "personality" of an essay, style is dependent on a writer's manipulation of diction, sentence structure, figurative language, point of view, characterization, emphasis, mood, purpose, rhetorical strategy, and all the other variables that govern written material.
Subjective: See objective.
Summary is a condensed statement of a larger grouping of thoughts or observations.
Syllogism refers to a three-step deductive argument that moves logically from a major and a minor premise to a conclusion. A traditional example is "All men are mortal. Socrates is a man. Therefore, Socrates is mortal."
Symbol refers to an object or action in literature that metaphorically represents something more important than itself. In Gwynne Dyer's "Flagging Attention" he acknowledges the symbolic value of flags as representing their countries and thereby creating a sense of national unity and community.
Synonyms are terms with similar or identical denotative meanings, such as aged, elderly, older person, and senior citizen, but with different connotative meanings.
Syntax describes the order in which words are arranged in a sentence and the effect that this arrangement has on the creation of meaning.
Thesis statement or thesis is the principal focus of an essay. It is usually phrased in the form of a question to be answered, a problem to be solved, or an assertion to be argued. The word thesis derives from a Greek term meaning "something set down," and most good writers find that "setting down" their thesis in writing helps them tremendously in defining and clarifying their topic before they begin to write an outline or a rough draft.
Tone is a writer's attitude or point of view toward his or her subject. See also mood.
Topic sentence is the central idea around which a paragraph develops. A topic sentence controls a paragraph in the same way a thesis statement unifies and governs an entire essay. See also induction and deduction.
Transition is the linking together of sequential ideas in sentences, paragraphs, and essays. This linking is accomplished primarily through word repetition, pronouns, parallel constructions, and such transitional words and phrases as therefore, as a result, consequently, moreover, and similarly.
Understatement, the opposite of hyperbole, is a deliberate weakening of the truth for comic or emphatic purpose. Commenting, for example, on the food at SkyRink, Dave Bidini says in "Kris King Looks Terrible": "There were kiosks selling blackened grouper, lemon chicken, bean curd seared in garlic and peppers, sushi, and hot soup with prawns-not your typical hockey cuisine."
Unity exists in an essay when all ideas originate from and help support a central thesis statement.
Usage refers to the customary rules that govern written and spoken language.
Vantage point is the frame of reference of the narrator in a story: close to the action, far from the action, looking back on the past, or reporting on the present. See also person and point of view. | http://www.pearsoned.ca/text/flachmann4/gloss_iframe.html | 13 |
15 | The word timbre (also called tone quality or tone color) is an important part of every musician’s vocabulary. Woodwind and brass instrumentalists can affect timbre by such simple things as embouchure or fingering choices. String instrumentalists have things such as bow speed, angle and pressure to work with. Surprisingly to some, pianists have one and only one way to legitimately manipulate the actual color of each note they play: pedals.
Some may be compelled to cite qualities such as “warm”, “harsh” and “transparent” as auditory examples of how technique and touch can affect tone quality on the piano. We can affect tone production with our fingers, not just with pedals, right? Yes and no. We can affect tone production dependently of hammer velocity, but not independently of hammer velocity. In other words, those who ask their students to play a note “less harsh” without playing the note any softer are asking for the impossible.
Pianists do have the una corda, damper and sostenuto pedals at their disposal, but these are wielded by the feet, not by the fingers.
Physical Representations of Tone Production
Consider the following list, which represents all possible qualities of a note that can be controlled by the pianist:
- Pitch: Which string in the piano does the hammer strike?
- Rhythm: When does the hammer strike the string?
- Volume: How fast is the hammer traveling at the instant it strikes the string?
- Articulation: When (and how quickly) does the string stop vibrating?
- Timbre: Una corda pedal: How many strings are struck, and which parts of the hammers hit the strings? Damper and sostenuto pedals: What other strings are allowed to pick up sympathetic vibrations from the string that is struck?
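As a hedged illustration (an editorial sketch, not from the original post; the field names and types are invented), the list above amounts to saying that a single note's entire player-controllable description fits in a small record, much like a MIDI-style note event:

```python
from dataclasses import dataclass

@dataclass
class NoteEvent:
    """Everything about one piano note that the player can actually control.
    Field names are illustrative assumptions, not an existing API."""
    key: int                 # pitch: which string(s) the hammer strikes
    onset_s: float           # rhythm: when the hammer strikes the string
    hammer_velocity: float   # volume: hammer speed at the instant of impact
    release_s: float         # articulation: when the string stops vibrating
    una_corda: bool          # timbre: fewer strings / different hammer surface
    damper_pedal: float      # timbre: 0.0-1.0, dampers lifted for sympathetic vibration
    sostenuto: bool          # timbre: holds dampers of selected notes only
```

On this reading, two performances that generate identical streams of such events sound identical, which is the claim the escapement argument below defends.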
If there were other ways to affect the “color” of a piano tone other than just pedals on the piano, there would be a physical representation of this change of color within the piano that obeys basic laws of physics.
Perhaps we can cause one hammer to hit the string with greater force or weight than another hammer that is traveling at the same velocity? Or maybe we can somehow get the hammer to stay in contact with the string for a longer period of time to achieve a warmer sound, perhaps by following the key all the way to the bottom of the keybed?
All these questions seem to ask the same fundamental question: can we affect how the hammer hits the string with factors other than velocity? Unfortunately, we can’t. These ideas are nothing more than myths produced by followers of the pianistic faith I refer to as “Piano Voodoo”.
Piano Voodoo Exposed
We’ve all noticed the point at which a key descent kind of “clicks” at the very bottom of its descent. This point, called “escapement” or “letoff” by piano technicians, is the point during the hammer’s journey at which we literally lose control of the hammer. After the escapement point, the hammer coasts to the string. Whether a pianist accelerates or decelerates a key depression on the key’s way down, and regardless of the angle at which the finger descends onto the key, all we are truly controlling is the velocity of the hammer up to its escapement point. After that, we are completely out of control.
Ironically, this is the kind of non-control that we need in order to be fully in control of the piano. If the escapement of a piano were adjusted until it occurred at a point beyond the string (in other words, escapement never occurred), we would get multiple note strikes and thuds every time we followed a key all the way to the bottom of the keybed.
So, we have no way to affect the length of time the hammer is in contact with the string other than affecting the velocity of the hammer when it is released by the escapement mechanism, which occurs before the key even reaches the bottom of the keybed (and about one-eighth of an inch before the hammer hits the string). This also means that there is no sense of hammer force when trying to distinguish force from velocity. Force implies acceleration (or what pianists might think of as “continuing weight” on the key), and we non-Jedi pianists can neither accelerate nor decelerate the speed of the actual hammer once it has passed its escapement point.
To read about current research in this field (actual studies done on tone production), see another blog post of mine, Studies Addressing Piano Voodoo of Tone Production.
Jet Engineers Know Better Than Pilots
It is surprising just how many great artists and teachers are disciples of Piano Voodoo and just how far they will go to complexify musical artistry. Or maybe it isn’t surprising? In almost every case of observing Piano Voodoo masters, I’ve always wondered how many are so “at one” with the piano that they have ironically lost touch with what is actually happening inside the piano.
A jet fighter pilot (who is more “at one” with the jet than anyone, even more so than the engineers who made it) is not the most qualified person to understand exactly how the physics of the jet work. The pilot knows exactly what kind of hand movements are necessary to produce the jet movements desired. Jet pilots have their own set of techniques that allow for “smoother” jet flying, but their understanding of how their joysticks, levers and buttons move the jet is nothing more than an illusion being interpreted by the software and hardware that allows the jet to function. Fighter pilots would never pretend to know how to write or even understand all of the software that resides in all the computer chips on modern jets that manifests all of these different flying techniques. Likewise, the most brilliant concert artists and piano teachers on Earth are no more qualified than basic piano technicians are (and in fact are nearly always less qualified) to explain why a certain passage sounds so different from one pianist to another in terms of piano mechanics. It is the greatest artists on Earth who are often the most deceived.
Even being an illusion, this illusion of “control of piano timbre via fingers” is a very strong one. A great artist plays one note abrasively-loud with one technique, another more pleasant note with a different technique, then the artist argues to his or her death that they were both the same loudness but different “colors”. The master’s own brilliant musicianship is what deceives them: because they so desperately want to hear different “colors” with every fiber of their consciousness, they project these colors onto the notes they play. No matter what technique is used, the pianist is still manipulating a simple 17th century machine to produce the sound.
Unfortunately, most piano teachers I have spoken with cannot tell me what “escapement” is. That is why Piano Voodoo is taught at the very top, at national conventions attended by music teachers. That is why Piano Voodoo articles sometimes appear in music magazines and journals. These people stand on stage in front of huge audiences of agreeable teachers and do nothing but complexify the issue of tone production. Many teachers walk away from these talks with a small knot in their stomach, because while the talks are very artistic and inspiring, they leave the listener empty and confused. “How am I going to get my students to do this when I don’t even fully understand it myself?”, they ask. Fortunately, teachers who have ever felt this way can take comfort in the fact that it’s not that you didn’t fully understand it, it’s the speaker who didn’t fully understand it.
Authority bias is the tendency to value an ambiguous stimulus (e.g., an art performance) according to the opinion of someone who is seen as an authority on the topic. Applied to Piano Voodoo, the voodoo master can say just about anything they want to with the expectation that followers will nod their heads, even if it makes no sense to anyone. This outcome is especially reliable and predictable since those who nod their heads with the most vigor end up looking like masters themselves to others who are more transparent about their confusion. Students and teachers who don’t know any better and who have never heard the term “escapement” before (let alone understand its implications) have no choice but to subscribe to Piano Voodoo when the music field places Voodoo masters on teaching pedestals.
One would think that these authority figures would be discredited by truth, but the problem is that these figures are some of the greatest pianists out there. There are other great pianists who achieve the same great playing without constructing musical illusions around them, but it is much more dazzling to speak about (and to listen to) grand artistic illusions than simple physics of the piano that were known hundreds of years ago. One places a lot on the line by challenging these viewpoints, since looking at truth square in the face does not seem like a very “artistic” thing to do. Perhaps my favorite quote about teaching is the following, spoken by a football coach (emphasis added):
“It is sometimes said that the great teachers and mentors, the wise men and gurus, achieve their ends by inducting the disciple into a kind of secret circle of knowledge and belief, make of their charisma a kind of gift. The more I think about it, though, the more I suspect that the best teachers – and, for that matter, the truly long-term winning coaches, the Walshes and Woodens and Weavers – do something else. They don’t mystify the work and offer themselves as a model of oracular authority, a practice that nearly always lapses into a history of acolytes and excommunications. The real teachers and coaches may offer a charismatic model – they probably have to – but then they insist that all the magic they have to offer is a commitment to repetition and perseverance. The great oracles may enthrall, but the really great teachers demystify. They make particle physics into a series of diagrams that anyone can follow, football into a series of steps that anyone can master, and art into a series of slides that anyone can see. A guru gives us himself and then his system; a teacher gives us his subject, and then ourselves.” – Adam Gopnik (“The Last of the Metrozoids”, The New Yorker, May 10th, 2004)
As powerful and true as this quote is, in the field of musical artistry, I don’t believe Piano Voodoo will ever disappear. Authority bias will continue to take place, and people who are truly liberated by truth will continue to be criticized by those who have based their entire teaching career (and ego) on pianistic illusion.
For those who really like authority, I can just as easily appeal to authority as the other side can. Those who disagree with this blog entry would also have to disagree with Charles Rosen, who is the granddaddy of all writer-musicians according to The Guardian.
Appeal To Ridicule
Some people find the suggestion that they are using voodoo instead of real artistry offensive. While the word voodoo is indeed a derogatory term in this context, the time has come for musical truth-deniers to be the ones on the defensive rather than truth-seekers having to always defend themselves. Unfortunately, the same people who are offended by this are often the ones who are doubly offensive in their suggestion that people who know certain scientific truths about the piano are replacing artistry with science. It is far more offensive for people to be ridiculed for being right than for being wrong.
The person who understands why they have certain feelings has a larger perspective (not smaller) than the person who simply has feelings but doesn’t understand why, and this larger perspective is going to be true for any discipline in the world. A historian is better off knowing why a war took place than in just knowing it took place. A painter is better off knowing why a certain brush stroke (or brush) is necessary than merely using it “just because it works”. Sound production on the piano is no different.
As author W. H. Auden said, “Great art is clear thinking about mixed feelings.” Gaining knowledge and truth of any kind is never going to make someone less of an artist, and I would be embarrassed for making such a backwards argument.
Using Technique For The Right Reasons
“The last temptation is the greatest treason: To do the right deed for the wrong reason.” (T.S. Eliot, Murder in the Cathedral)
While the musically elite may be correct to conclude that when a certain technique is used, certain “colors” are more likely to take place, it is not correct to assume this particular technique is the only way to achieve it. The field of psychology knows this as illusory correlation - beliefs that inaccurately suppose a particular relationship between a certain type of action and an effect. Piano strings don’t know what technique you’re using – they only know the velocity and timing of each hammer strike. Therefore, the proper relationship between technique and color is the relationship of likelihood rather than of absolute cause.
Different touches and angles of descent are still very useful to teach. Tone production is a very real part of pianism. There is nothing wrong with telling a student to not play a note quite so staccato while the pedal is down in order to achieve a “warmer” effect. But it can only enhance a student’s musicianship if we also explain the real reason why we suggest to play with various techniques in certain situations, such as the fact that a staccato touch is more likely to produce a greater hammer velocity because of the quickness of the stroke, rather than allowing the student to operate under beliefs that contradict basic laws of physics.
As another example, it is still useful to teach students to play with more “weight” if a less harsh tone is desired. When playing big chords, slapping the keys with the hand (pivoting at the wrist, the same way in which we might knock on a door without moving our arm) can produce a frightfully percussive sound, while dropping the entire forearm with a sense of “weight” helps to moderate that sound. But to tell a student that involvement of the arm achieves an equal dynamic level as involvement of the wrist/hand alone — except “without the harshness” — is simply not true. In reality, it is doing nothing more than causing the hammers to travel slower at the point of contact with the strings. While the hand can jerk downward very suddenly, the forearm (which is controlled by larger and therefore slower-moving muscles) has a harder time moving with the same degree of suddenness. Using additional “weight” moderates key (and therefore hammer) speed and nothing more.
Words like “warm”, “harsh” and “transparent”, as useful as we all find these words to be in our teaching, are nothing more than mere descriptions of certain combinations of note velocities, timings, articulations and pedaling. These seemingly simple parameters each can be manipulated with almost infinite variety, and when they interact with each other, it compels us to assume that there are 30 or 40 parameters instead of just a few. “Warm” might represent a generally soft and legatissimo sound, louder velocities in the lower note registers, or a phrase with a quiet dynamic peak. “Harsh” would be generally too loud. “Sparkly” might represent louder velocities in the upper registers with a staccato touch. “Transparent” might come to mind when we hear an unpedaled/pianissimo Alberti bass.
A slightly different hand position can drastically change the timing and velocity of notes (as well as timing of release). When you have 30 or 40 notes in a passage, all approached with one technique as opposed to another, the combination of all these velocity and timing variations will often produce what we artists call different “tones” or “colors”. One simple change in hammer velocity within a chord (on just one note) can affect the overtones present to such a degree that we perceive a change in “piano timbre” when actually it was only caused by volume of a single note.
All these and more techniques and touches at the piano really do work! It’s just that they don’t work for the reason many assume. Pianists produce these different colors and glorify them by assigning them attributes that don’t exist, which (in the end) only serves to glorify oneself. For additional information from a graduate physics thesis (on the piano!) turned into a website, see Piano Physics. Be sure to eventually make your way to the “Agreement of Perception” subsection of the “Paradox” section. You’ll see that the author agrees that basic understanding of piano physics and great artistry do not need to be treated as if they were mutually exclusive.
It’s Still Beautiful, Wondrous and Magical
Astronomers now know why stars flicker, what “shooting stars” actually are, and they know what causes the northern and southern lights. Does this prevent those very same astronomers from appreciating the beauty more than astronomers 3,000 years ago? Perhaps there is less “wonder” for the astronomers: they no longer wonder if Zeus is painting the sky. But this superstitious wonder is replaced with what I would argue is a better type of wonder.
When I heard Olga Kern play a C major Haydn Sonata at the 2009 MTNA Convention with more variety of tone color than I had ever heard in that piece, I stood in awe at her mastery of articulative touches, pedaling techniques, various timing choices, and most of all, combinations of velocity both horizontally and vertically. The problem is, most people who heard these colors probably assumed that mere pitch/velocity/duration/timing/pedaling couldn’t possibly be responsible for the “magic” they heard. But with a little imagination and an open mind, one can see how all this is actually possible. My sense of awe, wonder and appreciation of beauty increases when I consider how few tools pianists have at their disposal in order to create all their “magic.” Yes, they use hand positions and different touches to make these things as easy to do as possible, as do I, but just because someone is using a different grip and stroke on their tennis racket doesn’t mean their racket isn’t still behaving just like it always does. It’s still the same tennis racket.
What we have to work with is plenty, and in my opinion it is magical and humbling in the most glorious way to hear a colorful performance in light of such simplicity. We don’t need to invent the Tooth Fairy in order to judge or demonstrate good tone production. We become greater artists and teachers at the piano by embracing truth instead of running away from it. By examining and confronting the science behind tone production, we do not put ourselves in shackles of realism. We are instead liberated from the prison of illusion and artistic dogma.
Below is a collection of common reactions to this subject and my responses to each one.
1) “Music is an art, not a science.”
This is a straw man argument. This debate does not pretend to be artistic in nature, nor does it need to be, because it only deals with the science behind how a single note void of any musical context can be manipulated.
Even if this weren’t a straw man argument, music is a lot more science than most people think. Consider just how much music theory plays a role on stage. In fact, music theory ought to be called “music science” because that’s exactly what it is.
Those who believe music performance is not a science should read what CPE Bach wrote about performance practice in baroque music. They should read what Paderewski wrote about tempo rubato. They should ask Christopher Taylor sometime if he sees any “science” in his performance. In Johann Sebastian Bach: The Learned Musician, Christoph Wolff compares Bach to Isaac Newton because of how scientifically Bach approached his art.
There is a science to ornamentation, articulation, rubato, and dynamic planning. Only when this science is absolutely thoroughly mastered does it even begin to sound like “art”. Is there no science to fingering? Is there no science to technique? No science in composing a nocturne? An etude? Some of the most terrible performances result from the fact that the performer approached the piece with great feeling and absolutely no understanding.
Which situation is more inherently “good”: having feelings about something, or having feelings about something and understanding why those feelings are there? It is not an asset to be ignorant of how the piano works, and it is a form of ridicule to bring up the “this knowledge is irrelevant to great artistry” argument, especially when it isn’t even true.
Furthermore, just as good technique seeks to remove unnecessary or distracting physical movement and tension, a good pianist will also seek to eliminate unnecessary or distracting mental thought.
It is not myself who reduces art to science; it is the deceived artist who turns science into religion.
2) “When we play piano, this scientific stuff is too much to think about. It makes me feel like a centipede who is unsure of which foot to put forward. It gets in the way of my artistry.” (and) “The focus here is in the wrong place: Knowing this stuff doesn’t make you any more of an artist.”
When great pianists perform, are they calculating hammer velocities with math equations? If only they were capable of such a thing. Suggesting that this knowledge would somehow limit one’s ability to be “magical” at the piano is just as much of a fallacious claim as the suggestion that getting a college music degree somehow impedes one’s ability to create inventive and unique music as a composer. The latter belief was once stated in the bio of Yanni on his website before it was finally edited out, probably due to flame mail he inevitably must have received. Or maybe Yanni’s press agent kicked some sense into him.
Knowledge and beauty are not incompatible. Pianists can still do everything they already do at the piano. It can only benefit them to stop and think about the true “why” behind what they do. At best, they’ll validate through science what they already know through experience. At worst, they’ll open new doors of possibilities by embracing reality and realize there’s a better way to approach it.
Some students can work well without true understanding, while others cannot. It would be bad pedagogy for a teacher to expect 100% of their students to operate under illusion just because that’s how the teacher operates. The implications of the truth of tone production at the piano are only unimportant to those who are perfectly content to take action before truth is known. This isn’t good enough for some people, including some of the greatest artists of today.
Furthermore, “focus” on the subject is not a prerequisite for teaching it, any more than a teacher must “focus” on music theory in every piano lesson they teach in order to acknowledge the validity of music theory as it applies to everything we do at the keyboard.
Finally, this knowledge can and does make one a better artist: see the “It’s Still Beautiful, Wondrous and Magical” section above. How we approach the piano can most definitely change when we recognize the reality of what we are dealing with as artists. It is widely accepted that creating a “body map” (as taught by the Taubman approach and Alexander technique) enhances one’s playing. Likewise, when a pianist develops an accurate and adequate “piano map”, it can only enhance their efforts. For example, when a pianist realizes that color is only a result of velocity, he or she may simplify their efforts in a certain passage to focus on the dynamic levels of each note rather than an illusory color that works independently of dynamic level.
3) “How do we produce tone at the piano? We produce it with the heart.”
When we discuss “how” tone is produced, we can discuss how a pianist proceeds to produce it, or we can discuss how the piano reacts to what the pianist does. This subject is limited to the scope of the latter. “How” a pianist produces tone is an entirely different subject.
4) “One can hold certain notes down with the fingers before hitting other notes, causing sympathetic vibrations.”
Yes, but there is a good reason that this technique of overholding is often called “finger pedaling”: this is an instance where the fingers seek to imitate the pedal. The change in tone is not caused by any special stroke style; instead, it is caused by initial conditions that are set up before the stroke occurs. Piano Voodoo masters who claim that special strokes exist on the piano that can manipulate tone independently of velocity are making a fallacious argument, because “manipulation of initial conditions” is not a stroke.
But even if this did count for a “stroke”, those who argue against the premise of this article do so on artistic grounds, and yet there is not a single concert artist in recorded history who overholds notes in this fashion purely for the effect of getting “only a few” sympathetic vibrations with the few fingers that are overholding. When pianists wish to add extra tone to their music (such as the final two chords in a Haydn sonata), the pianist will simply pedal the last two chords for their exact duration. If a pianist wants to add tone to a scale, it is done with changing pedal, half-pedal, pedal that “floats off” for a diminuendo, or pedal that stays down for “pedal crescendos”. Never in standard repertoire is a pianist called to “grab certain notes silently before playing other notes”. This is done sometimes in some non-standard repertoire, but even then, it’s not done to achieve some magical sound during the stroke: it is done so that the pianist can let the original notes go and allow the listener to hear only the sympathetic vibrations that linger after the original notes have been released (such as the last line of “The Serpent’s Kiss” from Bolcom’s Garden of Eden, or the beginning and end of the third movement of Ligeti’s Musica Ricercata).
5) “I have great experience in this matter and have studied with great masters.”
Appeal to credibility only works when both sides of the debate can agree that the source of information is trustworthy, relevant and sufficiently strong. The claim above is not even relevant to the proposition. Artistry is not a prerequisite to understanding the limitations of the piano well enough to have an intelligent debate about it.
Additionally, this argument is a form of appeal to ridicule. It implies that one person’s opinion matters more on this subject because they are more of an artist than their opponent.
6) “Ability to produce good tone at the piano is a gift.”
This is completely irrelevant. The subject at hand deals with mechanics of piano action, not with artistry involved in depressing a key.
7) “How about we just agree that there is no such thing as “wrong” or “right” on this issue.”
There is no room for both sides in this debate to be right, any more than there is room for 2+2 to be 4 and 5 at the same time. Either it is possible to play a note at the same volume twice and change its timbre the second time by virtue of stroke style, or it is not possible. Those who do not have truth on their side are the only ones who stand to benefit by offering the “no wrong or right” argument. “If at first you don’t succeed [in debate], redefine success,” right?
(c) 2010 Cerebroom | http://blog.twedt.com/archives/222
In this unit, we will learn the difference between inductive and deductive reasoning. You will make conjectures and learn how to verify them with the use of deductive reasoning. In addition to the various conditional statements, we will also explore biconditional statements.
A conclusion based on a pattern is called a conjecture. A conjecture is a guess based on analyzing information or observing a pattern. When we come to a conclusion or make a general rule based on a pattern that we observe, we are using inductive reasoning.
Since a conjecture is an educated guess, it may be true or false. A conjecture is said to be false if there is even one situation in which the conjecture is not true. We call the false example a counterexample.
When we make a conclusion after examining several specific cases, we have used inductive reasoning. However, we must be cautious -- by finding only one counterexample, we can disprove the conclusion.
Related Conditional Statements
If you change the hypothesis or conclusion of a conditional statement, you form a related conditional. There are three related conditionals -- converse, inverse, and contrapositive.
A conditional consists of a hypothesis (p) and a conclusion (q).
A conditional statement is a statement that can be written as an if-then statement -- "if p, then q" or "p implies q." An if-then statement is a statement such as "If two angles are vertical angles, then they are congruent." The phrase immediately following the word "if" is the hypothesis -- two angles are vertical angles. The phrase immediately following the word "then" is the conclusion -- they are congruent.
Sometimes it is necessary to rewrite a conditional statement so that it is in the "if-then" form. Here is an example:
Conditional: A person who does his homework will improve his math grade.
If-Then Form: If a person does his homework, then he will improve his math grade.
A conditional statement has a false truth value ONLY if the hypothesis (p) is true and the conclusion (q) is false. A conditional statement always has the same truth value as its contrapositive, and the converse and inverse always have the same truth value.
The converse statement is formed by exchanging the hypothesis with the conclusion.
Let's use our original conditional statement -- "If two angles are vertical angles, then they are congruent." To write the converse, we simply exchange the hypothesis (two angles are vertical angles) with the conclusion (they are congruent). Our converse statement becomes
If two angles are congruent, then they are vertical angles.
The inverse statement is formed by negating the hypothesis and the conclusion. The negation of a statement, "not p," has the opposite truth value of the original statement.
If p is true, then not p is false.
If p is false, then not p is true.
Once again, we will use our original conditional statement, "If two angles are vertical angles, then they are congruent." The inverse (negate both the hypothesis and the conclusion) is as follows
If two angles are not vertical angles, then they are not congruent.
The contrapositive is formed by doing both -- exchanging and negating the hypothesis and the conclusion.
If two angles are not congruent, then they are not vertical angles.
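To see concretely why a conditional always shares its truth value with its contrapositive (and why the converse and inverse pair up the same way), here is a small illustrative sketch in Python; the helper name is ours, not part of the lesson.

```python
from itertools import product

def implies(p, q):
    # "if p, then q" is false only when p is true and q is false
    return (not p) or q

print("p     q     conditional converse inverse contrapositive")
for p, q in product([True, False], repeat=2):
    conditional    = implies(p, q)          # if p, then q
    converse       = implies(q, p)          # if q, then p
    inverse        = implies(not p, not q)  # if not p, then not q
    contrapositive = implies(not q, not p)  # if not q, then not p
    print(p, q, conditional, converse, inverse, contrapositive)

# In every row: conditional == contrapositive and converse == inverse.
```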
With inductive reasoning, we use examples to make conjectures. With deductive reasoning, we use facts, definitions, rules, and properties to draw conclusions and prove that our conjectures are true.
Law of Detachment
One form of deductive reasoning that lets us draw conclusions from true facts is called the Law of Detachment. If the conditional statement (if p then q) is a true statement and the hypothesis (p) is true, then the conclusion/conjecture (q) is true.
Given: If I get over 90%, I will receive an A. I got 96%.
Conjecture: I have an A.
Law of Syllogism
Another form of deductive reasoning is the Law of Syllogism. If two conditional statements (if p then q) and (if q then r) are true, then the resulting conditional statement (if p then r) is also true.
Given: If I oversleep, I will miss the bus. If I miss the bus, I will have to walk to school.
Conjecture: If I oversleep, I will have to walk to school.
A biconditional statement combines a conditional statement, "if p, then q," with its converse, "if q, then p."
Definitions can be written as biconditionals:
Definition: Circumference is the distance around a circle.
Biconditional: A measure is the circumference if and only if it is the distance around a circle.
Chapter 2: The Real Numbers
Although you have doubtless worked quite a bit with the real numbers, this chapter will start at the beginning, introducing you to them again from a perhaps higher viewpoint than the one you have seen in the past. This will serve to put your knowledge in a proper mathematical framework so that you can better understand some of the topics of the last chapter. Here are some of the topics:
A set is simply a collection of objects. We could define axioms for set theory, but, instead, we choose to depend on your intuitive understanding of sets. What can you do with a set? Basically, you can check to see if an object is in the set (we then say that the object is an element of the set). You can also define subsets of the set; a subset is simply a collection of objects, every one of which is in the original set. Every set contains the empty subset (the collection with no elements in it), and the original set itself (the collection of all elements of the original set). Furthermore, we assume you are familiar with basic operations like the union and intersection of subsets of a set.
Given two sets S and T we can define the cartesian product S × T of S and T as the set whose elements are all the ordered pairs (s, t) where s is in S and t is in T. For example, in the last chapter, we took the cartesian product of the set of real numbers with itself to be the set of points in the plane.
By a relation between the sets S and T, we mean simply any subset of the cartesian product S × T.
This may seem like a strange definition. But it is really what we mean by a relation. For example, the property of two real numbers a and b that a be less than b is a relation. It is a relation between the set of real numbers and itself, and it is completely determined by specifying the set of all pairs of numbers (a, b) such that a < b. As another example, we have the relation of t being the mother of s. Let S be the set of all people and T be the set of all women. Then this relation is simply the set of all pairs (s, t) in S × T where t is the mother of s.
A function from the set S to the set T is simply a relation R with the property that for every s in S there is exactly one element in R with first coordinate s. The set S is called the domain of the function and the set T is called the range space of the function. We often indicate that R is a function with domain S and range space T by using the notation R : S → T. The set of all elements t in T for which there is at least one pair in R with second coordinate t is called the range of the function. We write R(a) = b if (a, b) is in R.
Warning: Many textbooks define a function f from S to T as a rule which assigns to each element s in S exactly one element t in T. They then say that the set of all points (s, f(s)) is called the graph of the function. Although this approach corresponds more closely to one's intuitive notion of a function, we have avoided this approach because it is not clear what one means by a rule.
In the case where the domain and range space are both the set R of real numbers, we see that the function is the set of points in what we would have called the graph of the function. The condition that there be exactly one point in the graph with a given first coordinate amounts to saying that every vertical line intersects the graph in exactly one point. For this reason, the condition is often referred to as the vertical line test.
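As a concrete illustration of the definition above (a function as a set of ordered pairs satisfying the vertical line test), here is a small Python sketch; the helper names are ours and not part of the text.

```python
def is_function(relation, domain):
    """A relation (set of pairs) is a function on `domain` if every element
    of the domain appears as a first coordinate exactly once."""
    return all(sum(1 for (s, _) in relation if s == x) == 1 for x in domain)

def apply(relation, x):
    """Evaluate the function at x, i.e. find the unique pair with first coordinate x."""
    for (s, t) in relation:
        if s == x:
            return t
    raise ValueError("x is not in the domain")

squares = {(1, 1), (2, 4), (3, 9)}             # a function on {1, 2, 3}
sideways = {(1, 1), (1, -1), (4, 2), (4, -2)}  # fails the vertical line test

print(is_function(squares, {1, 2, 3}))    # True
print(is_function(sideways, {1, 4}))      # False
print(apply(squares, 3))                  # 9
```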
A binary operator on a set S is a function from S × S to S. A unary operator on a set S is a function from S to S.
Example 2: Addition and Multiplication on the real numbers are both binary operators. Negation is a typical unary operator. Binary operators are usually written in infix notation like 5 + 7 rather than in functional notation like +(5,7). Similarly, unary operators are usually written in prefix notation like -3 rather than functional notation like -(3).
The collection of elements out of which we will be making algebraic expressions will be referred to as a field. More precisely, a field is a set endowed with two binary operators which satisfy some simple algebraic properties:
Definition 1: A field F is a structure consisting of a set S and two binary operators called addition and multiplication which satisfy the following properties: both operations are commutative and associative; multiplication distributes over addition; there are identity elements 0 for addition and 1 for multiplication with 0 ≠ 1; every element has an additive inverse; and every non-zero element has a multiplicative inverse.
Example 3: The set of rational numbers with the usual addition and multiplication operators is a field. The same is true of the set of real numbers with the usual addition and multiplication operators. The set of complex numbers (yet to be defined) will also be a field.
Example 4: Consider the set of expressions of the form p(x)/q(x) where p and q are polynomials with real coefficients and q is non-zero. We say that p_1(x)/q_1(x) = p_2(x)/q_2(x) if p_1(x)q_2(x) = p_2(x)q_1(x) (where the multiplication is the usual multiplication of polynomials). Addition and multiplication of these expressions are defined in the usual manner. One can show that these expressions form a field called the field of rational functions.
Warning: We are treating rational functions here as simply expressions, not as functions. In particular, the definition of equality does not correspond to equality of functions.
Example 5: There is basically only one way to make a field with only two elements 0 and 1. See if you can make up the appropriate addition and multiplication table and verify the field properties.
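As a rough sketch of what Example 5 is asking for, the tables below give the usual choice (addition behaves like XOR, multiplication like AND); the brute-force check of a few field properties is our own illustration, not part of the text.

```python
from itertools import product

F2 = [0, 1]
add = lambda a, b: (a + b) % 2   # addition table: 0+0=0, 0+1=1, 1+0=1, 1+1=0
mul = lambda a, b: (a * b) % 2   # multiplication table: only 1*1 = 1

# Spot-check commutativity, associativity, and distributivity over all triples.
for a, b, c in product(F2, repeat=3):
    assert add(a, b) == add(b, a) and mul(a, b) == mul(b, a)
    assert add(add(a, b), c) == add(a, add(b, c))
    assert mul(mul(a, b), c) == mul(a, mul(b, c))
    assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))

# Identities and inverses: 0 is the additive identity, 1 the multiplicative one;
# each element is its own additive inverse, and 1 is its own multiplicative inverse.
print("all field spot-checks passed")
```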
The definition of a field includes only the most basic algebraic properties of addition and multiplication. We will see, however, that all the usual rules for manipulating algebraic expressions are consequences of these basic properties. First, let us begin by noting that the definition of a field only assumes the existence of identities and inverses. In fact, it follows that they are unique:
Proposition 1: If F is a field, then the identity elements 0 and 1 as well as the additive and multiplicative inverses are unique.
Proof: Suppose that 0 and 0' are additive identities. Then, since 0 is an identity, we have 0' + 0 = 0'. Similarly, since 0' is an identity, we have 0 + 0' = 0. Since addition is commutative, we conclude that 0 = 0'.
Let a be in the field. Suppose that both b and c are additive inverses of a. Then a + b = 0 and a + c = 0. We can now calculate b = b + 0 = b + (a + c) = (b + a) + c = (a + b) + c = 0 + c = c + 0 = c. (Please be sure that you understand why each of the steps of this calculation is true.)
To complete the proof, you should make similar arguments for multiplicative identities and inverses.
Proposition 2: Let F be a field. If a is in F, then a · 0 = 0. If a and b are in F and satisfy ab = 0, then either a or b is zero.
Proof: Let a be in F. Then a · 0 = a(0 + 0) = a · 0 + a · 0. Let b be the additive inverse of a · 0. Then adding b to both sides of the last equation, we get 0 = a · 0.
Suppose ab = 0. If a is zero, there is nothing more to prove. On the other hand, if a ≠ 0, then a has a multiplicative inverse c and so b = 1 · b = (ca)b = c(ab) = c · 0 = 0.
Corollary 1: If a is an element of a field F, then -a = (-1)a.
We can define the binary subtraction operator: a - b = a + (-b) and, for b ≠ 0, the binary division operator a ÷ b = a · b^(-1). The division operator will also be expressed as the fraction a/b.
Proposition 3: Let F be a field containing a, b, c, and d where b and d are non-zero. Then
Proof: (i) Do this as an exercise - it is a matter of simplifying using the commutative and associativity properties of multiplication to see that the product is equal to 1.
(ii) Using associativity and commutativity, one can show that . By assertion (i), this is the same as .
(iv) One has
where some of the steps have been combined.
(v) Using property ii, it is easy to see that . But then assertion v follows from assertion ii. This completes the proof of the proposition.
The remaining rules in section 1.1.1 on simplifying expressions are now easy to verify, i.e. they are properties of any field. The material in sections 1.1.2 and 1.1.3 on solving linear equations or systems of linear equations also consists of properties of fields. On the other hand, the material on solving quadratics does not hold for arbitrary fields, both because it uses the order relation of the real numbers and because it relies on the existence of square roots.
Definition 2: An ordered field F is a field (i.e. a set with addition and multiplication satisfying the conditions of Definition 1) with a binary relation < which satisfies: i. (Trichotomy) for all a and b in F, exactly one of a < b, a = b, and b < a holds; ii. (Transitivity) if a < b and b < c, then a < c; iii. if a < b, then a + c < b + c for every c in F; iv. if a < b and 0 < c, then ac < bc.
Fact: If F is an ordered field, then 0 < 1.
Proof: By Definition 1, 0 ≠ 1, and so by trichotomy, if the fact were wrong, then we would have a field F with 1 < 0. By property iii, we would have 1 + (-1) < 0 + (-1) and so 0 < -1. But then using property iv, we would have 0 · (-1) < (-1) · (-1). By Proposition 2, the left side is 0, and by Corollary 1 the right side is 1, so 0 < 1. This contradicts trichotomy and so the assertion must be true.
If F is an ordered field, an element a in F is called positive if 0 < a.
Proposition 4: The set P of positive elements in an ordered field F satisfies: i. for every a in F, exactly one of the following holds: -a is in P, a = 0, or a is in P; ii. if a and b are in P, then a + b is in P; iii. if a and b are in P, then ab is in P.
Proof: (i) By property i of Definition 2, exactly one of a < 0, a = 0, and 0 < a must be true. If a < 0, then by property iii of Definition 2, we have a + (-a) < 0 + (-a) and so 0 < -a. Conversely, if 0 < -a, adding a to both sides gives a < 0. So the three conditions are the same as -a is in P, a = 0, and a is in P.
Remarks: i. In one of the exercises, you will show that, if a field has a set P of elements which satisfy the conditions of Proposition 4, then the field is an ordered field assuming that one defines a < b if and only if b - a is in P.
ii. An element a of an ordered field F is said to be negative if and only if a < 0.
iii. It is convenient to use the other standard order relations. They can all be defined in terms of <. For example, we define a > b to mean b < a. Also, we define a ≤ b to mean either a < b or a = b, and similarly for a ≥ b.
iv. The absolute value function is defined in the usual way: |a| = a if a ≥ 0, and |a| = -a if a < 0.
v. You now have everything you need to deal with inequalities as we did back in section 1.1.4.
Proposition 5: Let a and b be elements of an ordered field F. Then: i. |-a| = |a|; ii. -|a| ≤ a ≤ |a|; iii. |a + b| ≤ |a| + |b|.
Proof: i. By Trichotomy, we can treat three cases: a > 0, a = 0, and a < 0. If a > 0, then -a < 0 and so |a| = a and so |-a| = -(-a) = a. If a = 0, then -a = 0 and so |a| = 0 = |-a|. If a < 0, then -a > 0 and so |a| = -a and |-a| = -a. In all three cases, we have |a| = |-a|.
ii. Again, we can treat three cases: If a > 0 or a = 0, then |a| = a and so a ≤ |a|. If a < 0, then adding -a to both sides gives 0 < -a and so a < -a by transitivity. In this case we have |a| = -a and so a < |a|.
We could argue the other inequality the same way, but notice that we could also use our result replacing a with -a. (Since it holds for all a in F, it holds for -a.) The result says -a ≤ |-a| = |a|, where we have used assertion i. Adding a - |a| to both sides of the inequality gives the desired inequality -|a| ≤ a.
iii. Once again, do this by considering cases: If a + b ≥ 0, then |a + b| = a + b. Since a ≤ |a| and b ≤ |b|, we can add b to both sides of the first inequality and |a| to both sides of the second one to get a + b ≤ |a| + b and |a| + b ≤ |a| + |b|. Using transitivity, we get |a + b| = a + b ≤ |a| + |b| as desired.
Now suppose that a + b < 0. Then adding -a - b to both sides of the inequality gives 0 < (-a) + (-b). Applying the result of the last paragraph, we get |(-a) + (-b)| ≤ |-a| + |-b|. But a + b < 0 means that |a + b| = -(a + b) = (-a) + (-b) = |(-a) + (-b)| ≤ |-a| + |-b| = |a| + |b|, where we have used assertion i for the last step.
This is an example of how to deal with a more complicated inequality, the Cauchy-Schwartz inequality: for all a, b, c, and d in an ordered field, (ac + bd)^2 ≤ (a^2 + b^2)(c^2 + d^2). It is not clear how to begin. In such a case, it is often useful to simply work with the result trying to transform it into something easier to prove. So, suppose the result were true. We could then expand out the expression using the distributive law to get: a^2c^2 + 2abcd + b^2d^2 ≤ a^2c^2 + a^2d^2 + b^2c^2 + b^2d^2,
which simplifies to 2abcd ≤ a^2d^2 + b^2c^2. This still looks complicated until one thinks of grouping the factors differently to get 2(ad)(bc) ≤ (ad)^2 + (bc)^2. Subtracting the left side from both sides of the inequality gives 0 ≤ (ad)^2 - 2(ad)(bc) + (bc)^2. Now, you may recognize the right side as being a perfect square: factoring we get 0 ≤ (ad - bc)^2. It is still complicated, but do you expect that it is true? The right hand side is the square of a complicated expression -- and the assertion is that it is non-negative. We have not yet proved this, but it is a common property of real numbers, so you might make the
Conjecture: If a is any element of an ordered field, then 0 ≤ a^2.
The conjecture looks like it should be simple enough to prove. In fact, you should go ahead and try to prove it by considering cases as we have done before. Once it is proved, is the Cauchy-Schwartz Inequality proved? No. We had assumed that it was true and shown that it is a result which would be true provided the conjecture were true. That is no proof of the inequality. But all is not lost; the idea is that we might be able to trace back through our steps in reverse order and reach the desired inequality.
Assuming that the conjecture is true, let's see that the Cauchy-Schwartz inequality must also be true. We know that the square of any field element is non-negative. So, applying this to the element ad - bc, we get 0 ≤ (ad - bc)^2. Using the distributive law to expand this gives 0 ≤ (ad)^2 - 2(ad)(bc) + (bc)^2. Adding 2(ad)(bc) to both sides of the inequality yields 2(ad)(bc) ≤ (ad)^2 + (bc)^2. Using the commutative and associative laws several times allows us to re-arrange this into 2abcd ≤ a^2d^2 + b^2c^2. Adding appropriate terms to both sides and again using associativity and commutativity takes us back to the step: a^2c^2 + 2abcd + b^2d^2 ≤ a^2c^2 + a^2d^2 + b^2c^2 + b^2d^2.
Finally, we can factor the sides to get the Cauchy-Schwartz Inequality (ac + bd)^2 ≤ (a^2 + b^2)(c^2 + d^2).
Only later will you see that the Cauchy-Schwartz inequality is useful. But already there is a lesson here. If we had just presented the last paragraph as a proof, you would have no idea why one was doing each of the steps -- you would know that the inequality was true, but have no intuitive grasp of why or how one ever came up with the proof. In this case, the idea is that the proof is obtained by non-deductive means -- we simply worked backward from the result we wanted to prove until we got to an assertion we could prove. We proved the assertion and then used it to work forward to the desired result. When reading any proof you should always be asking yourself whether or not the proof is something you could have come up with yourself. If not, then you need to work more with the material until you hopefully will understand enough to be able to do it yourself.
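The whole derivation above rests on the algebraic identity (a^2 + b^2)(c^2 + d^2) - (ac + bd)^2 = (ad - bc)^2. As a quick sanity check (our own, not part of the text), one can have a computer algebra system expand both sides; this sketch assumes the sympy package is available.

```python
from sympy import symbols, expand

a, b, c, d = symbols("a b c d")

lhs = (a**2 + b**2) * (c**2 + d**2) - (a*c + b*d)**2
rhs = (a*d - b*c)**2

# expand() puts both expressions into a canonical expanded form,
# so equality can be checked directly.
print(expand(lhs) == expand(rhs))   # True

# Since (ad - bc)^2 is a square, it is non-negative, which is exactly
# the conjecture used to finish the Cauchy-Schwartz argument.
```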
The so-called natural numbers (Are the others un-natural?) are the numbers 1, 2, 3, etc. But expressing this is a bit complicated. Assume for the whole section that we are operating within a particular ordered field F. The set S we want to describe satisfies the conditions: (1) 1 is in S, and (2) whenever a is in S, so is a + 1. A subset S of F satisfying these two conditions is called an inductive set.
Now consider the set T of all inductive sets S. For example, the set F is in T as well as the set of all positive elements of F. The set of natural numbers would appear to be contained in any of the sets in T. So, one way to define the set we want is
Definition 3: The set of natural numbers is the intersection of all inductive sets, i.e. a is a natural number provided that it is an element of every set S in T.
Proposition 6: (Induction) The set N of natural numbers satisfies the following conditions: i. every natural number is an element of F; ii. 1 is in N; iii. if a is in N, then a + 1 is in N; iv. if S is an inductive subset of N, then S = N.
Proof: i. Let a be in N. Then a is an element of every set S in T. Since F itself is in T, a is an element of F.
ii. Since 1 is in every set S which is in T, 1 is in their intersection which is, by definition, N.
iii. Let a be in N. If S is in T, then a must also be in S. But then a + 1 is also in S. Since this is true for every S in T, a + 1 is in the intersection of all the S in T, i.e. a + 1 is in N.
iv. Suppose S is an inductive subset of N. Then S is in T. Since N is the intersection of all the sets in T, it follows that N is contained in S. But then we have S ⊆ N and N ⊆ S, which means that the two sets must be equal. This completes the proof.
The importance of Proposition 6 is that it is the basis of a method of definition and of proof called mathematical induction which we will normally refer to simply as induction.
First, let's see how it works for definitions. Let's do an inductive definition of powers. Let a be in F. We define a^1 to be a. Now, suppose we have already defined a^k for some natural number k; then define a^(k+1) to be the product a^k · a. Consider the set S of all natural numbers k for which we have defined a^k. It contains 1 and if k is in S, so is k + 1. By Proposition 6, it follows that a^k is defined for all natural numbers k.
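The inductive definition of powers translates directly into a recursive function. The following Python sketch (our own illustration) mirrors the two clauses of the definition: the base case a^1 = a and the inductive step a^(k+1) = a^k · a.

```python
def power(a, k):
    """Compute a^k for a natural number k, following the inductive definition."""
    if k == 1:                    # base case of the induction: a^1 = a
        return a
    return power(a, k - 1) * a    # inductive step: a^k = a^(k-1) * a

print(power(2, 10))   # 1024
print(power(1.5, 3))  # 3.375
```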
We can also use induction in proofs. Here is the general scheme: Suppose that for every natural number k, we have a property P(k). Assume furthermore that: (1) P(1) is true, and (2) whenever P(k) is true, so is P(k + 1). Then P(k) is true for every natural number k. (Indeed, the set S of natural numbers k for which P(k) is true is an inductive subset of N, and so S = N by Proposition 6.)
Example 7: Let's prove that whenever m and n are natural numbers. We use induction with the property P(k) being the condition on k that for all natural numbers m, we have .
Suppose we wanted to show the same result for all integers n which are greater than or equal to zero. There are two possibilities: either show the special case of n = 0 separately, or define P(k) to be the property which we were calling P(k - 1); either approach is equally valid.
The set of integers is defined to be the set of all elements in F which are either natural numbers, 0, or whose negative is a natural number. If P(k) is defined for all integers k, then you can sometimes prove that P(k) is true for all integers k by using two induction proofs, the first showing that it is true for all non-negative integers and the second showing that it is true for all negative integers.
Warning: From the exercises, you will see that proof by induction is an extremely powerful tool. If one is dealing with inductively defined quantities like positive integer powers of a number, then induction is both natural and leads to a good understanding. On the other hand, it is often the case that even though you can prove things by induction, you are left with the feeling that you still do not have any intuitive understanding of why the result should be true. So, whereas induction may lead to a quick and easy proof, the result can be less than fully satisfying.
Lemma 1: 1 ≤ b for every natural number b.
Proof: Let P(k) be the property that 1 ≤ k. Clearly P(1) is true. If for some natural number k, we have P(k) true, then 1 ≤ k. Since we have already seen that 1 is positive, we have 0 < 1. Adding k to each side, we get k = k + 0 < k + 1. By transitivity, it follows that 1 ≤ k + 1, and so P(k+1) is true. Therefore, P(k) is true for all natural numbers k.
Lemma 2: If k is a natural number there is no natural number m with k < m < k + 1.
Proof: Suppose that there are natural numbers n and m with n < m < n + 1. Let S be the set of all natural numbers except m. Then S is a subset of the set of natural numbers and 1 is in S. (If 1 were not in S, then we would have to have 1 = m because m is the only natural number not in S; but then n < m = 1 contrary to Lemma 1.) Further, if k is any natural number in S such that k + 1 is not in S, then k + 1 = m since m is the only natural number not in S. But then n < m = k + 1 < n + 1 implies n - 1 < k < n. So there is a natural number lying strictly between n - 1 and n.
Now let P(k) be the property that there be no natural number m lying strictly between k and k + 1. Proceeding by induction, let us note that P(1) is true. If not, then letting n = 1 in the last paragraph, we see that 0 = n - 1 < m < n = 1. This contradicts Lemma 1 since m is a natural number.
Now suppose that P(k) is true but P(k + 1) is false. So there is a natural number m with k + 1 < m < k + 2. But the result of the first paragraph of the proof with n = k + 1 then shows that there is a natural number lying strictly between k and k + 1. But this contradicts the assumption that P(k) is true. We conclude therefore that if P(k) is true, then so is P(k + 1). By induction, it follows that P(k) is true for all natural numbers k, and so Lemma 2 is proved.
Proposition 7: (Descent) Every non-empty set S of natural numbers contains a smallest element, i.e. there is an a in S such that a ≤ b for all b in S.
Proof: Suppose S is a non-empty set of natural numbers that does not have a smallest element. Let S' be the set of all natural numbers smaller than all elements of S. The element 1 must be in S' because otherwise 1 would be the smallest element of S by Lemma 1. Let k be any natural number in S' such that k + 1 is not in S'. Since k + 1 is not in S', there must be a natural number n in S for which n ≤ k + 1. Now n must be greater than k since k is in S'. We cannot have k < n < k + 1 by Lemma 2, and so n = k + 1.
We will show that n is the smallest element in S. For suppose m is any element of S. We have k < m because k is in S'. We cannot have k < m < k + 1 = n by Lemma 2. So we have n = k + 1 ≤ m. So n is the smallest element of S. Since we assumed that S had no smallest element, we have a contradiction. This proves Proposition 7.
The value of Proposition 7 is that it is the basis for another proof technique called infinite descent which is another variant on induction. Here is an example.
Example 8: Prove that for every a in F and for every natural number n and m, we have: .
Let P(k) be the property that the assertion holds for all a and m when n = k. If P(k) were not true for all natural numbers k, then the set S of natural numbers for which it is false would be non-empty. By Proposition 7, there is a smallest natural number k for which P(k) is false. Now, k cannot be 1, since the assertion is immediate in that case. So k - 1 must also be a natural number and P(k - 1) must be true (otherwise k would not be the smallest element of S). But then the assertion also holds for k, contrary to assumption.
This entire section will deal with natural numbers.
Definition 4: A natural number d is a divisor of the natural number n if and only if there is a natural number m such that n = dm. A divisor d of n is said to be proper if it is other than 1 and n itself. A natural number p is said to be prime if it is not equal to 1 and it has no proper divisors.
For example, 4 is a divisor of 20 because 20 = 4 · 5. The prime numbers are 2, 3, 5, 7, 11, etc.
Let us define by induction the product of n numbers. A product of 1 number is defined to be itself. Assuming that we have defined the product of k numbers, a product of k + 1 numbers is the product of the product of the first k of the numbers and the last number. For convenience, we also stipulate that the product of zero numbers is 1. If a_i for i = 1, ..., n are n numbers, then the product of these n numbers is denoted a_1 · a_2 ··· a_n. Of course, if all the numbers are equal to a, we can still denote the product of a with itself n times as a^n. (One defines the sum of n numbers in an analogous manner; the notation for the sum of the n numbers a_i for i = 1, ..., n is a_1 + a_2 + ··· + a_n.)
Proposition 8: i. Every divisor d of a natural number n satisfies 1 ≤ d ≤ n. Every proper divisor satisfies 1 < d < n.
ii. Every natural number can be written as a product of finitely many primes.
Proof: i. Suppose dm = n. Then 1 ≤ m and so d ≤ dm = n. We also have 1 ≤ d since d is a natural number. So, 1 ≤ d ≤ n. In particular, if d is a proper divisor of n, then 1 < d < n.
ii. We will prove the second assertion by infinite descent. If the assertion is false, then there is a smallest natural number n which is not expressible a product of primes. Then n is not a prime or else it would be the product of a single prime. Since n is not a prime, we can write n = dm where d is a proper factor of n. But then m is also a proper factor of n. (Why?) In particular, both d and m are natural numbers smaller than n. Since n was the smallest number not expressible as a product of primes, both d and m can be expressed as a product of primes. But then by taking the product of all the factors in the expressions of both d and m, we see that n is a product of primes also. This contradiction proves the result.
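The descent argument above also suggests a procedure: keep splitting off the smallest divisor greater than 1, which is necessarily prime. The Python sketch below (our own illustration, not from the text) does exactly that by trial division.

```python
def prime_factors(n):
    """Return the prime factorization of a natural number n as a list of primes."""
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:        # d is the smallest divisor > 1, hence prime
            factors.append(d)
            n //= d
        d += 1
        if d * d > n and n > 1:  # no divisor up to sqrt(n): what is left is prime
            factors.append(n)
            break
    return factors

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
print(prime_factors(97))    # [97]
```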
Proposition 9: (Division Theorem) If n and d are natural numbers, then there are unique non-negative integers m and r such that n = md + r and 0 ≤ r < d.
Proof: Let us prove the result by induction on n. If n is 1 and d is also 1, then m = 1 and r = 0 is the unique solution. On the other hand, if n = 1 and d > 1, then m = 0 and r = 1 is the unique solution.
Now, suppose that the result is true for some natural number n. Then n = md + r with 0 ≤ r < d. If r = d - 1, then n + 1 = (m+1)d + 0. Otherwise, n + 1 = md + (r + 1). To show uniqueness, suppose that n = m_1 d + r_1 = m_2 d + r_2 with 0 ≤ r_i < d for i = 1 and 2. Taking differences, we see that (m_1 - m_2)d = r_2 - r_1, where we have ordered the terms so that the right hand side is non-negative (If it were not, just swap the subscripts). But then the left side is also non-negative. So, we must have 0 ≤ (m_1 - m_2)d = r_2 - r_1 < d. Since the left side is a multiple of d, Proposition 8 implies that it must be zero. So m_1 = m_2. But then, the right side is also zero and so r_1 = r_2 too.
If m and n are natural numbers, then a natural number d is called a common divisor of m and n if it is both a divisor of m and also a divisor of n. The largest common divisor of m and n is called the greatest common divisor of m and n and is denoted gcd(m, n).
Proposition 9 will allow us to develop the Euclidean Algorithm for calculating gcd(m, n). Repeatedly apply Proposition 9 to obtain m = q_1 n + r_1 with 0 ≤ r_1 < n, then n = q_2 r_1 + r_2 with 0 ≤ r_2 < r_1, then r_1 = q_3 r_2 + r_3 with 0 ≤ r_3 < r_2, and so on, finally reaching r_(k-2) = q_k r_(k-1) + r_k and then r_(k-1) = q_(k+1) r_k + 0,
where one stops with the remainder r_(k+1) = 0 as soon as one obtains the first zero remainder.
If d is a common divisor of m and n, then d also divides r_1 by the first equation. But then, the second equation shows that d divides r_2, and so on. We finally determine that d divides all the remainders. In particular, it divides the last non-zero remainder r_k. On the other hand, starting from the last equation, we see that r_k divides r_(k-1). The second from the last equation then says that it also divides r_(k-2), and so on. We conclude that r_k divides both n and m. We conclude therefore that r_k = gcd(m, n).
Again starting from the second from the last equation, we can solve to get r_k = r_(k-2) - q_k r_(k-1). Using the previous equation we can solve for r_(k-1) and substitute the expression into the right hand side of this last equation to express r_k as a linear combination of r_(k-2) and r_(k-3). Repeating the process, we eventually get the greatest common divisor written as a linear combination of m and n. Summarizing the results:
Proposition 10: (Euclidean Algorithm) If m and n are natural numbers, the Euclidean algorithm described above calculates the greatest common divisor of m and n. Furthermore, it allows one to find integers a and b such that gcd(m, n) = am + bn.
We say that two natural numbers m and n are relatively prime if gcd(m,n) = 1.
Corollary 2: If two natural numbers m and n are relatively prime, then there are integers a and b with 1 = am + bn. In particular, if p is a prime and n is not a multiple of p, then p and n are relatively prime and so there are integers a and b with 1 = ap + bn.
Corollary 3: If a prime p divides a product mn of natural numbers, then either p divides m or p divides n.
Proof: If it divides neither, then there are integers a, b, c, and d such that 1 = ap + bm and 1 = cp + dn. Taking products, we get 1 = 1 · 1 = (ap + bm)(cp + dn) = (acp + adn + bmc)p + bd(mn). Since p divides mn, we have mn = ep for some e. Substituting, we see that p divides the right side and so p must divide the left side. But the left side is 1, which is a contradiction. So, it must be that p divides m or p divides n.
Corollary 4: If a prime p divides a product of any finite number of natural numbers, then p divides at least one of the numbers.
Proof: This follows by induction from Corollary 3.
Corollary 5: (Linear Diophantine Equations) The equation ax + by = c where a, b, and c are constants has integer solutions (x, y) if and only if gcd(a,b) divides c.
Proof: If a or b is zero, then the result is obvious. If not, then we can assume that a and b are natural numbers (by replacing one or both of x and y with their negatives). If gcd(a,b) divides c, then the Euclidean Algorithm gives integers u and v with au + bv = gcd(a,b). Multiplying by d = c/gcd(a,b) shows that x = ud and y = vd are solutions of the equation. On the other hand, if there is a solution, then clearly gcd(a,b) divides ax + by = c.
Example 9: Let's calculate the gcd(310, 464). One has 464 = 1 · 310 + 154, 310 = 2 · 154 + 2, and 154 = 77 · 2 + 0. We conclude that gcd(310, 464) = 2. Furthermore, we have 2 = 310 - 2 · 154 and 154 = 464 - 310. Substituting gives 2 = 310 - 2 · (464 - 310) = 3 · 310 - 2 · 464.
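Here is a small Python sketch (our own, not part of the text) of the Euclidean Algorithm together with the back-substitution step of Proposition 10; it reproduces the numbers of Example 9.

```python
def extended_gcd(m, n):
    """Return (g, a, b) with g = gcd(m, n) and g = a*m + b*n."""
    if n == 0:
        return m, 1, 0
    q, r = divmod(m, n)            # m = q*n + r, the division theorem
    g, a, b = extended_gcd(n, r)   # g = a*n + b*r
    return g, b, a - q * b         # substitute r = m - q*n to get g = b*m + (a - q*b)*n

g, a, b = extended_gcd(310, 464)
print(g, a, b)                     # 2 3 -2, i.e. 2 = 3*310 - 2*464
assert g == a * 310 + b * 464
```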
Theorem 1:(Fundamental Theorem of Arithmetic) Every natural number can be represented as a product of zero or more primes. Furthermore, these primes are uniquely determined (including the number of times each prime is repeated) up to order of the factors.
Proof: We already know that at least one representation exists. If the result is false, then let n be the smallest natural number for which uniqueness is false. Suppose one can write two distinct representations n = p_1 p_2 ··· p_r = q_1 q_2 ··· q_s where the factors are all prime. There cannot be a prime p which appears in both factorizations; if there were, then n/p would be a smaller natural number with two distinct representations as products of primes. But, clearly p_1 is a factor of n. Corollary 4 implies that p_1 must divide some q_j. Since q_j is a prime, it follows that p_1 = q_j, as primes have no proper divisors. This contradiction shows that there is no natural number n for which the factorization is non-unique. This completes the proof.
Corollary 6: (Euclid) There are infinitely many prime numbers.
Proof: Suppose that the only prime numbers were p_1, p_2, ..., p_k. Let n = p_1 p_2 ··· p_k + 1. Clearly n is a natural number not divisible by any of the p_i, contrary to the Fundamental Theorem.
Corollary 7: √2 is irrational.
Proof: Suppose that √2 = m/n where m and n are positive integers. By dividing m and n by their greatest common divisor, we may assume that they are relatively prime. Squaring both sides of the equation and multiplying through by the denominator, we get 2n^2 = m^2. Since 2 divides the left side, 2 divides m^2 and, since 2 is a prime, we have 2 divides m, say m = 2k. Dividing our equation by 2, we get n^2 = 2k^2. So 2 divides the right side and hence n^2 is divisible by 2. Again, this means that 2 is a divisor of n. But this contradicts the assumption that m and n are relatively prime. So, our original assumption that √2 is rational must have been false. This completes the proof.
All of the material in the previous sections of this chapter applies to any ordered field. In particular, it applies both to the field of rational numbers and to the field of real numbers. The problem with working in the field of rational numbers is that it is relatively sparse; so, when you go to solve equations of degree greater than one, you often find that what would have been a solution is not rational. We have already seen that √2 is not a rational number. It will be convenient to have an algebraic domain in which every polynomial equation has a solution. We will find that the complex numbers fill this role; the real number field will be useful both to construct the complex numbers and in and of itself.
Exactly what makes the real numbers special is a rather subtle matter. This section will start that explanation, and the version given here will suffice until we can revisit the question in a later chapter.
In section 1.2.2, we said that real numbers could be represented as infinite decimals. This is the aspect of real numbers that we will discuss in this section.
First let us start with a finite decimal. This is a numeral of the form 3.14159. The form of a finite decimal is an optional sign (either plus or minus) followed by a string of decimal digits, a period, and another string of decimal digits. Each of these represents a rational number: If there are n digits to the right of the period (called the decimal point), then the rational number is the quotient of the decimal (with the period removed) divided by 10^n. For example, we have 3.14159 = 314159/100000.
Because the denominator is always a power of 10, many rational numbers cannot be represented as a finite decimal. For example, 1/3 cannot be written as a finite decimal. On the other hand, we can write arbitrarily good approximations: 0.3, 0.33, 0.333, 0.3333, etc. of 1/3. So, it is reasonable to say that if we just allowed ourselves to keep writing digits, we would get 1/3. You might write this as 0.333333.... where the ellipsis means to keep repeating the pattern. Another example would be 1/7 = 0.142857142857142857.... These are examples of infinite decimals.
But in what sense do these represent the rational number? To get a better idea, let's look again at how we convert a finite decimal to a fraction. If the decimal is a_0.a_1a_2...a_n, then the part a_0 is an integer, the digit a_1 means a_1/10, the digit a_2 is in the next place and it represents a_2/100, and so on. So our whole decimal becomes a_0 + a_1/10 + a_2/10^2 + ··· + a_n/10^n.
Let's go back to our specific examples. We have a succession of improving estimates: 0.3, 0.33, 0.333, ..., 0.333...3,
where there are n digits 3 in the last approximation. Our last one is then 3/10 + 3/10^2 + ··· + 3/10^n = (3/10)(1 + 1/10 + ··· + 1/10^(n-1)).
It is still hard to tell what value we are approaching as n gets larger and larger. Here is the secret to calculating the sum:
Lemma 3: (Geometric Series) If a ≠ 1, then 1 + a + a^2 + ··· + a^(n-1) = (1 - a^n)/(1 - a).
Proof: This is actually a familiar factorization, 1 - a^n = (1 - a)(1 + a + ··· + a^(n-1)), of which the first few cases are: 1 - a^2 = (1 - a)(1 + a) and 1 - a^3 = (1 - a)(1 + a + a^2). To understand how this works, just multiply the left side of our general expression by (1 - a). By the distributive law, this is the same as the original expression less the product of a and the original expression, i.e. (1 + a + ··· + a^(n-1)) - (a + a^2 + ··· + a^n). Notice that almost all the terms are repeated in the second expression. The one left out is 1 and we have one additional one, a^n. So the difference is just 1 - a^n, which shows the result.
Applying Lemma 3 with a = 1/10 gives (3/10)(1 + 1/10 + ··· + 1/10^(n-1)) = (3/10) · (1 - 1/10^n)/(1 - 1/10)
and the right side simplifies to (1/3)(1 - 1/10^n). This is the exact value when there are precisely n digits to the right of the decimal point. Now, what happens when we take more and more digits? The result is always a little less than 1/3, but the error shrinks to zero as n gets arbitrarily large. This is the sense in which we can say that the infinite decimal represents 1/3.
Let's repeat the same computation with our second example. In this case, it is inconvenient to use powers of 10 because the pattern repeats itself every 6 digits. But things are easy if we simply use powers of 1/10^6. The finite decimal with the pattern written down n times is (142857/10^6)(1 + 1/10^6 + ··· + 1/10^(6(n-1))),
where again we have repeated the pattern exactly n times. Lemma 3 says that this is equal to (142857/10^6) · (1 - 1/10^(6n))/(1 - 1/10^6) = (142857/999999)(1 - 1/10^(6n)).
Clearly, as n gets arbitrarily large the right factor approaches 1. Furthermore, if you reduce the fraction, you will see that 142857/999999 = 1/7. Again, we see that the infinite decimal represents 1/7 in the sense that if we take the sequence of numbers we get by taking more and more digits, the limiting value of the elements of the sequence is 1/7.
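A quick numerical check of these two limits, using exact rational arithmetic so the error terms are visible; this sketch with Python's fractions module is our own illustration, not part of the text.

```python
from fractions import Fraction

def truncated_decimal(digit_block, block_len, repetitions):
    """Value of the finite decimal obtained by repeating digit_block that many times."""
    total = Fraction(0)
    for k in range(1, repetitions + 1):
        total += Fraction(digit_block, 10 ** (block_len * k))
    return total

for n in (1, 2, 5):
    print(truncated_decimal(3, 1, n), "error:", Fraction(1, 3) - truncated_decimal(3, 1, n))
    # errors are (1/3)/10^n, shrinking to zero

for n in (1, 2, 3):
    print(truncated_decimal(142857, 6, n), "error:", Fraction(1, 7) - truncated_decimal(142857, 6, n))
    # errors are (1/7)/10^(6n)
```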
Now, let's formalize our discussion.
Definition 5: i. An infinite decimal is an expression of the type ±a_0.a_1a_2a_3..., where a_0 is a non-negative integer, and a_1, a_2, a_3, ... is an infinite sequence of decimal digits (i.e. integers between 0 and 9).
ii. Every such infinite decimal defines a second sequence of finite decimals r_1, r_2, r_3, ..., where r_k = ±a_0.a_1a_2...a_k.
iii. One says that the infinite decimal represents the number r (or has limit r) if |r - r_k| can be made arbitrarily close to zero simply by taking k sufficiently large.
Definition 6: i. An ordered field F is said to be Archimedean if, for every positive a in F, there is a natural number N with a < N.
ii. An Archimedean ordered field F is called the field of real numbers if every infinite decimal has a limit in F.
Given any element a in F, we can form an infinite decimal for a. First, we can assume that a is positive, since the case where a = 0 is trivial, and if a < 0, then we can replace a with -a. Next, we see why we needed to add the Archimedean property to the above definition. Without it, we would not know how to get the integer part of a: Since F is Archimedean, the set of natural numbers N with a < N is non-empty and so it has a smallest element b. Let a_0 = b - 1 and r_0 = a_0. Then 0 ≤ a - r_0 < 1. Choose a_1 to be the decimal digit such that a_1/10 ≤ a - r_0 < (a_1 + 1)/10 and let r_1 = r_0 + a_1/10, so that again 0 ≤ a - r_1 < 1/10. Assuming that we have already defined the digits a_1, ..., a_k and the finite decimal r_k for some natural number k, with 0 ≤ a - r_k < 1/10^k, define by induction the digit a_(k+1) so that a_(k+1)/10^(k+1) ≤ a - r_k < (a_(k+1) + 1)/10^(k+1) and let r_(k+1) = r_k + a_(k+1)/10^(k+1), so that 0 ≤ a - r_(k+1) < 1/10^(k+1).
The infinite decimal a_0.a_1a_2a_3... was defined so that 0 ≤ a - r_k < 1/10^k for every k. So this infinite decimal has limit a. We say that this is the infinite decimal expansion of the element a in F.
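The digit-by-digit construction above is easy to carry out mechanically. The following Python sketch (our own illustration) runs it with exact rational arithmetic on a positive rational a, producing the integer part a_0 and the first few digits a_1, a_2, ....

```python
from fractions import Fraction

def decimal_expansion(a, num_digits):
    """Integer part and first num_digits digits of the decimal expansion of a positive rational a."""
    a0 = 0
    while a0 + 1 <= a:            # the smallest natural number b with a < b is a0 + 1
        a0 += 1
    digits = []
    remainder = a - a0            # at the start of step k+1 this equals 10^k * (a - r_k)
    for _ in range(num_digits):
        remainder *= 10           # after scaling, the next digit is the integer part
        digit = int(remainder)
        digits.append(digit)
        remainder -= digit        # 0 <= remainder < 1 holds again
    return a0, digits

print(decimal_expansion(Fraction(1, 7), 8))    # (0, [1, 4, 2, 8, 5, 7, 1, 4])
print(decimal_expansion(Fraction(22, 7), 5))   # (3, [1, 4, 2, 8, 5])
```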
Proposition 11:i. Every element a in F is the limit of the infinite decimal expansion of a.
ii. The decimal expansion of every rational number is a repeating decimal, i.e. except for an initial segment of the decimal, the decimal consists of repetitions of a single string of digits.
iii. Every repeating decimal has limit a rational number.
Proof: The first assertion has already been proved. For the second assertion, note that the sequence of digits from position k + 1 onward is completely determined by the value of b_k = 10^k (a - r_k).
If a = r/s is rational with r and s integers, then b_0 = a - a_0 is a rational number with denominator (a factor of) s. Furthermore, since b_(k+1) = 10 b_k - a_(k+1), if b_k is rational with denominator s, then so is b_(k+1). By induction, it follows that b_k is rational with denominator s for every k. Since b_k lies between 0 and 1 and is rational with denominator s, it follows that there are at most s possible values for b_k.
The following principle is called the pigeonhole principle: If s + 1 objects are assigned values from a set of at most s possible values, then at least two of the objects must be assigned the same value.
By the pigeonhole principle, there are subscripts i and j with i < j such that b_i = b_j. As indicated at the beginning of the proof, it follows that the sequence of digits starting from a_(i+1) must be the same as the sequence of digits starting from a_(j+1), and so the decimal repeats over and over again the cycle of values a_(i+1), a_(i+2), ..., a_j.
The third assertion is easy to prove -- it is essentially the same as our calculation of the limit of the infinite decimal expansions of 1/3 and 1/7. The formalities are left as an exercise.
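For a rational number the pigeonhole argument is effectively long division: the quantities b_k are integer remainders divided by s, so tracking remainders finds the repeating cycle. A small Python sketch of this idea (our own, not from the text):

```python
def repeating_decimal(r, s):
    """Decimal expansion of r/s (with 0 < r < s) as (non-repeating prefix, repeating cycle)."""
    digits = []
    seen = {}                 # remainder -> index at which it first occurred
    remainder = r
    while remainder != 0 and remainder not in seen:
        seen[remainder] = len(digits)
        remainder *= 10
        digits.append(remainder // s)   # next decimal digit
        remainder %= s                  # at most s possible remainders: pigeonhole
    if remainder == 0:
        return digits, []               # terminating decimal
    start = seen[remainder]
    return digits[:start], digits[start:]

print(repeating_decimal(1, 7))    # ([], [1, 4, 2, 8, 5, 7])
print(repeating_decimal(1, 6))    # ([1], [6])   i.e. 0.1666...
print(repeating_decimal(3, 8))    # ([3, 7, 5], [])
```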
Example 10: The field of real numbers contains many numbers which are not rational. All we need to do is choose a non-repeating decimal and it will have as its limit an irrational number. For example, you might take 0.101001000100001..., where at each step one adds another zero before the next 1.
Proposition 12: Every a > 0 in the field of real numbers has a positive n-th root for every natural number n, i.e. there is a real number b with b^n = a.
Proof: It is easy to show by induction that, if 0 < x < y, then x^n < y^n for every natural number n. So the function taking x to x^n is an increasing function of positive numbers. By the Archimedean property, we know that there is a natural number M > a. Again by induction, it is easy to see that a < M ≤ M^n. By descent, it follows that there is a smallest natural number m such that a < m^n. Let a_0 = m - 1, so that the n-th root of a must lie between a_0 and a_0 + 1. Next evaluate (a_0 + j/10)^n for integers j from 0 to 10. The values start from a number no larger than a and increase to a number larger than a. Let a_1 be the largest value of j for which the quantity is at most a. Repeating the process, one can define by induction an infinite decimal a_0.a_1a_2a_3... such that, writing r_k for the corresponding finite decimals, one has r_k^n ≤ a < (r_k + 1/10^k)^n.
Let b be the limit of the infinite decimal, and let r_k be the values of the corresponding finite decimals. Then r_k is close to b and r_k^n is close to a for large k, and so it is reasonable to expect that b^n = a. This is in fact true. Using the identity for geometric series, we see that: b^n - r_k^n = (b - r_k)(b^(n-1) + b^(n-2) r_k + ··· + r_k^(n-1)). But then the triangle inequality gives |b^n - a| ≤ |b^n - r_k^n| + |r_k^n - a| ≤ C |b - r_k| + |r_k^n - a|, where C is a positive constant which does not depend on k. Since this holds for all positive integers k, and both terms on the right can be made as small as we like by taking k large, it follows that b^n = a.
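The proof is constructive: at each stage keep the largest next digit whose power still does not exceed a. Here is a Python sketch of that digit search (our own illustration), again using exact rational arithmetic.

```python
from fractions import Fraction

def nth_root_digits(a, n, num_digits):
    """Approximate the positive n-th root of a rational a > 0, one decimal digit at a time."""
    a = Fraction(a)
    r = Fraction(0)
    while (r + 1) ** n <= a:        # integer part: largest integer whose n-th power is <= a
        r += 1
    for k in range(1, num_digits + 1):
        step = Fraction(1, 10 ** k)
        digit = 0
        while (r + (digit + 1) * step) ** n <= a:   # largest digit keeping r_k^n <= a
            digit += 1
        r += digit * step
    return r

approx = nth_root_digits(2, 2, 6)
print(approx, float(approx))              # 1414213/1000000 1.414213
print(float(nth_root_digits(10, 3, 5)))   # 2.15443
```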
A sequence b_1, b_2, b_3, ... of real numbers is said to converge to a real number b (or to have limit b) if all the b_k are as close to b as desired as long as k is sufficiently large. More formally, this means that for every ε > 0 (regardless of how small), there is a (possibly quite large) N > 0 such that |b_k - b| < ε for all k larger than N. The sequence is said to be bounded above by a real number B if b_k ≤ B for all k.
For example, if ±a_0.a_1a_2a_3... is an infinite decimal with limit r, and r_k = ±a_0.a_1...a_k for each natural number k, then the sequence r_1, r_2, r_3, ... converges to r.
Proposition 13: Let b_1, b_2, b_3, ... be a sequence of real numbers bounded above by a real number B. If the sequence is increasing, i.e. b_k ≤ b_(k+1) for all k, then the sequence converges to some real number b.
Proof: Since each b_i is a real number, it has a decimal expansion; we may assume every b_i is positive (if not, add a sufficiently large constant to every term), so that the expansion of b_i is a_(i,0).a_(i,1)a_(i,2).... Because the sequence is increasing and bounded above, the integer parts a_(i,0) are non-decreasing and bounded, so they are eventually constant; among the terms past that point, the digits a_(i,1) are eventually constant, and so on for each decimal place.
Let b be the infinite decimal built from these eventual digit values. Then |b - b_i| ≤ 1/10^k for all sufficiently large i, because the decimal expansion of all such b_i agrees with that of b up to the k-th decimal digit. So b is the limit of the b_i. This completes the proof.
Remark: A sequence is said to be bounded below by a real number B if all the terms of the sequence are no smaller than B. A decreasing sequence of real numbers which is bounded below converges to a real number. (To see this, simply apply the Proposition to the sequence -b_1, -b_2, -b_3, ....)
Corollary 8: i. Let [a_j, b_j] for j = 1, 2, 3, ... be closed intervals with [a_(j+1), b_(j+1)] contained in [a_j, b_j]. If the lengths b_j - a_j converge to 0, then there is precisely one real number c contained in all the intervals, and both the sequence of the a_j and the sequence of the b_j converge to c.
ii. Let the c_j form a decreasing sequence of positive real numbers which converge to zero. Define s_k = c_1 - c_2 + c_3 - ··· ± c_k for each natural number k. Then the sequence s_k converges to a real number b. (The value b is said to be the limit of the infinite sum c_1 - c_2 + c_3 - c_4 + ··· .)
Proof: i. The sequence of the a_j is increasing and bounded above by every b_i. So, the sequence converges to a real number a which is no larger than any of the b_i. Clearly, a_j ≤ a for all j. Similarly, the sequence of the b_j is decreasing and bounded below by every a_i. So, the sequence converges to a real number b no smaller than any of the a_i, and b ≤ b_j for all j. We cannot have a ≠ b, or else the distance between them would be positive; but this cannot be true because both lie in [a_j, b_j] for all j and the sequence of the lengths b_j - a_j converges to zero. Hence a = b, and this common value c lies in all the intervals and is the limit of both sequences.
ii. This follows from the first assertion using the intervals [s_(2k), s_(2k+1)].
If a is a positive element of any ordered field, we know that a^2 is positive because the set of positive numbers is closed under multiplication. Since we also have 0^2 = 0 and (-a)^2 = a^2, it follows by trichotomy that the square of any element in an ordered field is always non-negative. In particular, such a field cannot contain a solution of the equation x^2 = -1.
We would like to have a field where all polynomial equations have a root. We will define a field C called the field of complex numbers which contains the field of real numbers and which also has a root, denoted i, of the equation x^2 = -1. In a later chapter, it will be shown that, in fact, C contains a root of any polynomial with coefficients in C. This result is called the Fundamental Theorem of Algebra.
Let us first define the field of complex numbers. Since it is a field which contains both the field of real numbers and the element i, it must also contain expressions of the form z = a + bi where a and b are real numbers. Furthermore, there is no choice about how we would add and multiply such quantities if we wanted the field axioms to be satisfied. The operations can only be: (a + bi) + (c + di) = (a + c) + (b + d)i and (a + bi)(c + di) = (ac - bd) + (ad + bc)i,
where we have used the assumption that i^2 = -1.
It is straightforward, but a bit tedious, to show that these operations satisfy all the field axioms. Most of the verification is left to the exercises. But let us at least indicate how we would show that there are multiplicative inverses. Let us proceed heuristically -- we would expect the inverse of a + bi to be expressed as 1/(a + bi), but this does not appear to be of the desired form because there is an i in the denominator. But our formula from geometric series shows how to rewrite it: We have (a + bi)(a - bi) = a^2 - (bi)^2 = a^2 + b^2. This is just what we need: 1/(a + bi) = (a - bi)/((a + bi)(a - bi)) = a/(a^2 + b^2) - (b/(a^2 + b^2)) i.
Of course, we have proven nothing. But we now have a good guess that the multiplicative inverse might be a/(a^2 + b^2) - (b/(a^2 + b^2)) i. It is now an easy matter to check that this does indeed work as a multiplicative inverse.
Proposition 14: The set of all expressions a + bi, where a and b are real and i satisfies i^2 = -1, is a field if we define operations as shown above.
We have already seen that the field of complex numbers cannot be ordered. Nevertheless, we can define an absolute value function by |a + bi| = √(a^2 + b^2).
Proposition 15: Let w and z be complex numbers. Then: i. |wz| = |w| |z|; ii. |w + z| ≤ |w| + |z|; iii. |z| ≥ 0, with |z| = 0 if and only if z = 0.
Proof: These are all left as exercises.
We have defined a 1-1 correspondence between the set of complex numbers and the Euclidean plane of pairs of real numbers. The complex numbers are not just a set; they also have addition and multiplication operators. Our next job is to see how these arithmetic operators correspond to geometric operations in the plane.
The Parallelogram Law. Let's start with addition. If z_j = x_j + i y_j for j = 1, 2, then z_1 + z_2 = (x_1 + x_2) + i (y_1 + y_2). In our 1-1 correspondence we have the four numbers 0, z_1, z_2, z_1 + z_2 corresponding to the four points (0, 0), (x_1, y_1), (x_2, y_2), (x_1 + x_2, y_1 + y_2). Here is a picture of the situation.
From the picture, it certainly looks like the four points are vertices of a parallelogram. Showing that it is true is a simple matter of calculating the slopes of the four sides. For example, the slope of the line containing z_1 and z_1 + z_2 is ((y_1 + y_2) - y_1)/((x_1 + x_2) - x_1) = y_2/x_2,
which is the slope of the line containing O and z_2. (You need to treat the case where x_2 = 0 separately: in this case, the two sides are both vertical.) This result is the so-called parallelogram rule, which may be familiar to you as addition of forces in physics.
Part of multiplication is easy: In Proposition 15, we have already seen the property |wz| = |w| |z|, where the absolute value |z| is clearly just the length of the line segment from O to z (by the distance formula, a.k.a. Pythagorean Theorem). This tells us that the length (another word for absolute value) of the product of two complex numbers is the product of the lengths of the factors.
Besides length, what else is needed to determine the line segment from O to the complex number z? One way to determine it is to use the angle from the positive x-axis to the segment from O to z. This angle is called the argument of z and is denoted arg(z). Note that arg(z) is determined only up to a multiple of 2π radians and that it is not defined at all in case z = 0.
The figure below shows the relationship between z = a + bi, r = |z|, and θ = arg(z).
In particular, we see that: a = r cos(θ), b = r sin(θ), r = √(a^2 + b^2), and tan(θ) = b/a (unless a = 0). Notice that our definition of the trigonometric functions using the unit circle automatically guarantees that all the signs in these formulas are correct regardless of quadrant in which the point z lies. In particular, we have z = r(cos(θ) + i sin(θ)) and, for any positive real number t, tz = (tr)(cos(θ) + i sin(θ)),
which tells us that multiplying a complex number by a positive real number does not change the argument, but just expands the length by that factor. For example, doubling a complex number makes it twice as long but it points in the same direction.
Let's calculate products using the argument function. Let z_j have length r_j and argument θ_j for j = 1, 2 (where we are assuming that neither is zero, since that case is trivial). The product is: z_1 z_2 = r_1(cos(θ_1) + i sin(θ_1)) · r_2(cos(θ_2) + i sin(θ_2)) = r_1 r_2 [(cos(θ_1) cos(θ_2) - sin(θ_1) sin(θ_2)) + i (sin(θ_1) cos(θ_2) + cos(θ_1) sin(θ_2))] = r_1 r_2 (cos(θ_1 + θ_2) + i sin(θ_1 + θ_2)),
where the final simplification used the addition formulas for both the sine and the cosine function. So, the full story on multiplication is that you multiply the lengths of the factors and add the arguments: |z_1 z_2| = |z_1| |z_2| and arg(z_1 z_2) = arg(z_1) + arg(z_2) (up to a multiple of 2π).
In the very special case of natural number powers this says:
Proposition 16: (De Moivre) If a non-zero complex number z = a + bi has length r = |z| and argument θ, and if n is any natural number, then z^n = r^n (cos(nθ) + i sin(nθ)).
Corollary 9: Let n be a natural number. i. Each of the n complex numbers cos(2πk/n) + i sin(2πk/n), for k = 0, 1, ..., n - 1, satisfies z^n = 1. ii. If w is a complex number with w^n = z, then for each number u from part i the product wu also satisfies (wu)^n = z.
The corollary follows immediately from De Moivre's formulas. In fact, these are the only n-th roots, but it is convenient to defer the proof of this until a later chapter. The quantities of the first part of Corollary 9 are called n-th roots of unity. Geometrically, they all lie on the unit circle and are evenly spaced around the circle. The second part of the Corollary says that we can obtain the various n-th roots of any number by simply multiplying any one of them by the various n-th roots of unity.
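A quick numerical illustration of De Moivre's formula and of the n-th roots of unity, using Python's built-in complex numbers and the cmath module (our own sketch, not part of the text):

```python
import cmath
import math

# De Moivre: z^5 computed directly vs. via length and argument.
z = 1 + 2j
r, theta = abs(z), cmath.phase(z)
direct = z ** 5
via_de_moivre = r**5 * complex(math.cos(5 * theta), math.sin(5 * theta))
print(abs(direct - via_de_moivre) < 1e-9)   # True

# The 6th roots of unity: evenly spaced points on the unit circle.
n = 6
roots = [complex(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
         for k in range(n)]
print(all(abs(u**n - 1) < 1e-9 for u in roots))           # True

# Multiplying one 6th root of 64 (namely 2) by each root of unity gives six roots of 64.
print(all(abs((2 * u)**n - 64) < 1e-9 for u in roots))    # True
```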
The ability to do arithmetic operations on the points in the plane makes a number of topics in geometry much simpler.
A geometric figure is a collection of points. We can transform this set by applying operations to each of the points. For example, if you add a complex number to each point, it translates or shifts the figure. For example, suppose the figure is the unit circle, consisting of all the points (x, y) where x^2 + y^2 = 1. Then adding (2, 3) to each of these points gives the set of points (x + 2, y + 3) where x^2 + y^2 = 1. Letting x' = x + 2 and y' = y + 3, we see that x = x' - 2 and y = y' - 3. So, the shifted circle is the set of all (x', y') where (x' - 2)^2 + (y' - 3)^2 = 1.
In general, let S be a set of points (x, y) where f(x, y) = 0. If we want to shift this a units to right and b units upward, then the new set of points is the set of (x, y) where f(x - a, y - b) = 0.
For example, y - 3 = (x - 2)^2 is a parabola with vertex at (2, 3).
One can also do reflections across the y-axis by replacing x with -x. For example, the set of (x, y) where y = √x is the upper half of a parabola having the x-axis as its axis. Its reflection across the y-axis has equation y = √(-x). Similarly, one can reflect across the x-axis by replacing y with -y in the equation of the set. Note that this corresponds to mapping z = x + iy to its complex conjugate.
The third type of transformation is a rotation about the origin. We know that we can rotate z = x + iy through an angle θ by multiplying it by the complex number u = cos(θ) + i sin(θ) to give the number zu = (x cos(θ) - y sin(θ)) + i (x sin(θ) + y cos(θ)). So, our new point is (x', y') where x' = x cos(θ) - y sin(θ) and y' = x sin(θ) + y cos(θ).
Alternatively, we can get x and y from x' and y' by rotating (x', y') through an angle -θ. So, if the original set is the set of (x, y) where f(x, y) = 0, then the rotated set is the set of (x, y) where f(x cos(θ) + y sin(θ), -x sin(θ) + y cos(θ)) = 0.
For example, suppose we want to rotate the hyperbola xy = 1 counter-clockwise 45 degrees. Then use θ = π/4, so that cos(θ) = sin(θ) = √2/2, and the equation of the rotated figure is (√2/2)(x + y) · (√2/2)(y - x) = 1 or (y² - x²)/2 = 1.
Warning! It is notoriously easy to make a mistake in rotating in the wrong direction. You should always check a point afterwards to make sure you have rotated in the direction intended. The formula for doing the rotation is also hard to remember correctly; it is usually best to just remember that you rotate by multiplying by u = cos(θ) + i sin(θ).
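In that spirit, here is a short Python check (with an arbitrary sample point) of the 45 degree rotation of xy = 1 computed above:

    import cmath, math

    theta = math.pi / 4
    u = cmath.rect(1, theta)    # cos(theta) + i sin(theta)

    z = complex(2, 0.5)         # 2 * 0.5 = 1, so this point lies on the hyperbola xy = 1
    w = z * u                   # rotate counter-clockwise by 45 degrees
    x, y = w.real, w.imag

    # The rotated point should satisfy (y**2 - x**2)/2 = 1.
    print((y**2 - x**2) / 2)    # 1.0000000000000002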
When we set up the plane, we defined it to be the set of pairs (x, y) of real numbers. We then defined the distance between two points by a formula involving the x and y coordinates of the two points. This means that our notion of distance appears to depend on the choice of coordinate system. In fact, it is independent of the choice of coordinate system, as is straightforward to verify:
Proposition 17: If two points P and Q are transformed by a finite number of translations, rotations, and reflections, the distance between the two points does not change.
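A numerical illustration of Proposition 17 in Python (the particular transformation is an arbitrary composite of a rotation, a translation, and a reflection):

    import cmath, math

    def transform(z):
        z = z * cmath.rect(1, math.radians(30))  # rotate 30 degrees about the origin
        z = z + (2 + 3j)                         # translate by 2 + 3i
        return z.conjugate()                     # reflect across the x-axis

    p, q = 1 + 1j, 4 - 2j
    print(abs(p - q))                            # 4.242640687119285
    print(abs(transform(p) - transform(q)))      # the same distance, up to rounding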
In Chapter 1, we assumed an intuitive notion of what one meant by an angle -- it was measured as the length of the arc of the unit circle swept out as you traversed the angle. Although intuitive, it is difficult to define exactly what one means by the length of the arc of the circle swept out as you traverse the angle. In this section, we will see how to make this precise assuming that one has the basic properties of the sine and cosine function. In Chapter 5, we will complete the job by rigorously defining the trigonometric functions.
First you need to remember that as θ ranges from 0 to π radians, cos(θ) decreases from 1 to -1. So, for each real value v between -1 and 1, there is a unique θ with 0 ≤ θ ≤ π such that cos(θ) = v. This uniquely defined θ is called the arccosine of v and is denoted either arccos(v) or cos⁻¹(v).
Next we need some definitions for complex numbers. If z = x + iy is a non-zero complex number, then the direction of z is defined to be u = z/|z|. (Geometrically, it is a complex number of length one pointed in the same direction as z.) For any complex number u = a + bi of length one, define
arg(u) = arccos(a) if b ≥ 0 and arg(u) = -arccos(a) if b < 0.
For any non-zero complex number z, define arg(z) to be arg(u) where u = z/|z| is the direction of z. The principal argument was defined so that
-π < arg(z) ≤ π and z = |z|(cos(arg(z)) + i sin(arg(z))).
In fact, it is the unique real number which satisfies this condition. It follows that if w and z are two non-zero complex numbers, then
arg(wz) = arg(w) + arg(z) (mod 2π),
where we say that two numbers are equal mod 2π if their difference is an integer multiple of 2π.
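This arccosine construction agrees with what Python's cmath.phase computes, and the product rule is easy to check numerically (the sample numbers are arbitrary):

    import cmath, math

    def arg(z):
        # Principal argument in (-pi, pi], built from the arccosine as above.
        u = z / abs(z)               # direction of z
        a, b = u.real, u.imag
        return math.acos(a) if b >= 0 else -math.acos(a)

    w, z = -1 + 2j, 3 - 1j
    print(arg(w), cmath.phase(w))    # the two values agree

    # arg(wz) = arg(w) + arg(z) up to an integer multiple of 2*pi.
    difference = arg(w * z) - (arg(w) + arg(z))
    print(difference / (2 * math.pi))  # an integer (0.0 for this choice of w and z)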
Finally, in this section, we will refer to points (x, y) in the plane as if they are the corresponding complex number z = x + iy. A directed line segment AB is determined by its two endpoints A and B; so a directed line segment is really an ordered pair of points, which is the same thing as an ordered pair of complex numbers. Assuming that r and s are the complex numbers associated with A and B respectively, the complex number u = (s - r)/|s - r| is called the direction of the directed line segment AB. Clearly the direction of AB is a complex number of length one, and the direction of BA is -u if u is the direction of AB. (Geometrically, u is a complex number of length one which points in the same direction as AB.) A directed angle ∠AOB is an ordered triple of points A, O, and B where both A and B are distinct from O. The measure of the directed angle ∠AOB is arg(u/v) where u is the direction of OA and v is the direction of OB. So, the measure of an angle is a real number θ in the interval (-π, π] such that u/v = cos(θ) + i sin(θ). Note that the angles are directed in the sense that ∠AOB = -∠BOA (mod 2π). One needs to be careful about this because many of the theorems in synthetic geometry refer to undirected angles, i.e. they use the absolute value of the measure of the angle so that ∠AOB = ∠BOA for undirected angles.
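These definitions translate directly into code. The following Python sketch (the helper names are only illustrative) computes directions and directed-angle measures and shows the sign change when the angle is reversed:

    import cmath

    def direction(a, b):
        # Direction of the directed segment AB, for points given as complex numbers.
        return (b - a) / abs(b - a)

    def directed_angle(a, o, b):
        # Measure of the directed angle AOB, a real number in (-pi, pi].
        return cmath.phase(direction(o, a) / direction(o, b))

    A, O, B = 1 + 0j, 0 + 0j, 1 + 1j
    print(directed_angle(A, O, B))   # -0.7853981633974483, i.e. -pi/4
    print(directed_angle(B, O, A))   #  0.7853981633974483, the opposite sign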
One can now prove many of the results of synthetic geometry. For example,
Proposition 18: If ABC is a triangle, then (as directed angles) one has
∠ABC + ∠BCA + ∠CAB = ±π.
Proof: Let the directions of BA, BC, and CA be u, v, and w respectively. Then the sum of the three directed angles is
∠ABC + ∠BCA + ∠CAB = arg(u/v) + arg(-v/w) + arg(w/u) = arg((u/v)(-v/w)(w/u)) = arg(-1) = π,
where the equality is mod 2π. Now, if the sum of the three angles is not equal to ±π, it must be equal to 3π because each of the angles is in the interval (-π, π]. But this could only happen if all three angles were equal to π, in which case ABC would not be a triangle.
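A numerical spot check of Proposition 18, re-using the directed_angle helper sketched above on an arbitrary triangle:

    import cmath

    def directed_angle(a, o, b):
        u = (a - o) / abs(a - o)     # direction of OA
        v = (b - o) / abs(b - o)     # direction of OB
        return cmath.phase(u / v)

    A, B, C = 0 + 0j, 4 + 1j, 1 + 3j
    total = directed_angle(A, B, C) + directed_angle(B, C, A) + directed_angle(C, A, B)
    print(total)   # 3.141592653589793 (a triangle of the opposite orientation gives -pi)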
Notice how easy it is to get the addition formulas:
Proposition 19: (Addition Formulas) If A and B are two angles, then
i. cos(A + B) = cos(A)cos(B) - sin(A)sin(B)
ii. sin(A + B) = sin(A)cos(B) + cos(A)sin(B)
iii. cos(-A) = cos(A) and sin(-A) = -sin(A)
Proof: Let u and v be the complex numbers of length 1 such that A = arg(u) and B = arg(v). Then A + B = arg(uv) (mod 2π) and -A = arg(1/u) = arg(ū). One has u = cos(A) + i sin(A) and v = cos(B) + i sin(B). Multiplying these expressions for u and v together gives the first two assertions. The last assertion follows from the definition of the complex conjugate.
Proposition 20: (Law of Cosines) Let a, b, and c be the lengths of the sides of the triangle opposite angles A, B, and C respectively. Then
c² = a² + b² - 2ab cos(C).
Proof: Let u and v be the directions of CA and CB respectively. Then, identifying the points with the corresponding complex numbers, we have B = C + av and A = C + bu. So, one has
c² = |A - B|² = |bu - av|² = (bu - av)(b/u - a/v) = a² + b² - ab(u/v + v/u) = a² + b² - 2ab cos(C),
where one has used that the conjugate of a complex number of length one is its reciprocal, that u/v = cos(C) + i sin(C), and that, by part iii of Proposition 19, v/u = cos(-C) + i sin(-C) = cos(C) - i sin(C), so that u/v + v/u = 2cos(C).
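Here is a quick numerical check of the Law of Cosines, with the triangle represented by arbitrary complex numbers:

    import cmath, math

    A, B, C = 0 + 0j, 5 + 0j, 1 + 3j

    a = abs(B - C)                   # length of the side opposite A
    b = abs(C - A)                   # length of the side opposite B
    c = abs(A - B)                   # length of the side opposite C

    u = (A - C) / abs(A - C)         # direction of CA
    v = (B - C) / abs(B - C)         # direction of CB
    angle_C = cmath.phase(u / v)     # measure of the angle at C

    print(c**2)                                          # 25.0
    print(a**2 + b**2 - 2 * a * b * math.cos(angle_C))   # 25.000000000000004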
Directions can be used to verify the usual properties of similar triangles. The exercises will give further details.
All contents © copyright 2001 K. K. Kubota. All rights reserved.