Chapter 4: Continuity and the Fundamental Theorem of Algebra
Intuitively, a function f(x) is said to be continuous at b if the functional values f(x) are as close as we would like to f(b) as soon as x is sufficiently close to b and at a place where f(x) is defined. For example, the top surface of a table defines a flat surface which is continuous. It is even continuous at the edge of the table because f(x) is not defined for points x beyond the edge of the table. On the other hand, if we have a room containing only a table and consider the function consisting of the table top and the part of the floor which is not under the table, then this function is still continuous at all points except at the points corresponding to the edge of the table. Given such a point, there are points arbitrarily close to it where the functional value is defined by the table top and other points arbitrarily close to it where the functional value is defined by the floor level.
Example 1: i. Another example is the function consisting of all the pairs (x, 0) for non-zero x together with the single point (0, 1). This is usually written as

f(x) = 0 for x ≠ 0, and f(0) = 1.
This function is continuous at all x except for x = 0. All the functional values for x near 0 are close to 0 whereas f(0) = 1.
ii. A less extreme case is
Again, this function is continuous at all x except x = 0.
iii. This function is not continuous anywhere:
iv. This monstrosity is continuous everywhere except at (0, 0):
Let's give a more careful definition: Let f(x) be a real valued function defined for certain real values x. If f is defined at b, we want to say that f is continuous at b if f(x) is as close as we wish to f(b) for all x sufficiently close to b where f is defined. The problem is to make sense of expressions such as "as close as we wish". One might wish for different degrees of closeness at various times. To cover the most stringent case, we will interpret this as meaning the distance between the two is less than any specified positive number. So, "f(x) is as close as we wish to f(b)" means that, if you are given a maximal distance ε > 0, then one must have |f(x) - f(b)| < ε. The expression, "for all x sufficiently close to b" means that there is a positive number δ such that the assertion holds for all x satisfying |x - b| < δ. So, our careful definition is:
Definition 1: Let f be a function with domain a set of real numbers and with range space the set of all real numbers. We say that f is continuous at b in its domain if for every ε > 0 there is a δ > 0 such that |f(x) - f(b)| < ε for all x in the domain of f with |x - b| < δ.
Showing that a function is continuous can be a lot of work:
Example 2: i. The function f(x) = x² is continuous at x = 0. In fact, if we are given ε > 0, then we need to show that there is a δ > 0 such that |x² - 0| < ε for all x with |x - 0| < δ. If we choose δ to be the smaller of ε and 1, then we can see that this works. In fact, if |x - 0| < δ, then |x|² < δ² because the square of a smaller non-negative number is smaller. So |x² - 0| < δ². Since δ ≤ 1, we know that δ² ≤ δ ≤ ε. So |x² - 0| < ε, as required.
ii. Now let's try and show that the same function is continuous at x = 1. For a given ε > 0, we need to have |x² - 1| < ε for all x with |x - 1| < δ, where δ is yet to be chosen. Now, |x² - 1| = |x - 1| |x + 1|. So, if we want this to be small by making |x - 1| small, we can do it provided that |x + 1| is not made large in the process. But |x + 1| ≤ |x - 1| + 2 < δ + 2 ≤ 3 if δ ≤ 1. So, if |x - 1| < δ, we would have

|x² - 1| = |x - 1| |x + 1| < 3δ

where the last inequality holds provided we choose δ ≤ 1. This is precisely what we want provided that we also have 3δ ≤ ε.

So, given ε > 0, let δ be any number smaller than both 1 and ε/3. Then, if |x - 1| < δ, we have |x² - 1| < ε because

|x² - 1| = |x - 1| |x + 1| < 3δ < ε.

This proves that f(x) = x² is continuous at x = 1.
iii. Now, let's try to show that f(x) = x² is continuous at all real numbers b. Repeating the same kind of reasoning, we would want

|x² - b²| = |x - b| |x + b| < ε.

Given an ε > 0, we could choose δ so that it is less than 1 and ε/(2|b| + 1). Since |x + b| ≤ |x - b| + 2|b| < 1 + 2|b| whenever |x - b| < δ ≤ 1, we would then have |x² - b²| < ε for all x with |x - b| < δ.
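A quick numerical spot check of the choice of δ in part ii (my own Python sketch, not part of the original text; the function name and sample count are arbitrary): take δ just under min(1, ε/3), sample points with |x - 1| < δ, and confirm that |x² - 1| stays below ε.

import random

def check_square_at_1(eps, samples=10000):
    # delta chosen as in Example 2 ii: (just under) the smaller of 1 and eps/3
    delta = 0.999 * min(1.0, eps / 3.0)
    worst = 0.0
    for _ in range(samples):
        x = 1.0 + random.uniform(-delta, delta)   # an x with |x - 1| <= delta
        worst = max(worst, abs(x * x - 1.0))
    return worst < eps

print(all(check_square_at_1(eps) for eps in (1.0, 0.1, 0.001)))   # expected: True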
We will need to study the continuity of real valued functions of two variables. The careful definition is almost the same as in the one variable case:
Definition 2: Let f be a function with domain a set of pairs of real numbers and with range space the set of all real numbers. We say that f is continuous at (a,b) in its domain if for every ε > 0 there is a δ > 0 such that |f(x,y) - f(a,b)| < ε for all (x,y) in the domain of f with √((x - a)² + (y - b)²) < δ.
All we really did was replace the absolute value with the distance function: the condition |x - b| < δ is replaced with the requirement that the distance from (x,y) to (a,b) be less than δ. We could have written Definition 1 in the same way, since the absolute value |x - b| gives the distance between the two real numbers x and b.
Example 3: i. Any constant function is continuous at every point in its domain. Suppose f(x) = c, where c is a real number, for all x in the domain of f. Let b be in the domain of f. If ε > 0, then we need

|f(x) - f(b)| = |c - c| = 0 < ε

for all x in the domain of f with |x - b| < δ. But this holds for any choice of δ as long as it is a positive number.
ii. Any linear function f(x,y) = ax + by + c is continuous at all points (x, y) in its domain. In fact, let ε > 0 and (r, s) be in the domain of f. We need

|f(x,y) - f(r,s)| = |a(x - r) + b(y - s)| < ε

for all (x, y) in the domain with √((x - r)² + (y - s)²) < δ.

For (x, y) with √((x - r)² + (y - s)²) < δ, one has:

|x - r| = √((x - r)²) ≤ √((x - r)² + (y - s)²) < δ

because the square root function is increasing. Similarly |y - s| < δ. But then, we have:

|a(x - r) + b(y - s)| ≤ |a| |x - r| + |b| |y - s| < (|a| + |b|)δ.

If we choose our δ so that (|a| + |b|)δ ≤ ε, then the right hand side will be smaller than ε as desired.
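The same kind of spot check works in two variables (again my own sketch; the coefficients a, b, c and the base point (r, s) are made up): sample points within distance δ = ε/(|a| + |b| + 1) of (r, s) and confirm that |f(x,y) - f(r,s)| < ε.

import math, random

a, b, c = 2.0, -3.0, 5.0                # hypothetical coefficients of f(x, y) = ax + by + c
r, s = 1.0, -4.0                        # hypothetical base point
eps = 0.01
delta = eps / (abs(a) + abs(b) + 1.0)   # then (|a| + |b|) * delta < eps

worst = 0.0
for _ in range(10000):
    angle = random.uniform(0.0, 2.0 * math.pi)
    radius = delta * random.random()    # distance from (x, y) to (r, s) is < delta
    x, y = r + radius * math.cos(angle), s + radius * math.sin(angle)
    worst = max(worst, abs((a * x + b * y + c) - (a * r + b * s + c)))
print(worst < eps)                      # expected: True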
Proving that a function is continuous using only the definition can be quite tedious. So, we will need to develop some results which make it easy to check that certain functions are continuous. Throughout this section we will be dealing with real valued functions of one or two variables. We will take their values at points z where, by point, we mean either a real number or a pair of real numbers depending on whether the function is of one or two variables. We will also use absolute value signs to indicate the distance to the origin, either 0 or (0, 0), depending on whether the function is of one or two variables.
Given two functions f and g, we can define sums, differences, products, and quotient functions by:

(f + g)(z) = f(z) + g(z), (f - g)(z) = f(z) - g(z), (fg)(z) = f(z)g(z), and (f/g)(z) = f(z)/g(z),

where the domain in each case is the set of points z lying in the domains of both f and g (and, for the quotient, where g(z) ≠ 0).
Proposition 1: Let f and g be real valued functions of one or two variables. Let z be a point in the domain of f and the domain of g where both f and g are continuous. Then f + g, f - g, and fg are continuous at z. If, in addition, g(z) ≠ 0, then f/g is continuous at z.
Proof: Let w be a point in the intersection of the domains of f and of g. Then

|(f + g)(w) - (f + g)(z)| = |(f(w) - f(z)) + (g(w) - g(z))| ≤ |f(w) - f(z)| + |g(w) - g(z)|.
If ε is any positive number, then we can use the continuity of f and g at z to know that there are a δ_1 and a δ_2 such that |f(w) - f(z)| < ε/2 and |g(w) - g(z)| < ε/2 for all points w in the domains of f and g such that |w - z| < δ_1 (for f) and |w - z| < δ_2 (for g). Choosing δ to be any number smaller than both δ_1 and δ_2 gives

|(f + g)(w) - (f + g)(z)| ≤ |f(w) - f(z)| + |g(w) - g(z)| < ε/2 + ε/2 = ε

as needed. You should check through the same proof using subtraction instead of addition of functions.
For products, one has

|(fg)(w) - (fg)(z)| = |f(w)(g(w) - g(z)) + g(z)(f(w) - f(z))| ≤ |f(w)| |g(w) - g(z)| + |g(z)| |f(w) - f(z)|.

Let ε > 0. Choose ε_1 smaller than 1 and subject to another condition which we will specify below. By the continuity of f at z, we know that there is a δ_1 such that |f(w) - f(z)| < ε_1 for all w in the domain of f with |w - z| < δ_1. Similarly, by the continuity of g at z, we know that there is a δ_2 such that |g(w) - g(z)| < ε_1 for all w in the domain of g with |w - z| < δ_2. Choose any δ smaller than both δ_1 and δ_2. Continuing our series of inequalities, we get for all w in the domains of both f and of g with |w - z| < δ that

|(fg)(w) - (fg)(z)| ≤ |f(w)| |g(w) - g(z)| + |g(z)| |f(w) - f(z)| < (|f(z)| + 1)ε_1 + |g(z)|ε_1 ≤ ε

where the next to last inequality uses |f(w)| ≤ |f(z)| + |f(w) - f(z)| < |f(z)| + 1, and the last inequality follows because we add the condition (|f(z)| + |g(z)| + 1)ε_1 ≤ ε to the list of conditions which ε_1 is required to satisfy.
Finally, let's consider the case of quotients. We need to assume that z is in the domains of f and of g as well as g(z) ≠ 0. The inequality looks like

|(f/g)(w) - (f/g)(z)| = |f(w)g(z) - f(z)g(w)| / (|g(w)| |g(z)|).

Now the numerator is like the one for products and the same sort of argument will be able to handle it. The new ingredient is the denominator. In order to get an upper bound on a quotient |a|/|b|, you need to either make the numerator larger or the denominator smaller. So, we need a lower bound on |g(w)|. But |g(z)| > 0 and so we can choose a δ_1 such that for all w in the domain of g with |w - z| < δ_1, one has |g(w) - g(z)| < |g(z)|/2. Since g(w) is close to g(z), it cannot be too close to zero; more specifically,

|g(w)| ≥ |g(z)| - |g(z) - g(w)| > |g(z)| - |g(z)|/2 = |g(z)|/2,
where we have used:
Lemma 1: If a and b are real numbers, then |a| - |b| ≤ |a - b|.
Proof: This is an exercise. Let ε_1 > 0 be chosen according to criteria which we will figure out below. Now, choose δ smaller than δ_1 and such that |f(w) - f(z)| < ε_1 and |g(w) - g(z)| < ε_1 for all w in the domains of f and g such that |w - z| < δ. Then we can continue our inequalities:

|(f/g)(w) - (f/g)(z)| = |f(w)g(z) - f(z)g(w)| / (|g(w)| |g(z)|) ≤ (|g(z)| |f(w) - f(z)| + |f(z)| |g(w) - g(z)|) / (|g(z)|²/2) < 2(|g(z)| + |f(z)|)ε_1 / |g(z)|² ≤ ε

where the middle step writes f(w)g(z) - f(z)g(w) = g(z)(f(w) - f(z)) - f(z)(g(w) - g(z)) and uses the lower bound |g(w)| > |g(z)|/2, and the last inequality would be true if we chose ε_1 so that 2(|g(z)| + |f(z)|)ε_1 ≤ ε|g(z)|².
Corollary 1: i. Every polynomial with real coefficients is continuous at all real numbers. ii. If p(z) is a polynomial with complex coefficients and p(x + iy) = r(x,y) + i s(x,y), then r(x,y) and s(x,y) are continuous at all pairs (x, y). iii. If p(z) is a polynomial with complex coefficients, then |p(x + iy)| is continuous at all (x, y).
Proof: i. and ii. All of these are functions made up of a finite number of additions and multiplications of continuous functions. So the result follows by applying the Proposition a certain number of times. (More correctly, one can proceed by descent assuming that one has the function made up with the least number of multiplications and additions.)
For assertion iii, one needs
Lemma 2: i. If f(x, y) is a real valued function continuous at (a, b) and g(x) is a real valued function continuous at f(a, b), then g(f(x,y)) is continuous at (a, b).
ii. The square root function is continuous at all non-negative reals.
Proof: i. Let ε > 0. Since g is continuous at f(a, b), there is a δ_1 > 0 so that |g(w) - g(f(a,b))| < ε for all w in the domain of g such that |w - f(a,b)| < δ_1. Further, since f is continuous at (a, b), one knows that there is a δ > 0 such that |f(x,y) - f(a,b)| < δ_1 for all (x, y) in the domain of f where √((x - a)² + (y - b)²) < δ. But then, letting w = f(x,y), we have |g(f(x,y)) - g(f(a,b))| < ε for all such (x, y).
ii. Let a > 0 and ε > 0. One has, for x ≥ 0 with |x - a| < δ where δ is a quantity to be determined:

|√x - √a| = |x - a| / (√x + √a) ≤ |x - a| / √a < δ / √a.

So, if we choose δ so that δ / √a ≤ ε, then one has |√x - √a| < ε.

Now consider the case where a = 0. If ε > 0, choose δ smaller than 1 and ε². If x is non-negative and x = |x - 0| < δ, then √x < √δ < ε since the square root function is increasing, and so the square root function is even continuous at zero.
The typical discontinuity is a point where the function makes a jump. Continuous functions cannot do this:
Proposition 2: ( Bolzano's Theorem) Let f(x) be a continuous function defined on the closed interval [0,1]. If f(0) and f(1) have different signs (i.e. one is positive and the other is negative), then f has a root c in (0, 1).
Proof: Let's try to find the number c by developing a binary decimal expansion using binary search. At each step, we will add one binary digit to the expansion.
Originally, we only know to look in the interval [0, 1]. So, the first approximation is c_0 = 0, with a_0 = 0 and b_0 = 1. Now, consider the midpoint 1/2 of the interval [0, 1]. If f(1/2) = 0, we have found our root c. Otherwise, either f(0) and f(1/2) have different signs or else f(1/2) and f(1) have different signs. In the first case, let a_1 = 0 and b_1 = 1/2. Otherwise, let a_1 = 1/2 and b_1 = 1. We have replaced our original interval with one half as long without losing the property that the function has different signs at the endpoints.

Now, just repeat the process over and over again. Here are the details: Assume that one has a_k, b_k, and c_k already defined, where f(a_k) and f(b_k) have different signs. Let m be the midpoint of the interval [a_k, b_k]. If f(m) = 0, then m is the sought after point. If not, then it could happen that f(a_k) and f(m) have different signs; in this case, let c_{k+1} be c_k with a 0 digit added on the right and let a_{k+1} = a_k and b_{k+1} = m. Otherwise, f(m) and f(b_k) have different signs; in this case, let c_{k+1} be c_k with a 1 digit added on the right and let a_{k+1} = m and b_{k+1} = b_k.

The c_k define an infinite binary decimal, which agrees with each c_k in the first k digits. Because we are in the real numbers, the binary decimal converges to a number which we also call c. This c is a root of f. Otherwise, f(c) would be non-zero and we could find a δ > 0 such that |f(x) - f(c)| < |f(c)| for all x with |x - c| < δ. So, for all such x, f(x) has the same sign as f(c). But this is absurd because for large enough k, the interval [a_k, b_k] lies entirely within δ of c, and f(a_k) and f(b_k) have different signs.
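The proof is exactly the bisection algorithm, so it can be run as code. A minimal sketch (my own, with a hypothetical example polynomial; the text itself gives no program):

def bisect_root(f, a=0.0, b=1.0, steps=50):
    # Binary search for a root of a continuous f when f(a) and f(b) have different signs.
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have different signs")
    for _ in range(steps):
        m = (a + b) / 2.0
        if f(m) == 0:
            return m
        if f(a) * f(m) < 0:   # keep the half-interval on which the sign still changes
            b = m
        else:
            a = m
    return (a + b) / 2.0

# Hypothetical example: f(x) = x^3 + x - 1 has f(0) = -1 and f(1) = 1.
print(bisect_root(lambda x: x**3 + x - 1))   # about 0.6823278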
Corollary 2: (Intermediate Value Theorem) If f(x) is continuous on the closed interval [a, b], then for every real number d between f(a) and f(b), there is a c in (a, b) with f(c) = d.
Proof: Apply Bolzano's Theorem to the function f(a + (b-a)x) - d.
Definition 3: A point z is an absolute minimum of a function f defined on a set S if z is in the domain of f and f(z) ≤ f(w) for all w in the set S. A point z is an absolute maximum of a function f defined on a set S if z is in the domain of f and f(z) ≥ f(w) for all w in S.
Proposition 3: ( Weierstrass's Extreme Value Theorem) If f is a real valued function defined and continuous on a closed interval [a, b] (or a closed rectangle in the case of a function of 2 variables), then f has at least one absolute maximum and at least one absolute minimum on this interval (or rectangle).
Proof: We can use binary search here as well. Let's discuss the one variable case first, and then indicate what changes need to be made to handle the two variable one. First, note that it is enough to handle the case where [a, b] = [0, 1] because one can always replace the function f(x) with f(a + (b - a)x). Start with the interval [a_0, b_0] = [0, 1], c_0 = 0, and search for an absolute maximum. Let m = 1/2 be the midpoint. If for every x in [1/2, 1], there is a number y in [0, 1/2] with f(y) ≥ f(x), then let a_1 = 0 and b_1 = 1/2 (i.e. add a 0 digit on the right of c_0). If not, then there is an x in [1/2, 1] with f(x) > f(y) for every y in [0, 1/2]. In particular, for every y in [0, 1/2], there is a number (viz. x) in [1/2, 1] such that f(x) ≥ f(y). In this case, let a_1 = 1/2 and b_1 = 1, i.e. add a 1 digit to the right of c_0. We have assured that an absolute maximum of f on [a_1, b_1] will be an absolute maximum of f on [0, 1] and we have halved the length of the interval.
As in the previous proposition, we can repeat this process, defining a_{k+1}, b_{k+1}, and c_{k+1} given a_k, b_k, and c_k. Each step gives an interval of half the length whose maximum would be a maximum over [0, 1]. Let c be the limit of the infinite binary decimal defined by the c_k. Then c is an absolute maximum. In fact, if e is in [0, 1] with f(e) > f(c), then there is a k with e in [a_{k-1}, b_{k-1}] and e not in [a_k, b_k]. By the construction, there is a g in [a_k, b_k] with f(g) ≥ f(e). Note that g is different from c (since f(g) ≥ f(e) > f(c)), and so there is a later index j with g in [a_{j-1}, b_{j-1}] and g not in [a_j, b_j]. Then there must be an h in [a_j, b_j] with f(h) ≥ f(g). You can now repeat the process with h in place of g, so that one finds points in the intervals [a_j, b_j] with functional values at least f(e) for arbitrarily large j.
But f is continuous at c, and so there is a δ > 0 such that |f(x) - f(c)| < f(e) - f(c) for all x with |x - c| < δ; in particular, f(x) < f(e) for all such x. Since [a_j, b_j] lies within δ of c for all sufficiently large j, and each such interval contains a point with functional value at least f(e), we have a contradiction.
The same sort of proof can be used to prove Weierstrass's Theorem in the case of functions of two variables. In this case, we have a rectangle which is just the cartesian product of two intervals. Instead of dealing with a nested sequence of intervals, we deal with rectangles: we can let m_1 and m_2 be the midpoints of the two sides, so that each step divides the rectangle into four sub-rectangles of half the dimensions. The crucial step is choosing a sub-rectangle whose absolute maximum would also be an absolute maximum over the whole rectangle.
Remark: Although both Bolzano's and Weierstrass's Theorems were proved with binary search, examination shows that the proof of Weierstrass's Theorem really does not give us any effective means for finding the absolute maximum. We know it exists, but can't really find it. On the other hand, the proof of Bolzano's Theorem does allow one to actually approximate the desired root to any desired degree of accuracy. Since the size of the interval shrinks by a factor of 2 at each step, we get another decimal digit of accuracy every three or four steps.
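The three-or-four-steps figure is just log₂(10) ≈ 3.32: each bisection halves the interval, so roughly 3.32 halvings gain a factor of ten in accuracy. A one-line check (my sketch):

import math
print(math.log2(10))   # about 3.32 bisection steps per decimal digit of accuracy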
The Fundamental Theorem of Algebra states:
Theorem 1: ( Gauss) Every non-constant polynomial with complex coefficients has at least one complex root.
Lemma 3: (D'Alembert's Lemma) Let f(z) be a non-constant polynomial with f(0) ≠ 0. Then there is a non-zero complex number c such that |f(cx)| < |f(0)| for all sufficiently small positive real values x.
Proof: Let f(z) = a_0 + a_1 z + ... + a_n z^n be a non-zero polynomial with complex coefficients. We assume that a_0 = f(0) ≠ 0 and that a_k is the first non-zero coefficient after a_0, i.e. the smallest positive degree term has degree k. Then f(z) = a_0 + a_k z^k + a_{k+1} z^{k+1} + ... + a_n z^n.
We want to choose c so that arg(a_k c^k) = arg(a_0) + π, i.e. so that the degree k term points in the direction opposite to a_0. Since x is positive, the factor x^k does not affect the argument. Since arg converts products into sums, we can solve arg(a_k) + k arg(c) = arg(a_0) + π to get

arg(c) = (arg(a_0) + π - arg(a_k)) / k.
One can choose any non-zero c with this argument. Then the sum of the first two terms of f(cx), namely a_0 + a_k c^k x^k, can be written in the form (1 - d x^k) a_0 where d is a positive real number. The remaining terms are in absolute value at most e x^{k+1} for some positive e. So,

|f(cx)| ≤ (1 - d x^k)|a_0| + e x^{k+1} < |a_0| = |f(0)|

for all positive x small enough so that d x^k < 1 and x < d|a_0|/(2e). This completes the proof.
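The lemma can be illustrated numerically (my own sketch with a hypothetical polynomial; the coefficient list and the test values of x are arbitrary): choose arg(c) = (arg(a_0) + π - arg(a_k))/k as above and check that |f(cx)| < |f(0)| for small positive x.

import cmath

coeffs = [2 + 1j, 0, 3 - 2j]    # hypothetical a_0, a_1, a_2 with a_0 != 0; here k = 2

def f(z):
    return sum(a * z**i for i, a in enumerate(coeffs))

a0 = coeffs[0]
k, ak = next((i, a) for i, a in enumerate(coeffs) if i > 0 and a != 0)
arg_c = (cmath.phase(a0) + cmath.pi - cmath.phase(ak)) / k
c = cmath.exp(1j * arg_c)       # any non-zero c with this argument works

for x in (0.5, 0.1, 0.01):
    print(x, abs(f(c * x)) < abs(f(0)))   # expected: True for each x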
Corollary 3: Let g(z) be any non-constant polynomial with complex coefficients and b be any complex number with g(b) ≠ 0. Then there are points c arbitrarily close to b where |g(c)| < |g(b)|. In particular, b is not an absolute minimum of the function |g(z)|.
Proof: Let f(z) = g(b + z). Then f(0) = g(b) ≠ 0 and f is a non-constant polynomial with complex coefficients. The result now follows from Lemma 3.
Proof of the Fundamental Theorem of Algebra: Suppose f(z) is a non-constant polynomial with complex coefficients and no complex roots. If d is the degree of f(z), then the triangle inequality shows that for z sufficiently large in absolute value, the highest degree term dominates and one has |f(z)| ≥ c|z|^d for some positive c and all z with |z| > M. In particular, one knows that there is a square centered at the origin which is guaranteed to contain in its interior all the absolute minima of the function |f(z)|. By the Weierstrass Extreme Value Theorem applied to a slightly larger square, there is at least one such absolute minimum, say z_0. Since z_0 is an absolute minimum of |f(z)|, Corollary 3 shows that we must have f(z_0) = 0, which is a contradiction.
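The argument can also be mimicked numerically (a rough sketch of my own, with a hypothetical polynomial): scan a square centred at the origin for the grid point minimizing |p(z)|, then repeatedly halve the square around the best point found; the minimum value is driven toward 0, locating an approximate root.

def p(z):
    return z**3 - 2*z + 2              # hypothetical polynomial

def min_on_grid(center, half, n=60):
    # Grid point of the square of half-width `half` around `center` where |p| is smallest.
    best = center
    for i in range(n + 1):
        for j in range(n + 1):
            z = complex(center.real - half + 2 * half * i / n,
                        center.imag - half + 2 * half * j / n)
            if abs(p(z)) < abs(p(best)):
                best = z
    return best

z, half = 0j, 4.0                      # square chosen large enough that |p| is big outside it
for _ in range(12):                    # refine: halve the square around the current best point
    z = min_on_grid(z, half)
    half /= 2.0
print(z, abs(p(z)))                    # |p(z)| should be close to 0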
All contents © copyright 2002, 2003 K. K. Kubota. All rights reserved. http://www.msc.uky.edu/ken/ma110/text/fta.htm
Drama for secondary schools - study guide
This study guide is intended for students taking examinations in drama or theatre arts at GCSE level. It may also be helpful to teachers of drama in Key Stage 3 of the National Curriculum in England and Wales, and to anyone teaching drama to teenagers in other parts of the world. I have adapted it from teaching materials prepared by my friend Simone Hennigan of South Hunsley School in the East Riding of Yorkshire, UK.
Why teach drama?
Drama has an important rôle to play in the personal development of our students. The skills and qualities developed by students in drama, such as teamwork, creativity, leadership and risk-taking are assets in all subjects and all areas of life. Drama stimulates the imagination and allows students to explore issues and experiences in a safe and supportive environment.
It is vital to create an atmosphere of security, trust and concentration. Drama promotes self-esteem and provides all students with a sense of achievement regardless of academic ability.
It's about social skills, communication skills and having fun - we learn by doing!
To participate in a range of drama activities and to evaluate their own and others' contributions, pupils should be taught to:
Key skills: Communication and working with others.
Drama at KS3
Each student should
Good drama requires a clear sense of discipline and direction from both teachers and students; all are equally responsible for the quality of learning that takes place.
Starting a drama Lesson
This is a suggested or example procedure or routine. Teachers should adapt it to the local needs of their own students.
Games and warm-ups
Use games and exercises, physical and/or verbal, for one or more of the following reasons:
These games will typically last from five to ten minutes.
Alternatively, a teacher may introduce the main theme immediately, (for example, the teacher may enter the circle in rôle).
As a teacher, you can gauge the class's mood subjectively on entry, using eye-contact or body language as criteria. Take this into account in choosing your introduction.
Drama at KS4 - GCSE Drama and Theatre Arts
The aims set out below describe the educational purposes of following a course in Drama and Theatre Arts for the GCSE examination. Some of these aims are reflected in assessment objectives; others are not because they cannot readily be translated into measurable assessment objectives. The aims are not stated in any order of priority.
The aims of the GCSE Drama and Theatre Arts course are to enable candidates to develop
The GCSE Drama and Theatre Arts syllabus will assess the candidates' ability to:
Scheme of assessment
The table below shows the weighting of the assessment objectives.
Assessment, recording and reporting of drama
The assessment of drama falls into two main categories:
The student's rôle
The teacher's rôle
This course is designed to provide teachers with schemes of work which allow flexibility for individuals whilst ensuring that all Year 7 students follow a common curriculum and have the opportunity to develop a wide range of skills.
Drama diary evaluation
Use this text as a model for your students' work. You can copy and paste it into any text or word processed document.
Ground rules for teaching drama
Session length | Aims | Synopsis or outline for introductory sessions | Resources | Room layout | Making a start | Brainstorm | Introducing freeze-frame | Freeze-frame | Introducing thought tracking | Introducing rôle play | Extending rôle play | Reflection and evaluation
The session includes activities for pairs and small groups as well as the whole class.
Synopsis or outline for introductory sessions
The students imagine scenes from a photograph album, re-creating them as still Freeze frames and bringing them to life using thought tracking and rôle play. They interview one another in rôle as visitors to a tourist town using rôle play. They are encouraged to reflect on what they have learned during the session and to keep an ongoing diary which records their own learning through Drama.
Making a start
In pairs in the circle:
What pictures might you find in a family photograph album?
Move away from your chairs and find a space.
The teacher says (adapt this as necessary):
In a minute I want you to get into the group size which I call out and form a snapshot from our imaginary album. (Use the list you have made on the flip chart paper and adapt it, e.g. in fours - a holiday snap; in sixes - a party; whole class - a football match, a community celebration.) I will count from ten to one and then say, 'Hold it and freeze'. Hold the picture you have made still, until I say, 'Relax'.
Go through about ten freeze frames quickly, making comments on any good ones you see. If there are any which particularly impress you by their clear depictions, body language, facial expressions and so on, then ask the rest of the class to relax and look at them and discuss the strengths of the freeze frames.
Tableaux: to create simple improvisations from freeze-frames.
You may repeat the sequence with smaller groups.
Introducing thought tracking
Go back into the last whole class Freeze frame. Introduce the activity with these words or a variation to suit your own situation:
I am going to tap some people on the shoulder and when I do you must say what is in your head as the character that you are rôle playing (e.g. at a football match one spectator might say 'What a goal!' another might say 'I wish me dad were 'ere.')
Some will do this really well. If so, then praise them! You are looking for concentration and imaginative belief in the situation.
Introducing rôle play
Introduce the activity with these words or a variation to suit your own situation:
Go back to your chairs. In groups of four talk about a memorable event that happened during the holidays. If nothing interesting happened to you, invent something! Decide on a Freeze frame to start the drama. You are going to bring it to life for 30 seconds and use words this time. You have three minutes to practise it. The events can be quite commonplace (like going shopping with friends), or extraordinary (like witnessing an accident).
After two minutes stop the class and tell them that they have one minute left to work on their best moment in the drama. During this time you must move about the class helping, questioning and encouraging the students. Your job is to motivate at this early stage. Keeping the students under the pressure of time helps to clarify and focus the rôle play; otherwise it can ramble.
Now you are ready to bring the freeze frames to life. Get the students to relax and ask for volunteers to show their freeze frame and rôle play to the class. Ask each group to hold their freeze frame, count down: 3, 2, 1, GO!. After about 30 seconds say: And freeze!.
Respect for their peers is essential here. Take a bit of time with this. Try and find something good in each group, but do not tolerate showing-off. It will spoil the drama in the long run if the students do not take their work seriously. Do not tolerate chatting while others are showing their work. They are practising their audience skills as well as their performance skills.
Extending the rôle play
Introduce the activity with these words or a variation to suit your own situation:
Get into groups of four. One of you works for a local paper or TV company, in a seaside town that is very short of news at the moment. Your job is to go onto the beach and interview tourists. The rest of the group are tourists. All of you need to spend one or two minutes deciding what makes a newsworthy item. Practise this for five minutes. Choose the best moment. Start with a Freeze frame and be ready to bring it to life for 30 seconds. Set this up as before (with preparation time of two minutes).
See all the groups. Praise everything you can, but point out things that are obviously wrong and see if the group can identify what would improve it. As their confidence grows, gradually introduce more detailed and constructive criticism.
You are looking for examples of realism, controversy, humour and inventive treatment of the situation.
Reflection and evaluation
Introduce the evaluation with these words or a variation to suit your own situation:
Go back into the circle. What have you learnt from the session? What rules do you feel are necessary for drama to work?
List the rules on the board. Students can brainstorm their own ideas for rules in conjunction with rules you may have given them already (such as the Ground Rules above). The students can copy them into their Drama Diaries.
Session length | Aims | Synopsis or outline for sessions | Resources | Room layout | Session 1 | Extending mime skills | Rôle play | Reflection and evaluation | Session 2 | Planning drama | Discussion and evaluation | The story of Pandora's Box
This section contains guidance on teaching drama, using the classical myth of Pandora's Box. These sessions include activities for individuals and small groups.
Synopsis or outline for these sessions
The students practise mime skills. Then they use these skills in their improvisation work which is based on finding a box which they are not allowed to open. A moral dilemma is introduced, whether or not to open the box. The story of Pandora's Box is told and the class have to make up a modern day version of the story.
Pandora's Box - session 1
Whole class discussion in a circle. The teacher introduces this with some such statement as:
Remind me what you did in Drama last week. What did you learn?
The teacher now introduces the mime:
I would like each of you to mime an object; it must be small and you must be able to pass the mimed object on to the person on your left. That person has to guess what it is, copy the mime and then change it into something else. If your neighbour cannot guess what is being mimed the rest of the class can try to guess. I will start the mime.
If no one guesses the mime just ask what it is. Try to avoid making a big issue of it. This is supposed to be a confidence builder not destroyer! Again praise the good mimes i.e. those that are clearly defined and easily recognized.
Extending the mime skills
The teacher continues:
I am going into the centre of the circle and I will pace out a large treasure box. I want you all to try and remember as much of my mime as possible. For instance what size is the box? Is the lid heavy? What size key did I use to open the box? If you can guess what I have taken out of the box, put your hand up, don't call out, and I will choose someone to answer. If the person is right I will give them the key.
In mime you pace out the box - say two paces by three - take a key out of your pocket, unlock the box and lift a large heavy lid which you allow to thump to the floor. You root around in the box and take out a crown which you put on your head. You take it off and put it back in the box. Then you ask a person who has guessed the mime correctly to come into the centre of the circle and give this person the key. The student has to repeat the mime of opening the box, keeping the same shape and size, and take something new out of the box. The person in the middle invites someone from the circle to tell him/her what the mime was. Repeat this three or four times.
Rôle play with the emphasis on good mime and gesture
Bring students back in the circle. Now introduce the next stage:
I would like you to work in groups of three. Imagine you find a box. You have to show clearly by mime, how big/how heavy it is/what it's like. You have three minutes to practise. Remember to start your rôle play with a Freeze frame. I want to see the 30 seconds before the moment that you find the box. Take a few minutes to discuss who you are, where you are and where exactly the box is. You have to give clues in your rôle play that there is something very special/strange about this box. There is an instruction on it saying: DO NOT OPEN.
Go round the room checking that everyone understands the task and is getting on with it. If in doubt ask the group you are worried about to show you their freeze frame and thirty-second improvisation. If necessary give them advice on how to present their work better. When you have given them enough time to prepare their rôle play, choose a couple of groups to show their work.
Remember to get the freeze frame absolutely still before counting down, 3, 2, 1, GO! You are reinforcing the control necessary for good drama.
Reflection and evaluation/preparation for the next session
Get the class back into a circle. Ask:
What have you learned in this session? What makes a successful mime? Is a rôle play effective for the same reasons? Remember to find as much as possible to praise.
Read the story of Pandora's Box ready for the next session. Think how you can make a modern day version of the Greek myth. What does the box represent? What might be released from the box today?
Pandora's Box - session 2
Recap on last week's session:
What do you remember from last week? This week we are going to take some time to develop a modern version of Pandora's Box, using some of the ideas we discussed at the end of the last Drama session. What might be let out of the box today?
List suggestions on the flip chart. Allow students to discuss them.
Planning the drama
Introduce this, as appropriate:
In groups of four, plan your drama for this week. Take your time. I shall be looking for examples of good improvisation and an imaginative story line. You will need to discuss the improvisation in detail. Start your drama from the point at which you are deciding whether or not to open the box. There should be a great deal of tension at that moment, and then plan and practise what happens next. Each improvisation should last about a minute.
Go round and question the groups to make sure that they are focusing on the drama. If you are in doubt about a group, make them Freeze frame, and then Thought track them. This helps to keep them on their toes.
Performance of polished Improvisation:
Get each group to perform their polished improvisations. Make sure every piece begins and ends in a freeze frame. Count all the groups in with, 3, 2, 1, GO!
Discussion and evaluation
Get the class back into a circle and discuss the moral issues raised within the stories the children explored in their drama. Ask if there were any other stories/ideas that the groups discussed but chose not to use. Discuss why they were not chosen. Start to get the students to understand what the basic requirements for a good piece of drama are. For example, some stories work well in writing but not as drama - why? This might be because drama needs tension, conflict and contrast to work well.
Give out the Drama Diary sheet.
The Story of Pandora's Box
Ideas for improvisation
The goblin's castle | Walking with beasts | Silly voices | Tableaux and movement | Titles for improvisation | Scenarios for improvisation | Prop boxes | Titles from teacher | First or last lines from teacher
Mime - the goblin's castle
Pupils have been captured by the Goblin King and are confined to his dungeons. They have to escape!
You talk them through the escape procedure which they must mime.
Mime - walking with beasts
In this mime, pupils become creatures in an alien or prehistoric environment. They can do this individually, or several can join to form one large animal. Ask them to contort themselves and make their faces ugly, scary or unusual. You will talk them through a series of activities:
Give students simple texts to read aloud, say, advertisements, passages from novels, magazine articles or poems. The catch is that they must use someone else's voice. Better still, you can combine two or more voices. Read the text:
As an extension you can ask students to do things like the Queen's Christmas message, using the Queen's accent but in the style of Ali G.
Tableaux and movement
Do this in groups of four or five. Each group has three titles:
Encourage the pupils to think laterally and produce frozen moments which are original - avoid the obvious. They should link each picture with movement, counting the steps. Everyone should be in time and synchronized. Think about arms as well as legs and facial movements.
Titles for spontaneous improvisations
Use titles from this list to inspire or challenge students:
In groups of two or three pupils devise three short scenes based on a title, each scene should look at the title in a different way. One scene should be mime. Pupils should try to synchronize transitions (movements) between scenes.
Scenarios for improvisation
These are ideas for scenarios with two performers - so students should work in pairs (or trios, with one directing). The situations work best if the pupils get straight into them and avoid long discussions.
Students prepare spontaneous (no time for prior discussion or preparation) or polished improvisations in groups of two, three or more, using objects in prop boxes as stimulus (these can be made up of any objects, e.g. hat/book/ball).
Titles from teacher
First or last lines from teacher
Eventually students should be in a position to use skills acquired and their own imaginations to create group improvisation lasting between three and five minutes. If you wish to impose a more rigid structure consider:
Starters and fill-ups
Use starters for warming up before a session. Fill-ups are useful activities to fill in extra time.
Sometimes warm-up games are useful for starting a session. They can raise the energy level of a group, calm down a boisterous group and improve concentration and focus. They can also be used at the end of a session as a way to bring a class back together, or simply when you have a spare ten minutes because it is not worth starting new work at the end of a session.
You may wish to use some of the games and exercises to help students improve their skills in improvisation, observation, listening or inventiveness for example. If so, make the aim of the exercise clear to the class, as an overdose of seemingly pointless game playing is demotivating for students in the long run. Try to keep a balance between fun and serious activities.
This section is divided into two parts:
Games for the whole class
The class are seated in a circle.
One person is chosen to mime holding a cardboard box, placing it on the floor in front of them and opening the lid to take out an imaginary object. The person then handles or uses the object for a moment before placing it back in the box.
The rest of the group are then invited to put their hands up if they think they can guess what the object is. The person who guesses correctly can then open the next box and the game begins again.
As a variation on this, or if the imaginary object is hard to guess, it may be passed around the circle. The leader may give clues as to its identity by making comments such as Be careful, it can bite or Mind, it's slippery/cold/wet/sticky etc.
This game is very useful for getting to know a new class.
Name that person
This activity becomes tedious with a group larger than twenty, but it is a surprisingly effective way for a teacher to learn new names. This game is useful for a group getting to know one another.
The class stand in a circle and everyone says their name in turn. One person is chosen to start. This pupil must look at someone in the circle and call that person's name. Once it has been called, the caller walks across to the other person's place.
Meanwhile the person whose name has been called must look at a third person, call that person's name and walk towards her/him. No one must leave his or her place before calling the name of the person whose place they intend to take. Make sure that everyone moves at least once during the game.
The class sit on chairs in a circle. Before the game begins, make sure that the circle of chairs is big enough for people to run across from side to side without colliding. Be ready to adapt the game to allow for students with restricted mobility or wheelchair users.
One person (who has no chair), stands in the centre of the circle. That person's aim is to get the rest of the class to change places and to find an empty chair to sit on while they are out of their seats.
The person in the centre might say, for example, All those who had toast for breakfast, change places or All those from (name of village or street), change places. You can vary this in as many ways as you like:
It is a rule that no one may return to the chair he or she has just left in a changeover.
What are you doing?
This game is good for energizing a group and freeing the imagination. It also requires concentration and develops skills in mime.
The group stand in a circle. One person begins to mime an activity, for example, mowing the lawn or posting a letter.
The person next to him or her asks, What are you doing? and the first person is obliged to say something different from what he/she is actually doing (for example, I'm frying an egg). The second person must then mime the first person's answer until the third person asks What are you doing? at which point he/she must make up another lie for the third person to act out.
This game can go round the circle twice before you stop it - unless the students are particularly inventive.
This game can open up useful areas for discussion on how people see and display social status.
The teacher shuffles a pack of ordinary playing cards and deals one to each student. Everyone must memorize their card and return it to the pack unseen by anyone else. Kings have the highest status, aces the lowest.
Everyone then takes on a character whose social status is equivalent to their card. In order to get used to the feel and behaviour of this character the class should spend a few moments moving around the room greeting one another. They may need reminding to be aware of eye contact (low status characters often avoid this where possible) and body posture. Do they walk upright looking forward, or with heads bowed looking at the floor? What tone of voice is used?
The class are then asked to sit down. While they are sitting down they are observers, out of rôle.
A group of five people is chosen and the teacher asks them to improvise in character for a minute or two in a given situation. Suitable situations might be: feeding the ducks in the park, waiting for a bus, or in a dentist's waiting room for example.
The rest of the class are invited to guess the value of the playing card originally held by each person in the improvisation and to comment on their interactions.
Guessing the exact value of the card is less important than discussing the ways in which the students improvising show high or low status attitudes and behaviour.
This exercise is for groups of four. Two of the group members are characters in authority (such as teachers, customs officers, police). The other two have made a mistake which has just been discovered so that they appear to be in the wrong. The aim of the exercise is to show how the four characters handle the resulting conflict. This can lead to useful discussion on confrontations and resolving disputes.
The lost key
This exercise, done in pairs, shows the difference between what Stanislavski calls acting in general (artificial acting) and acting from the particular details of a situation. Each pair can let their improvisation run for one minute, first in as real and lifelike a way as possible, then in a melodramatic way, allowing the emotional drama of the moment to dominate the scene.
The situation is simply that A and B are friends. A discovers that she/he has lost an important key. B tries to help. Show some contrasting improvisations and discuss which style was more satisfying for the performers and for the audience.
There will probably be differing opinions here. Melodrama is not necessarily bad, any more than living the part. It serves a different function according to time, place and expectations.
Lines and proverbs
Groups of three to six students can be given the following lines or proverbs as the theme for a short improvisation. The lines need not actually be spoken. The improvisation can simply reflect the subject matter.
This is a simple game but it needs co-operation. Ask students to walk around the room, using all the space and trying not to bump into each other. Once this is established call out a shape, which the whole class must form. Start with a circle as this is easy. Other useful shapes include:
The whole class must make one shape between them, as though it were to he viewed from the air. Between making shapes ask them to walk steadily as before using all the space in the room.
Games and activities for small groups
This exercise is ideal for groups of five or six students. They can work collaboratively, or if preferred one can act as sculptor and direct the rest. With some classes, it may be best to do this in single-sex groups.
Give the group a theme or image and allow them up to five minutes to depict the theme in the form of a statue or sculpture. It can be as naturalistic or as abstract as the groups wish and may be made of any material from gold to polystyrene. The important thing is that the groups know why they have made their sculpture in that particular way. Their basic raw material is their own bodies, but chairs, clothing and other props may be used if desired.
Suitable themes can include: the mother, victory, the lesson, refugees, the hero. The exercise can be used as an introduction to a lesson and could take the theme of the lesson (e.g. the family, friendship) as a starting point.
Groups can show their statues to the whole class, who may ask questions about them (Is Julie wearing a cloak to show that she is a vampire?). It is important that the questions are specific rather than vague. This is a useful way for the groups to evaluate their own work.
This is an exercise for pairs. If the class size is uneven it would be possible for one group to work in a three.
The aim of the exercise is to create a story collaboratively. Ideas should flow freely and the pairs who are most responsive to one another's ideas will work best as a team. Those who block one another or pursue highly individualistic lines of thought will have most difficulty.
One partner begins to narrate a sequence of events, miming the appropriate actions (for example, The alarm clock rang and I hit it with my pillow. Then I remembered this was the first day of the holidays and got out of bed.) The other person mirrors the actions as far as possible. When the teacher calls Change the second partner takes over the narration, with the first person mirroring.
Two minutes is probably a good time to let the exercise run, at which point tell the class they have one more turn each, after which the story must come to a natural end. Invite pairs who have worked well together to show their story to the class and discuss the need for responsiveness to others when performing.
This exercise uses sounds but no dialogue. It is for groups of five or six students.
The teacher tells each group, secretly, a colour (use strong colours like red, blue, green, yellow, black, white, purple). The group then has five minutes to prepare a short, simple piece depicting that colour. Humming or chanting may be used, but not dialogue.
Blue could be the ocean waves or a sad mood. Purple tends to stand for empire in western culture, but there are other interpretations. Green could stand for envy or for the environment.
Each group's piece can be discussed and different interpretations considered.
Planning and scripting a play or episode
This activity could spread over several weeks and lead to a staged performance of some or all of the pieces created. Ideally students should have access to computer software to draft and revise their scripts, but in the early stages of brainstorming ideas and initial drafts, pen and paper is fine.
In terms of length, five to ten-minute pieces are ideal. Inexperienced writers tend to get carried away with dialogue rather than concentrating on plot structure. This can lead to tedious dialogues which actually say very little. The key to good drama is the amount of information conveyed to the audience in any exchange (sometimes this can be done without words). This provides the dramatic tension which makes the audience watch attentively.
The following steps can be followed by students in pairs to create a short play script.
The basic theme or idea for a script may develop from ongoing work, perhaps in history, on a class reader in English or from a lesson in citizenship.
Here are some ideas for short plays:
In pairs, students can decide on an idea they would like to use to create a five to ten minute play. Initially the play should be for 2 characters but they may bring in one or two more if this is necessary to the plot.
Developing the plot
When the students have decided on a basic idea for a script they need to answer the following questions to develop the plot:
Developing the characters
Students should write a brief description of the characters in their play. In a short play, they will not be very developed - use of stereotypes might be more appropriate.
Here are two examples:
Chilling tales - ideas for improvised and scripted drama
The following Drama activities can be used as a way of introducing the topic in English or as follow-up material.
Small groups/pairs prepare dramatic readings of two or three poems, which may lead to:
You can use lots of different stories as the source for this - such as those in Roald Dahl's Kiss, Kiss collection, Philippa Pearce's The Shadow Cage or Ray Bradbury's The October Country and Something Wicked This Way Comes. Pupils should find a text they like and adapt it through improvisation or scripted work. Pupils may prepare the next scene (as in Dahl's The Landlady) or simply recreate their own version of the story.
Drama and media - advertising campaign
Display: pupils produce advertisement collages where they examine different advertising styles/techniques (such as before and after, celebrity endorsement, comparison, humour, pseudo-science, narratives and so on).
The advertising agency
In groups of three or four create an advertising agency complete with:
Each member of the group should have a job/identity, such as M.D., graphic designer, account executive, copywriter. Give each group approximately 15 minutes to prepare their presentation for the class - their aim is to bid for business.
In the same groups, but this time as manufacturers, they must create a brand new product - remind them to stick to facts/basics because they are producing it, not advertising it. Include:
The advertising campaign
Groups prepare an advertising campaign for the new product.
Allow pupils time to write and then rehearse the ad. before recording a rehearsal on audio or videotape. Let pupils watch their first performance, make notes about positives and negatives and then make alterations. Video the final performance for the class to watch later. Pupils should write an evaluation of their final performance which they will include in their Advertising Campaign package.
Games and warm-ups
NSEW | Port and starboard | Shake hands | Name circles | Jumping name circle | Cooperation circle | Sitting circle | Gesture circle | Wink murder | Fruit salad | The L-shaped walk | Funny walks | Eye-contact circle | Squeezing circle | Groupings | Stuck in the mud | Doctor, doctor | Chain statues | Pruey or snake in the dark | The keeper of the keys | Name check | Anyone who... | Permanent handshakes | Keep up | Group count | Milling | Trust cars | Impossible knots | Hypnosis | Hanx | Chain mime | The word wizard | Blindfold | Blind explore | Human noughts and crosses | Mill and grab | Tangle | Pass the object | Cat and mouse | Good morning | AEIOU | Goalkeeper | The Vampire of Strasbourg | The cross and the circle | The indefinite prop and the imaginary prop | Lists | Word tennis | Gibberish
These can be used at the start of any session but try to include a mixture of physical warm-ups and games that improve concentration and thinking.
The sides of the room become the points of the compass. When you shout out a point, pupils must run to it .
Port and starboard
The sides of room become parts of a ship. You call - pupils run.
Shake hands or introductions
Pupils have one minute to shake hands with everyone in the room and ask for four bits of information:
After one minute, get everyone seated and see what they can remember about individuals.
Sit in a circle and introduce yourself, then ask the child on your right to introduce himself or herself, plus you. The next child on the right then has to introduce himself or herself, plus the previous child, plus you, and so on until it comes back to you. The last child will have to introduce everyone in the group! This can be done as a group activity - everyone saying the list as it grows.
Jumping name circle
Stand in a circle and get everyone to do a star jump whilst shouting their own name. Then choose a starting point in the circle. Everyone must count to three then jump, at the same time the student who has been chosen to start shouts his or her name. On the second jump everyone else repeats that student's name. Keep up the rhythm as you work around the rest of the group jumping and repeating names.
Form a circle, then sit down with feet and legs straight. Take hold of the hands (or wrists) of the people on either side of you. The object is to stand up without bending your knees or letting go of your partners. The winners are the ones who realise that you must help others before you can help yourself!
Begin standing but quite close together then all turn to the right and on the word, Go! try to sit on the knees of the person behind you! If it works everyone is supporting someone else so the weight is evenly distributed.
Gesture circle/follow my leader
All sit in a circle and choose one to lead (the teacher could start, to give an example). Whatever the leader does (movement or gesture) the rest must follow. Now choose someone to be the detective. This pupil must leave the room/keep eyes closed whilst you choose the leader. They must enter the circle and try to determine who's in charge of the movements. Remind them that if everyone stares at the leader, it will be obvious - they must devise another way around it.
The detective leaves the room whilst you choose a murderer (either in front of other students or ask them to close their eyes and tap the murderer on the back). The detective enters the circle and the murderer can begin winking at his or her victims, who must try to die convincingly. The detective has 3 chances to identify the murderer - if he fails, the murderer must then reveal himself or herself. Being the murderer yourself makes an interesting variation!
Everyone sits on chairs and the teacher gives each student the name of a fruit (apple, banana, orange, pear). When their fruit is called they must change seats. The rules are:
The L-shaped walk
Everyone finds a space and stands still. The only way to move around the room is in an L shape - 2 steps, a right-angled turn, then 3 steps or 3 steps, a right-angled turn and then 2 steps ( like a knight's move in chess). Explain that they must not touch anyone else and must pause if they are going to bump into others. Pupils move on teacher's command.
Devise different ways of moving around the room, such as hopping, skipping, crawling, running, slow motion, in reverse, carrying something heavy/prickly/hot/cold/delicate/living and wriggly, on one leg, on one leg and one arm, only using knees and so on.
Begin with all looking at the floor, then on your command - Look up! - everyone must look at someone else in the circle. If they make eye-contact they're out! After a few seconds give the command - Look down! - and continue until only two remain.
All must hold hands in a circle (or wrists if they really can't bear it). Choose one to begin sending a squeeze message around the circle, by squeezing others' hands or wrists (you can vary the number of squeezes and speed/rhythm). Now choose one to be the detective - this student must enter the circle (after you have, secretly, chosen the student to begin the message) and try to identify who has the squeeze. To make it more difficult, choose more than one to begin the message.
Ask students to form different groups depending upon the information you call out, such as groups of people with the same hair colour, eye colour, birthday, village, number of siblings and so on.
Stuck in the mud
This is a form of tig. Choose one person (or more) to be it. When victims are caught, they stand with an arm against the wall or legs apart and wait to be rescued (by someone crawling under their legs/arm).
Pupils form a standing circle, all holding hands. Teacher splits the group in the middle and one end begins to weave through the arms and legs of the rest of the group. Shout, Freeze! and the two must connect up again and try to untangle without letting go.
One pupil forms a statue in the centre. The teacher chooses another to sculpt him/her. At an appropriate point shout Freeze! and the two should be attached, with another student as sculptor. Repeat until all the group are part of the same statue. Choose one from the group to stand back and name the statue.
Pruey or snake in the dark
Students find a space and close their eyes. Now try walking around the room with eyes shut. Choose one student to be the snake or the pruey monster - they must enter the room and try to catch people, who have their eyes shut. If they are the snake they must hiss so that their prey can listen and try to avoid them. If they are the pruey they make no sound at all, but the others must whisper Pruey whenever they bump into anything. If there is no reply then they have been caught by the pruey monster and must make their way to the end of the creature (hold onto the waist of the last person) and become part of the stomach. If the monster is the snake, victims must join the back when they have been hissed at!
The keeper of the keys
All sit in a circle, one in the centre with keys (or something similar) in front of him or her. This is the keeper. Now blindfold him or her. One by one students try to grab the keys from the keeper. He or she must listen for the thieves and try to stop them, by using arms, hands or rolled paper.
The group is seated in a circle. One person stands in the centre. His or her task is to say the name of anybody in the circle three times before the owner of that name can say it once. If he or she manages it, then the person named takes over standing in the centre.
This is a variation on Name check. This time the central player wants to sit down. However there are no spare chairs - the only way he or she can get a seat is by calling out distinguishing characteristics, such as Anyone who is wearing black socks! or Anyone from (name of village). At which point any of the group members with those characteristics have to swap places - giving an opportunity for the central player to sit down. Whoever is left without a seat becomes the next player. (You need one chair fewer than the number of players.)
All students walk around the space introducing themselves by shaking hands with others, but always making sure to keep hold of the hand of the person they are shaking until they find another. This could lead into Impossible knots.
Keep up or keepie-uppie
The group has to keep a ball or balloon up in the air for as many touches as possible. Each player is only allowed to touch it once in succession. If it touches the floor, or if any player takes more than one touch, the game must start again from number one. Depending on the type of balloon (easy) or ball (much harder), and the available space, you can add further rules - such as using only feet and heads, left hand only, and so on.
The players have to count to ten. They must only speak one at a time, and are not allowed to preplan the sequence. If two people say the same number, or if there is a gap (as judged by the teacher) the game starts again.
The group walks around the teaching space. The leader shouts out a number and the individuals make groups of that number. A development of the game is to request people to create physical objects with their bodies, for example, in groups of five make a camera that can take a picture. Another extension is to request the group to be physically contacting each other, for example the leader shouts ear to ear or head to finger.
In pairs A manoeuvres B around the room. B has eyes closed or is blindfolded and must trust A to take him or her on a safe journey. They are not allowed to speak, and each pair should develop their own series of physical commands for directing them around, for example tapping on the left shoulder to turn left.
In a circle the group holds hands and doesn't let go. Someone is nominated as a lead person and begins to weave in and out of the others, going under and stepping over other people's hands. When sufficiently knotted, the group has to unravel itself back into the original circle without speaking.
Do this in pairs. A holds up a hand. B must align his or her face with the palm of A's hand at a distance of about eight inches and follow its every movement.
Variation - hypnotism with two hands. Same exercise, but this time the actor is guiding two fellow actors, one with each hand, and can do any movement he or she likes; the hypnotist mustn't stop moving either of his or her hands. The two hypnotized actors cannot touch; each body must find its own equilibrium without leaning on the other. The hypnotist mustn't do any movements which are too violent. Swap the rôles, so that all three actors have the experience of being the hypnotist.
Variation - hypnotism with the hands and feet. Like the preceding versions, but with four actors, one for each of the hypnotist's hands and feet. The person leading can do any kind of movement, even dancing, crossing his or her arms, rolling on the ground, jumping, and so on.
Each player has a tissue tucked in the top of the back of his or her trousers or skirt. The object of the game is for each player to collect as many of the other players' tissues as possible without having his or her own tissue taken.
Five people are chosen to leave the room - the group decides on a mime sequence for them to do, such as making a complicated sandwich, or changing a baby's nappy.
The word wizard
The instructions below are given slowly, and one at a time with pauses between. Pupils have pencil and paper. The leader says:
I am a wizard, I am taking away all your words. But as I am generous, you may have four of them back. Write down the four words you want to keep out of all the words in the world.
The teacher repeats this step several more times, until the students have a fairly substantial list. Finally, the teacher should ask the pupils to write a poem, description or short narrative, using only the words on the list.
Darken the room, and ask everyone to stand (furniture pushed out of the way) and close their eyes. They should begin gently moving around, walking slowly, no talking. When they meet people they should greet them non-verbally, gently, and move on. The leader gives a series of instructions, allowing plenty of time to experience each of these:
Invent more things to find (clothes, mouths, hands)
Darken the room if possible and ask pupils to close eyes. They should move slowly, gently around the room. (No talking: emphasize it is non-verbal). As they meet people, they gently greet them non-verbally and move on. They must stop in front of someone and explore his or her face. Allow a long time for this. They say goodbye non-verbally, and move on. They can continue with the same directions and others - for example, explore hands, play garters with hands, be angry and fight with hands - now make up, explore backs, hair, and so on.
Human noughts and crosses
You'll need nine chairs and space to run.
At one end of the room are three rows of three chairs each, four feet apart. One team is Noughts, while the other is Crosses. They line up in corners of the room facing the chairs. When the leader calls noughts, the first nought runs to a chair and sits with arms circled above head. The runner must sit before the leader counts to five slowly. The leader calls crosses, and the first cross runs and sits with arms crossed on chest. The leader continues to call them alternately until one team wins (same rules as paper Noughts and Crosses). Start over, calling the losing team first. Keep score (optional).
There is a possible problem with this game - if the pupils know it well, they can ensure that their team never loses. So you could end up with lots of drawn games!
Mill and grab
Pupils mill around. The leader calls a number, say five. Players run to make circles of five, holding hands up together. Those left over go to one spot, perhaps can form another group. Leader waits until all the groups are ready, then calls another number, two, fifteen and so on. (If the leader wants groups of a particular size for the next game, he stops with this number, tells groups to keep together and sit down). Emphasize that groups must be mixed, boys and girls, teachers and pupils, and so on.
Variation - play the game with eyes closed. Do it in silence. Do it in slow motion. Do it as noisily as possible. Do according to an adverb, for example childishly, and so on.
Whole group links hands into a human chain. First person leads chain through itself, over and under arms, between legs, and so on. Extra care must be taken not to break the chain, to move slowly and to be gentle. Tangle ends when group is too tightly packed to move. One person then untangles the group, giving them directions without touching them.
Pass the object
Sit in a circle. The leader holds an imaginary object (say, an egg beater) and mimes using it for its purpose. He then passes it on to the next person, who uses it, and then, by making a rubbing motion with his hands, erases it and substitutes a new imaginary object, for example, an ice-cream cone. Continue around the circle.
Variation - leader uses object, second person uses that one plus another, adding all the way around the circle - this would make it a memory game.
Cat and mouse
A variation of a well-known game. Everyone has a partner, with whom to hold hands and move around the space - except two people who are on their own, one being the cat, one being the mouse. The cat chases the mouse, as usual. But if the mouse wishes to avoid getting caught, it can join up at one end of a pair and hold hands, which means that the person at the other end of that pair becomes the mouse and has to run away; there can only ever be two people holding hands together. You can decide that if the cat catches the mouse, they exchange rôles.
Each actor has to say Good morning to all the other actors, at the same time shaking hands with them. But he or she must always have one hand shaking hands with someone - so only when both hands are occupied in handshaking can she disengage one to find someone else.
All the actors cluster in a group, and one person comes and stands in front of them. The group must make sounds, using the letters A, E, I, O, U, changing the volume according to how near to or how far away from them the single actor is. When the volume-control actor is far away, the group gets louder, and when he or she is close, they get quieter. The actor can move anywhere he or she likes around the room. The individual actors who make up the group should be trying to communicate a thought or emotion to the actor, not just making noise.
A trust game. Six actors stand side by side, not too far apart, forming the safety net. Another actor, a few steps in front of them, is the goalkeeper. Facing this group, say six metres away, are the other actors. One by one, the other actors look at the goalkeeper, close their eyes and start to run towards him, as fast as they dare. The goalkeeper must catch the runner around the waist. If an actor strays off course, one of the six members of the safety net can catch him.
The most important thing is to try not to slow down when approaching the goalkeeper - this is a test of trust. The idea is not to slow down or stop or end up far from the goalkeeper.
The Vampire of Strasbourg (variation of Pruey)
Pupils walk around the room with their eyes closed, their hands covering their elbows, without touching each other or colliding. The teacher applies a little squeeze to the neck of one of the participants, who then becomes the first Vampire of Strasbourg - his or her arms extend forwards, he or she gives a scream of terror, and from this point on must seek out a neck in order to vampirise someone else. The vampire's scream gives the others a clue as to his whereabouts so that they can try to escape from him. The first vampire finds another neck and gives it a little squeeze. The second victim screams, raises his or her arms, and now there are two vampires, then three, four, and so on. Sometimes one vampire will vampirise another vampire; when this happens, the latter lets out a cry of pleasure, which indicates that he or she has been re-humanized, but also that there is still a vampire beside him. The participants must flee the most vampire-infested areas.
The cross and the circle
The participants are asked to describe circles with their right hands, large or small, as they please. It's easy, everybody does it. Stop. Ask them to do a cross with their left hand. Even easier. Everyone gets there. Stop. Ask them to do both at the same time...it's almost impossible. In a group of thirty people, sometimes one person manages it, almost never two.
The indefinite prop and the imaginary prop
Take a prop - for example a carpet-beater - and place it in the middle of the floor. You want the children to use this as any object other than a carpet-beater. Each in turn uses the indefinite prop in a specific way -- it could be a lollipop, a tennis racket, a frying pan, a mirror, a shovel, a sword and so on. The idea is firstly for them to use their imagination to think up more and more unlikely uses for the indefinite prop, and secondly for them to do a well-presented mime to illustrate what they have thought of. This can be done with any age-group.
The imaginary prop is similar to this except that you don't use a physical prop at all. The teacher can begin by eating an imaginary apple and then passing it to the person next to him or her. It now becomes a ball and is bounced on the floor before being passed along to the next person. Now it is a hamster, which is being stroked - the person after that sees it as a flower...As the imaginary prop is passed on, it is important to pay attention to the detail of the mime in order to make it real for everybody.
This can be competitive or otherwise. Each contestant has just one minute to name all the items he or she can think of from a given category, such as Fruit, Vegetables, Cities, Countries, Meals, Girls' Names, Boys' Names, Clothes, Parts of the Body, Cars. So, if the category were Meals, he or she would start off: Sausages and Mash, Fish and Chips, Steak Pie, Baked Beans on Toast... and continue until running out of ideas or the minute is up.
A development of lists is word tennis. Two people face each other and both have to name in turn items from the given category. They go on until one of them cannot think of a new word within three seconds; this person is out and someone else can then challenge the winner. A harder version of word tennis is to take words from a given category - Countries, for example - and specify that the last letter of one word must be the first letter of the next, e.g. England, Denmark, Kenya, Australia. This form of word tennis is not so fast moving, so you need a longer time limit.
The leader splits the group into pairs and suggests the subject of a forthcoming conversation. It is then explained that no recognizable words will be spoken. The pair will talk as if in a foreign language, making up words and sounds. The point of the exercise is to develop intensity of expression without using real words. Instead of sounds or made-up words, players can use numbers.
Ideas for conversation
The admiral's cat (ABC)
Sit in a circle
Teacher begins story, The admiral's cat is an angry cat.
Student to left of teacher continues using next letter of alphabet as initial letter of adjective and so on until someone reaches Z.
Variation: I went shopping and I bought an apple.
As above until Z but students must repeat the previous items before adding one that starts with their own letter.
Sit in a circle.
Teacher begins story.
Each member of the circle must add one line to the story but it must begin alternately Fortunately or Unfortunately.
Students can add any new event but must not repeat or contradict established storyline.
Put class into groups of four or five.
Place up to three props in their circle, for example: hat, soft toy, pen.
Give group three minutes to devise a story, using the props, in which everyone must utter at least one line.
It must be a story (not an improvisation).
Share with the class
The above may be used as preparation for freeze-frames, tableaux or improvisation.
Drama techniques - A to Z
The actor remains silent whilst one or more people speak her/his thoughts
A written description of a character's details (such as age, interests, likes/dislikes) which helps an actor to play that rôle.
A crucial point in the drama where the tension has built towards a climax which leads to a choice or the possibility of change.
Making judgements and assessing dramatic activities. At this stage the formulation and understanding of ideas is more important than the quality of the dramatic performance. This can be achieved through discussions, through individual or group writing in the form of diary extracts, reports, letters, by drawing, or by characters thinking aloud.
Students perform an improvisation which is stopped and the audience intervenes to change the direction/emphasis of the drama. This may then involve members of the audience taking an active rôle in the continuation of the improvisation.
Stopping the action in order to get a still visual image.
A person in rôle sits away from the rest of the group and answers questions in rôle.
Taking on an unscripted rôle and acting as if you are in a make-believe situation.
Portraying a character, or telling a story by body movement (usually without words).
Assessing and thinking about dramatic activities. This is essential if the students are going to get the maximum benefit from these sessions.
Taking on the persona (imagined personality) of another character.
Rôle on the wall
A technique used to build up a character profile for a chosen person from a group. Brainstorm, recording all the ideas on flipchart paper.
A method of presenting work in which groups perform quickly in sequence, to show detailed scenes within a larger frame.
The details of a dramatic situation, setting the scene.
Similar to Hot-seating, but using two or more actors to answer the questions.
The actors place themselves physically as near or as far from a given character in the drama as they feel emotionally.
An event, piece of art or activity that leads to drama. It can be in the form of a poem, story, an artefact, a letter, a diary extract, a picture, a newspaper report and so on.
Teacher in rôle
The teacher takes on a rôle within the drama and leads the session as if she or he were that person.
Tapping the students on the shoulder in order to prompt them into vocalizing their thoughts whilst remaining in character.
A way of helping students experience emotions. Position the students in two lines down the centre of the room to form a tunnel. A volunteer walks down the tunnel in rôle while people from either side speak thoughts to him or her. The aim is to force the student walking down the centre to experience a variety of opinions or emotions. The student is then asked to communicate how the different emotions made him or her feel.
© Simone Hennigan, 1999; web version Andrew Moore, 2002 | http://www.teachit.co.uk/armoore/drama/drama.htm | 13
74 |
Sintering is a method used to create objects from powders. It is based on atomic diffusion. Diffusion occurs in any material above absolute zero, but it occurs much faster at higher temperatures. In most sintering processes, the powdered material is held in a mold and then heated to a temperature below the melting point. The atoms in the powder particles diffuse across the boundaries of the particles, fusing the particles together and creating one solid piece. Because the sintering temperature does not have to reach the melting point of the material, sintering is often chosen as the shaping process for materials with extremely high melting points such as tungsten and molybdenum.
Sintering is traditionally used for manufacturing ceramic objects but finds applications in almost all fields of industry. The study of sintering and of powder-related processes is known as powder metallurgy. A simple, intuitive example of sintering can be observed when ice cubes in a glass of water adhere to each other.
Particular advantages of the powder technology include:
- Very high levels of purity and uniformity in starting materials
- Preservation of purity, due to the simpler subsequent fabrication process (fewer steps) that it makes possible
- Stabilization of the details of repetitive operations, by control of grain size during the input stages
- Absence of binding contact between segregated powder particles – or "inclusions" (called stringering) – as often occurs in melting processes
- No deformation needed to produce directional elongation of grains
- Capability to produce materials of controlled, uniform porosity.
- Capability to produce nearly net-shaped objects.
- Capability to produce materials which cannot be produced by any other technology.
- Capability to fabricate high-strength materials such as those used in turbine blades
The literature contains many references on sintering dissimilar materials to produce solid/solid-phase compounds or solid/melt mixtures at the processing stage. Almost any substance can be obtained in powder form, through either chemical, mechanical or physical processes, so basically any material can be obtained through sintering. When pure elements are sintered, the leftover powder is still pure, so it can be recycled.
Sintering is effective when the process reduces the porosity and enhances properties such as strength, electrical conductivity, translucency and thermal conductivity; yet, in other cases, it may be useful to increase its strength but keep its gas absorbency constant as in filters or catalysts. During the firing process, atomic diffusion drives powder surface elimination in different stages, starting from the formation of necks between powders to final elimination of small pores at the end of the process.
The driving force for densification is the change in free energy from the decrease in surface area and lowering of the surface free energy by the replacement of solid-vapor interfaces. It forms new but lower-energy solid-solid interfaces, with a net decrease in total free energy; for 1-micrometre particles the decrease on sintering is of the order of 1 cal/g. On a microscopic scale, material transfer is affected by the change in pressure and differences in free energy across the curved surface. If the size of the particle is small (and its curvature is high), these effects become very large in magnitude. The change in energy is much higher when the radius of curvature is less than a few micrometres, which is one of the main reasons why much ceramic technology is based on the use of fine-particle materials.
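As a rough illustration of why fine particles matter, the Young-Laplace relation gives the excess pressure under a curved surface (a sketch assuming a typical surface energy of about 1 J/m^2, an illustrative value rather than one quoted here):

Δp = 2γ/r, so r = 10 micrometres gives Δp ≈ 0.2 MPa, while r = 0.1 micrometres gives Δp ≈ 20 MPa,

a hundred-fold increase in driving pressure for the finer powder.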
For properties such as strength and conductivity, the bond area in relation to the particle size is the determining factor. The variables that can be controlled for any given material are the temperature and the initial grain size, because the vapor pressure depends upon temperature. Through time, the particle radius and the vapor pressure are proportional to (p0)^(2/3) and to (p0)^(1/3), respectively.
The source of power for solid-state processes is the change in free or chemical potential energy between the neck and the surface of the particle. This energy creates a transfer of material through the fastest means possible; if transfer were to take place from the particle volume or the grain boundary between particles, then there would be particle reduction and pore destruction. The pore elimination occurs faster for a trial with many pores of uniform size and higher porosity where the boundary diffusion distance is smaller. For the latter portions of the process, boundary and lattice diffusion from the boundary become important.
Control of temperature is very important to the sintering process, since grain-boundary diffusion and volume diffusion rely heavily upon temperature, the size and distribution of particles of the material, the materials composition, and often the sintering environment to be controlled.
Sintering is part of the firing process used in the manufacture of pottery and other ceramic objects, which are made from substances such as glass, alumina, zirconia, silica, magnesia, lime, beryllium oxide and ferric oxide. Some ceramic raw materials have a lower affinity for water and a lower plasticity index than clay, requiring organic additives in the stages before sintering. The general procedure of creating ceramic objects via sintering of powders includes:
- Mixing water, binder, deflocculant, and unfired ceramic powder to form a slurry;
- Spray-drying the slurry;
- Putting the spray dried powder into a mold and pressing it to form a green body (an unsintered ceramic item);
- Heating the green body at low temperature to burn off the binder;
- Sintering at a high temperature to fuse the ceramic particles together.
All the characteristic temperatures associated with phase transformations, glass transitions and melting points occurring during a sintering cycle of a particular ceramic formulation (i.e., tails and frits) can be easily obtained by observing the expansion-temperature curves during optical dilatometer thermal analysis. In fact, sintering is associated with a remarkable shrinkage of the material, because glass phases flow once their transition temperature is reached, and start consolidating the powdery structure and considerably reducing the porosity of the material.
There are two types of sintering: with pressure (also known as hot pressing), and without pressure. Pressureless sintering is possible with graded metal-ceramic composites, with a nanoparticle sintering aid and bulk molding technology. A variant used for 3D shapes is called hot isostatic pressing.
To allow efficient stacking of product in the furnace during sintering and prevent parts sticking together, many manufacturers separate ware using Ceramic Powder Separator Sheets. These sheets are available in various materials such as alumina, zirconia and magnesia. They are additionally categorized by fine, medium and coarse particle sizes. By matching the material and particle size to the ware being sintered, surface damage and contamination can be reduced while maximizing furnace loading.
Sintering of metallic powders
Most, if not all, metals can be sintered. This applies especially to pure metals produced in vacuum, which suffer no surface contamination. Sintering under atmospheric pressure requires the use of a protective gas, quite often endothermic gas. Sintering, with subsequent reworking, can produce a great range of material properties. Changes in density, alloying, or heat treatments can alter the physical characteristics of various products. For instance, the Young's modulus En of sintered iron powders remains insensitive to sintering time, alloying, or particle size in the original powder, but depends upon the density of the final product:
where D is the density, E is Young's modulus and d is the maximum density of iron.
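A commonly quoted empirical fit consistent with these definitions (offered here as a sketch rather than as this article's own expression) is

En = E (D/d)^3.4

so, for example, a compact sintered to 95% of full density (D/d = 0.95) would retain roughly 0.95^3.4 ≈ 84% of the modulus of fully dense iron.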
Sintering is static when a metal powder under certain external conditions may exhibit coalescence, and yet reverts to its normal behavior when such conditions are removed. In most cases, the density of a collection of grains increases as material flows into voids, causing a decrease in overall volume. Mass movements that occur during sintering consist of the reduction of total porosity by repacking, followed by material transport due to evaporation and condensation from diffusion. In the final stages, metal atoms move along crystal boundaries to the walls of internal pores, redistributing mass from the internal bulk of the object and smoothing pore walls. Surface tension is the driving force for this movement.
A special form of sintering, still considered part of powder metallurgy, is liquid-state sintering. In liquid-state sintering, at least one but not all elements are in a liquid state. Liquid-state sintering is required for making cemented carbide or tungsten carbide.
Sintered bronze in particular is frequently used as a material for bearings, since its porosity allows lubricants to flow through it or remain captured within it. For materials that have high melting points such as molybdenum, tungsten, rhenium, tantalum, osmium and carbon, sintering is one of the few viable manufacturing processes. In these cases, very low porosity is desirable and can often be achieved.
Sintered metal powder is used to make frangible shotgun shells called breaching rounds, as used by military and SWAT teams to quickly force entry into a locked room. These are shotgun shells designed to destroy door deadbolts, locks and hinges without risking lives by ricocheting or by flying on at lethal speed through the door. They work by destroying the object they hit and then dispersing into a relatively harmless powder.
Sintered bronze and stainless steel are used as filter materials in applications requiring high temperature resistance while retaining the ability to regenerate the filter element. For example, sintered stainless steel elements are employed for filtering steam in food and pharmaceutical applications, and sintered bronze in aircraft hydraulic systems.
Plastic materials are formed by sintering for applications that require materials of specific porosity. Sintered plastic porous components are used in filtration and to control fluid and gas flows. Sintered plastics are used in applications requiring wicking properties, such as marking pen nibs. Sintered ultra high molecular weight polyethylene materials are used as ski and snowboard base materials. The porous texture allows wax to be retained within the structure of the base material, thus providing a more durable wax coating.
Liquid phase sintering
For materials which are hard to sinter a process called liquid phase sintering is commonly used. Materials for which liquid phase sintering is common are Si3N4, WC, SiC, and more. Liquid phase sintering is the process of adding an additive to the powder which will melt before the matrix phase. The process of liquid phase sintering has three stages:
- Rearrangement – As the liquid melts capillary action will pull the liquid into pores and also cause grains to rearrange into a more favorable packing arrangement.
- Solution-Precipitation – In areas where capillary pressures are high (particles are close together) atoms will preferentially go into solution and then precipitate in areas of lower chemical potential, where particles are not close or in contact. This is called "contact flattening". This densifies the system in a way similar to grain boundary diffusion in solid state sintering. Ostwald ripening will also occur, where smaller particles go into solution preferentially and precipitate on larger particles, leading to densification.
- Final Densification – densification of solid skeletal network, liquid movement from efficiently packed regions into pores.
For liquid phase sintering to be practical the major phase should be at least slightly soluble in the liquid phase and the additive should melt before any major sintering of the solid particulate network occurs, otherwise rearrangement of grains will not occur.
Electric current assisted sintering
These techniques employ electric currents to drive or enhance sintering. English engineer A. G. Bloxam registered in 1906 the first patent on sintering powders using direct current in vacuum. The primary purpose of his inventions was the industrial scale production of filaments for incandescent lamps by compacting tungsten or molybdenum particles. The applied current was particularly effective in reducing surface oxides that increased the emissivity of the filaments.
In 1913, Weintraub and Rush patented a modified sintering method which combined electric current with pressure. The benefits of this method were proved for the sintering of refractory metals as well as conductive carbide or nitride powders. The starting boron–carbon or silicon–carbon powders were placed in an electrically insulating tube and compressed by two rods which also served as electrodes for the current. The estimated sintering temperature was 2000 °C.
In the US, sintering was first patented by Duval d’Adrian in 1922. His three-step process aimed at producing heat-resistant blocks from such oxide materials as zirconia, thoria or tantalia. The steps were: (i) molding the powder; (ii) annealing it at about 2500 °C to make it conducting; (iii) applying current-pressure sintering as in the method by Weintraub and Rush.
Sintering which uses an arc produced via a capacitance discharge to eliminate oxides before direct current heating, was patented by G. F. Taylor in 1932. This originated sintering methods employing pulsed or alternating current, eventually superimposed to a direct current. Those techniques have been developed over many decades and summarized in more than 640 patents.
Spark plasma sintering
Spark plasma sintering (SPS) is a form of sintering where both external pressure and an electric field are applied simultaneously to enhance the densification of metallic/ceramic powder compacts. This densification uses lower temperatures and shorter times than typical sintering. For a number of years, it was speculated that the existence of sparks or plasma between particles could aid sintering; however, Hulbert and coworkers systematically proved that the electric parameters used during spark plasma sintering make this (highly) unlikely. In light of this, the name "spark plasma sintering" has been rendered obsolete. Terms such as "Field Assisted Sintering Technique" (FAST), "Electric Field Assisted Sintering" (EFAS), and Direct Current Sintering (DCS) have been adopted by the sintering community. It had been claimed that a pulsed DC current would create spark plasma, spark impact pressure, Joule heating, and an electric field diffusion effect.
Certain ceramic materials have low density, chemical inertness, high strength, hardness and temperature capability; nanocrystalline ceramics have even greater strength and higher superplasticity.
Many microcrystalline ceramics that were treated to gain fracture toughness lost strength and hardness; for this reason ceramic composites have been created to offset the deterioration while raising strength and hardness towards those of nanocrystalline materials. Various experiments have shown that controlling the grain size and its distribution is pivotal to designing the mechanical properties of a new material.
Pressureless sintering is the sintering of a powder compact (sometimes at very high temperatures, depending on the powder) without applied pressure. This avoids density variations in the final component, which occurs with more traditional hot pressing methods.
The powder compact (if a ceramic) can be created by slip casting into a plaster mould, then the final green compact can be machined if necessary to final shape before being heated to sinter.
Densification, vitrification and grain growth
Sintering in practice is the control of both densification and grain growth. Densification is the act of reducing porosity in a sample thereby making it more dense. Grain growth is the process of grain boundary motion and Ostwald ripening to increase the average grain size. Many properties (mechanical strength, electrical breakdown strength, etc.) benefit from both a high relative density and a small grain size. Therefore, being able to control these properties during processing is of high technical importance. Since densification of powders requires high temperatures, grain growth naturally occurs during sintering. Reduction of this process is key for many engineering ceramics.
For densification to occur at a quick pace it is essential to have (1) an amount of liquid phase that is large in size, (2) a near complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid. The power behind the densification is derived from the capillary pressure of the liquid phase located between the fine solid particles. When the liquid phase wets the solid particles, each space between the particles becomes a capillary in which a substantial capillary pressure is developed. For submicrometre particle sizes, capillaries with diameters in the range of 0.1 to 1 micrometres develop pressures in the range of 175 pounds per square inch (1,210 kPa) to 1,750 pounds per square inch (12,100 kPa) for silicate liquids and in the range of 975 pounds per square inch (6,720 kPa) to 9,750 pounds per square inch (67,200 kPa) for a metal such as liquid cobalt.
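As a rough consistency check (a sketch assuming a cylindrical capillary of diameter d, complete wetting, and typical surface tensions of roughly 0.3 N/m for a silicate liquid and 1.7 N/m for liquid cobalt - assumed values, not figures from this text), the capillary pressure is approximately

Δp ≈ 4γ/d

which for d = 1 micrometre gives about 1.2 MPa (~175 psi) for the silicate and about 6.8 MPa (~990 psi) for cobalt, and ten times those values for d = 0.1 micrometre - the same order of magnitude as the figures quoted above.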
Densification requires a constant capillary pressure; solution-precipitation material transfer alone would not produce densification. For further densification, additional particle movement occurs while the particles undergo grain growth and grain-shape changes. Shrinkage results when the liquid slips between particles and increases the pressure at points of contact, causing material to move away from the contact areas and forcing the particle centers to draw closer together.
The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures proportional to its diameter, and the liquid concentration must also create the required capillary pressure within range, otherwise the process ceases. The vitrification rate depends upon the pore size, the viscosity and amount of liquid phase present (which determine the viscosity of the overall composition), and the surface tension. Temperature dependence dominates the process because at higher temperatures viscosity decreases and liquid content increases. Therefore, changes to the composition and processing will affect the vitrification process.
Sintering occurs by diffusion of atoms through the microstructure. This diffusion is caused by a gradient of chemical potential – atoms move from an area of higher chemical potential to an area of lower chemical potential. The different paths the atoms take to get from one spot to another are the sintering mechanisms. The six common mechanisms are:
- Surface diffusion – Diffusion of atoms along the surface of a particle
- Vapor transport – Evaporation of atoms which condense on a different surface
- Lattice diffusion from surface – atoms from surface diffuse through lattice
- Lattice diffusion from grain boundary – atom from grain boundary diffuses through lattice
- Grain boundary diffusion – atoms diffuse along grain boundary
- Plastic deformation – dislocation motion causes flow of matter
Also one must distinguish between densifying and non-densifying mechanisms. 1–3 above are non-densifying – they take atoms from the surface and rearrange them onto another surface or part of the same surface. These mechanisms simply rearrange matter inside of porosity and do not cause pores to shrink. Mechanisms 4–6 are densifying mechanisms – atoms are moved from the bulk to the surface of pores thereby eliminating porosity and increasing the density of the sample.
A grain boundary (GB) is the transition area or interface between adjacent crystallites (or grains) of the same chemical and lattice composition, not to be confused with a phase boundary. The adjacent grains do not have the same orientation of the lattice, thus giving the atoms in the GB shifted positions relative to the lattice in the crystals. Due to the shifted positioning of the atoms in the GB, they have a higher energy state when compared with the atoms in the crystal lattice of the grains. It is this imperfection that makes it possible to selectively etch the GBs when one wants the microstructure visible. Striving to minimize its energy leads to the coarsening of the microstructure to reach a metastable state within the specimen. This involves minimizing its GB area and changing its topological structure to minimize its energy. This grain growth can be either normal or abnormal; normal grain growth is characterized by the uniform growth and size of all the grains in the specimen. Abnormal growth is when a few grains grow much larger than the remaining majority.
Grain boundary energy/tension
The atoms in the GB are normally in a higher energy state than their equivalent in the bulk material. This is due to their more stretched bonds, which gives rise to a GB tension σGB. This extra energy that the atoms possess is called the grain boundary energy, γGB. The grain will want to minimize this extra energy, thus striving to make the grain boundary area smaller, and this change requires energy.
“Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is σGB. On the basis of this reasoning it would follow:
with dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered.” [pg 478]
The GB tension can also be thought of as the attractive forces between the atoms at the surface; the tension between these atoms arises because there is a larger interatomic distance between them at the surface compared to the bulk (i.e. surface tension). When the surface area becomes bigger the bonds stretch more and the GB tension increases. To counteract this increase in tension there must be a transport of atoms to the surface keeping the GB tension constant. This diffusion of atoms accounts for the constant surface tension in liquids. Then the argument
σGB = γGB
holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area. For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by
dG = γGB dA + A dγGB
σGB is normally expressed in units of N/m while γGB is normally expressed in units of J/m^2, since they are different physical properties.
In a two-dimensional isotropic material the grain boundary tension would be the same for all grains. This would give an angle of 120° at a GB junction where three grains meet. This would give the structure a hexagonal pattern, which is the metastable state (or mechanical equilibrium) of the 2D specimen. A consequence of this is that, to stay as close to equilibrium as possible, grains with fewer sides than six will bend the GB to try to keep the 120° angle between each other. This results in a curved boundary with its curvature towards itself. A grain with six sides will, as mentioned, have straight boundaries, while a grain with more than six sides will have curved boundaries with its curvature away from itself. A grain with six boundaries (i.e. hexagonal structure) is in a metastable state (i.e. local equilibrium) within the 2D structure. In three dimensions structural details are similar but much more complex, and the metastable structure for a grain is a non-regular 14-sided polyhedron with doubly curved faces. In practice all arrays of grains are always unstable and thus always grow until prevented by a counterforce.
Since the grains strive to minimize their energy, and a curved boundary has a higher energy than a straight boundary, the grain boundary will migrate towards its centre of curvature. The consequence of this is that grains with fewer than 6 sides will decrease in size while grains with more than 6 sides will increase in size.
Grain growth happens due to motion of atoms across a grain boundary. Convex surfaces have a higher chemical potential than concave surfaces, therefore grain boundaries will move toward their center of curvature. Smaller particles have a smaller radius of curvature (i.e. higher curvature), and this results in smaller grains losing atoms to larger grains and shrinking. This is a process called Ostwald ripening. Large grains grow at the expense of small grains. Grain growth in a simple model is found to follow:
G^m = G0^m + K t
Here G is the final average grain size, G0 is the initial average grain size, t is time, m is a factor between 2 and 4, and K is a factor given by:
K = K0 exp(-Q / (R T))
Here Q is the molar activation energy, R is the ideal gas constant, T is absolute temperature, and K0 is a material-dependent factor.
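A minimal numerical sketch of this model follows; the exponent, pre-factor, activation energy and temperature below are arbitrary illustrative values, not data for any particular material.

import math

# Illustrative parameters (assumed, not measured material data)
G0 = 1.0e-6    # initial average grain size, m
m = 3          # growth exponent, typically between 2 and 4
K0 = 1.0e-10   # pre-exponential factor, m^m per second
Q = 2.0e5      # molar activation energy, J/mol
R = 8.314      # ideal gas constant, J/(mol K)
T = 1500.0     # absolute temperature, K

K = K0 * math.exp(-Q / (R * T))          # rate constant at this temperature

for t in (60, 600, 3600):                # sintering times in seconds
    G = (G0**m + K * t) ** (1.0 / m)     # G^m = G0^m + K t
    print(f"t = {t:5d} s  ->  average grain size ~ {G * 1e6:.1f} micrometres")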
Reducing grain growth
If a dopant is added to the material (example: Nd in BaTiO3) the impurity will tend to stick to the grain boundaries. As the grain boundary tries to move (as atoms jump from the convex to concave surface) the change in concentration of the dopant at the grain boundary will impose a drag on the boundary. The original concentration of solute around the grain boundary will be asymmetrical in most cases. As the grain boundary tries to move the concentration on the side opposite of motion will have a higher concentration and therefore have a higher chemical potential. This increased chemical potential will act as a backforce to the original chemical potential gradient that is the reason for grain boundary movement. This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth.
Fine second phase particles
If particles of a second phase, which are insoluble in the matrix phase, are added to the powder in the form of a much finer powder, then this will decrease grain boundary movement. When the grain boundary tries to move past the inclusion, diffusion of atoms from one grain to the other will be hindered by the insoluble particle, since it is beneficial for particles to reside in the grain boundaries and they exert a force in the opposite direction to the grain boundary migration. This effect is called the Zener effect after the man who estimated this drag force per particle to be
F = π r λ
where r is the radius of the particle and λ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is
f = (4/3) π r^3 N
assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:
n = 3f / (2 π r^2)
Now, assuming that the grains grow only due to the influence of curvature, the driving force of growth is 2λ/R, where (for a homogeneous grain structure) R approximates to the mean diameter of the grains. With this, the critical diameter that has to be reached before the grains cease to grow is set by balancing the total drag force nF against the driving force:
n F = 2λ / D_crit
This can be reduced to
D_crit = 4r / (3f)
so the critical diameter of the grains depends on the size and volume fraction of the particles at the grain boundaries.
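For illustration with assumed numbers (not values from this text): second-phase particles of radius r = 0.05 micrometres at a volume fraction f = 0.03 give

D_crit = 4 × (0.05 micrometres) / (3 × 0.03) ≈ 2.2 micrometres,

so even a few percent of a fine, insoluble second phase can pin the grain size at a couple of micrometres.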
It has also been shown that small bubbles or cavities can act as inclusions, hindering grain boundary motion in the same way.
More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.
Natural sintering in geology
Siliceous sinter is a deposit of opaline or amorphous silica that occurs as incrustations near hot springs and geysers. It sometimes forms conical mounds, called geyser cones, but can also form a terrace. The main agents responsible for the deposition of siliceous sinter are algae and other vegetation in the water. Alteration of wall rocks can also form sinters near fumaroles and in the deeper channels of hot springs. Examples of siliceous sinter are geyserite and fiorite. They can be found in many places, including Iceland, New Zealand and the U.S.A. (Yellowstone National Park, Wyoming; Steamboat Springs, Colorado).
Calcareous sinter is also called tufa, calcareous tufa, or calc-tufa. It is a deposit of calcium carbonate, as with travertine. It is formed by so-called petrifying springs, which are quite common in limestone districts: their calcareous waters deposit a sintery incrustation on surrounding objects. The precipitation is assisted by mosses and other vegetable structures, which leave cavities in the calcareous sinter after they have decayed. A well-known example is the petrifying spring at Pamukkale, Turkey.
- Capacitor Discharge Sintering
- Ceramic engineering
- Selective laser sintering, a rapid prototyping technology.
- Spark plasma sintering
- Yttria-stabilized zirconia
- High-temperature superconductors
- Metal clay
- W. David Kingery - a pioneer of sintering methods
For the geological aspect:
- Kingery, W. David; Bowen, H. K.; Uhlmann, Donald R. (April 1976). Introduction to Ceramics (2nd ed.). John Wiley & Sons, Academic Press. ISBN 0-471-47860-1.
- "endo gas".
- "Materials Science and Engineering: R: Reports : Consolidation/synthesis of materials by electric current activated/assisted sintering". ScienceDirect. Retrieved 2011-09-30.
- Salvatore Grasso et al. (2009). "Electric current activated/assisted sintering (ECAS): a review of patents 1906–2008". Sci. Technol. Adv. Mater. 10 (5): 053001. doi:10.1088/1468-6996/10/5/053001.
- Tuan, W.H.; Guo, J.K. (2004). Multiphased ceramic materials: processing and potential. Springer. ISBN 3-540-40516-X.
- Hulbert, D. M. et al. (2008). "The Absence of Plasma in 'Spark Plasma Sintering'". Journal of Applied Physics 104: 3305.
- Anselmi-Tamburini, U. et al. in Sintering: Nanodensification and Field Assisted Processes (Castro, R. & van Benthem, K.) (Springer Verlag, 2012).
- Palmer, R.E.; Wilde, G. (December 22, 2008). Mechanical Properties of Nanocomposite Materials. EBL Database: Elsevier Ltd. ISBN 978-0-08-044965-4.
- Smallman R. E., Bishop, Ray J (1999). Modern physical metallurgy and materials engineering: science, process, applications. Oxford : Butterworth-Heinemann. ISBN 978-0-7506-4564-5.
- Mittemeijer, Eric J. (2010). Fundamentals of Materials Science The Microstructure–Property Relationship Using Metals as Model Systems. Springer Heidelberg Dordrecht London New York. pp. 463–496. ISBN 978-3-642-10499-2.
- Kang, Suk-Joong L. (2005). Sintering: Densification, Grain Growth, and Microstructure. Elsevier Ltd. pp. 9–18. ISBN 978-0-7506-6385-4.
- Robert W. Cahn, Peter Haasen (1996). Physical Metallurgy (Fourth Edition). pp. 2399–2500. ISBN 978-0-444-89875-3.
- C. Barry Carter, M. Grant Norton (2007). Ceramic Materials: Science and Engineering. Springer Science+Business Media, LLC. pp. 427–443. ISBN 978-0-387-46270-7.
- Robert W. Cahn, Peter Haasen (1996). Physical Metallurgy(Fourth Edition). ISBN 978-0-444-89875-3.
- Smith, Cyril S. (February 1948). Introduction to Grains, Phases and Interphases: an Introduction to Microstructure.
- Sinter in thefreedictionary.com.
- sinter in Encyclopædia Britannica.
- Chiang, Yet-Ming; Birnie, Dunbar P.; Kingery, W. David (May 1996). Physical Ceramics: Principles for Ceramic Science and Engineering. John Wiley & Sons. ISBN 0-471-59873-9.
- Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 0-8493-6594-5.
- German, R.M. (1996). Sintering Theory and Practice. John Wiley & Sons, Inc. ISBN 0-471-05786-X.
- Kang, Suk-Joong L. (2005). Sintering (1st ed.). Oxford: Elsevier, Butterworth Heinemann. ISBN 0-7506-6385-5.
| http://en.wikipedia.org/wiki/Sintering | 13
78 | Miscellaneous On-Line Topics for
Return to Main Page
Exercises for This Topic
Index of On-Line Text
Everything for Calculus
Everything for Finite Math
Everything for Finite Math & Calculus
Domain and Range
It sometimes happens that one function "undoes" what another function does. For instance, if f(x) = 2x and g(x) = x/2, then f doubles input numbers, while g does the opposite. We shall refer to f and g as inverse functions. But before we can discuss inverse functions properly, we shall first need to review what is meant by the domain of a function, discuss the range of a function, and look at the manner in which functions interact with each other.
First, we review the concept of "domain" from the book.
Domain of a Function
Related to the above concept is the concept of the "range" of a function.
Range of a Function
If we think of a function as a machine -- in goes x, out goes f(x) -- then the range of f is the set of all possible numbers that it spews out.
Here is a graphical way of obtaining the range of a function. As an example, let us use f(x) = x^2, whose graph is shown below. The range of f consists of all the possible y-values we can get. This amounts to finding all the heights of points on the graph of f. To find this graphically, pretend that a pair of elevator doors come in from both sides and squash the graph onto the y-axis. Then the resulting "squashed" graph gives the range of the function.
Looking at the "squashed" graph, we find it covers the interval [0, +∞) on the y-axis, confirming that the range is [0, +∞).
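A quick numerical way to see the same thing (a sketch; the sampling grid is an arbitrary choice):

# Sample f(x) = x^2 on a grid and inspect the smallest and largest outputs.
xs = [i / 100.0 for i in range(-500, 501)]   # x running from -5 to 5
ys = [x * x for x in xs]
print(min(ys), max(ys))   # 0.0 and 25.0: on [-5, 5] the outputs fill [0, 25];
                          # letting x range over all real numbers gives [0, +infinity)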
We have seen in Section 1.2 of the textbook that a graph in the xy-plane is the graph of a function with independent variable x if it passes the "vertical line test." That is, if each vertical line passes through at most one point of the graph.
Now pretend that we have the graph of some function that passes the "horizontal line test." In other words, suppose that each horizontal line passes through at most one point of the graph. The next figure shows the graphs of some functions, one of which passes the horizontal line test, and one of which fails it. Those that pass it are injective. (For an algebraic definition, see below.)
Each horizontal line passes through at most one point of the graph.
Some horizontal lines pass through more than one point of the graph.
To interpret what it means algebraically for a function to be one-to-one, take a look at the graph of a function that is not one-to-one.
First, notice that the horizontal line through y = 2 passes through two points: (-4, 2) and (0, 2). Another way of saying this is that f(-4) = f(0) = 2. Similarly, f(1) = f(3) = 0, so that again we have different x-values that give the same value for f(x). Also, f(-1) = f(-3) = 4.
This does not happen in the case of one-to-one functions; it cannot happen that f(a) = f(b) for two different x-values a and b. It is for this reason that we call such a function one-to-one. (One-to-one functions are also called injective.)
Example 2 Determining Whether a Function is One-to-One
Which of the following functions is one-to-one?
All we need to do is sketch their graphs and decide which of them pass the horizontal line test.
(a) The graph of f(x) = 4 - x^2 fails the horizontal line test, since the line y = 0 passes through the two points (-2, 0) and (2, 0). In other words, f(-2) = f(2) = 0, so this function is not one-to-one.
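The same test can be run numerically by brute force (a sketch; the grid and the sample functions are arbitrary choices):

def is_one_to_one_on_grid(f, xs, tol=1e-9):
    """Return False if two different grid points give (numerically) equal values."""
    seen = []
    for x in xs:
        y = f(x)
        if any(abs(y - v) < tol for v in seen):
            return False
        seen.append(y)
    return True

xs = [i / 10.0 for i in range(-40, 41)]                  # grid on [-4, 4]
print(is_one_to_one_on_grid(lambda x: 4 - x * x, xs))    # False: e.g. f(-2) = f(2) = 0
print(is_one_to_one_on_grid(lambda x: x ** 3, xs))       # True on this grid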
Suppose you take a number x, double it and then halve the answer. You expect to get x back again. In a sense, halving a number is the reverse process of doubling it. Similarly, if you start with x, cube it, and then take the cube root of the answer, you again get back x. Thus taking the cube root of a number is the reverse process of cubing it. Let us express these simple situations in terms of functions.
Example 3 Composing Inverse Functions
Let f(x) = 2x (the doubling function), and let g(x) = x/2 (the halving function). Calculate g(f(x)) and f(g(x)).
To evaluate expressions such as these, we start from the inside and work outward:
g(f(x)) = g(2x)        We substituted f(x) = 2x
        = 2x/2 = x     Since g(a) = a/2
f(g(x)) = f(x/2)       We substituted g(x) = x/2
        = 2(x/2) = x   Since f(a) = 2a
Thus, g(f(x)) = x and f(g(x)) = x. This is how we say that the halving function and the doubling function are the "reverse" of each other. Formally, we refer to f and g as being inverse functions.
Before we go on...
Here is a nice way of visualizing what is going on. Since g(f(x)) means "first apply f and then apply g," we can think of this as feeding the output of f into g, and seeing what we get. Here is an illustration of this process.
See if you can illustrate the corresponding process for f(g(x)).
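A few lines of code make the same point (a sketch; the test values are arbitrary):

def f(x): return 2 * x    # the doubling function
def g(x): return x / 2    # the halving function

for x in (-3, 0, 0.5, 7):
    assert g(f(x)) == x   # halving undoes doubling
    assert f(g(x)) == x   # doubling undoes halving
print("g(f(x)) = x and f(g(x)) = x for every test value")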
The functions f and g are a pair of inverse functions if g(f(x)) = x for every x in the domain of f, and f(g(x)) = x for every x in the domain of g.
When f and g are inverse functions, we write g(x) as f-1(x).
Graphing Inverse Functions
Q How are the graphs of a function and its inverse related?
A Take a look at the figure, which shows the graphs of f(x) = x^2 + 1 and its inverse, f^-1(x) = (x - 1)^(1/2).
We have also included the graph of y = x to show the symmetry. Pretend that you held the two ends of the line y = x and flipped the portion of the coordinate plane shown so that the x-axis wound up on top of the original y-axis. Then the graph of f would land exactly on top of the graph of f-1. In other words:
Also notice something else from the graph: The range of f is [2, +∞) and the domain of f^-1 is also [2, +∞). This is true in general:
(Note that, for example, (2, 5) is a point on the graph of f, and the corresponding point on the graph of f^-1 is (5, 2). This is the way that individual points on the two graphs correspond: we interchange coordinates to go from the graph of f to the graph of f^-1.)
Q Why is this so?
A Let's suppose that the inverse functions we are considering are called f and f-1. Then their graphs are the graphs of the equations y = f(x) and y = f-1(x). Look at the equation y = f(x) for a moment. If we apply f-1 to both sides, we get f-1(y) = f-1(f(x)) = x, in other words, x = f-1(y). Thus, a point on the graph of f also satisfies the equation x = f-1(y). In other words, it is a point on the graph of f-1 if the x- and y-axes are interchanged. This means that the graph of f is the same as the graph of f-1, but with the roles of x and y interchanged. An easy way to interchange x and y is to flip the coordinate plane about the line y = x as shown below.
Q Do all functions have inverses?
A No. Think of how the graph of a function is related to the graph of its inverse. If the inverse function is to exist, then its graph is obtained from the graph of the original function by flipping about the line y = x. Now the resulting graph had better be the graph of a function, so it had better pass the vertical line test. But vertical lines correspond to horizontal lines under the operation of flipping. Thus, in order for the graph of the inverse to pass the vertical line test, it had better be the case that the function we started with passed the horizontal line test. This amounts to saying that the original function had better be one-to-one if it is to have an inverse!
Q Do all one-to-one functions have inverses?
A Yes. If f is one-to-one, then the flipping operation results in a graph that passes the vertical line test, and so is the graph of a function. A little thought will convince you that this function undoes what the function f did-in other words, that the new function is the inverse of f.
Here is a summary of what we have learned so far.
Graphing the Inverse of a One-to-One Function
Finding Inverse Functions Algebraically
Q I can obtain the graph of f-1 by flipping the graph of the one-to-one function f about the line y = x. This is a graphical way of obtaining the inverse. Is there an algebraic way of doing this? In other words, if I am given a formula for f(x), how do I get the formula for f-1(x)?
A This is answered by the next example.
Example 4 Finding the Inverse of a Function
Find the inverse of the function f(x) = x1/3 + 2
First, we must check that this function is one-to-one. Its graph is the graph of y = x^(1/3) shifted up 2 units. To convince yourself that it passes the horizontal line test, graph it on the Function Evaluator & Grapher or the Excel Grapher. Thus the function f is one-to-one, and hence does have an inverse.
Here is a two-step method to find the inverse. Step 1: Write y = f(x) and solve for x in terms of y; here y = x^(1/3) + 2, so x^(1/3) = y - 2 and x = (y - 2)^3. Step 2: Interchange the roles of x and y to write the inverse as a function of x: f^-1(x) = (x - 2)^3.
Before we go on... This process yields the inverse because of the above discussion on graphing an inverse function: the equation associated with the inverse function of a given function f is the curve y = f(x), but with x and y reversed. Thus, to see the form of this equation, we must write it in the form x = g(y), which explains why we solved for x.
This process won't work for a function that is not one-to-one. For example, when we try to solve y = x^2 + 1 for x, we get two possible solutions: x = (y - 1)^(1/2) and x = -(y - 1)^(1/2). In this case, the given function f is not invertible; if you look at its graph (a parabola) and flip it about the line y = x, you get a curve (a sideways parabola) that is not the graph of a function, as it fails the vertical line test. Alternatively, the graph of f fails the horizontal line test.
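A quick numerical sanity check of the worked example above (a sketch; the test values are kept non-negative so that the fractional power used for the cube root stays real):

def f(x):
    return x ** (1.0 / 3.0) + 2   # the original function, for x >= 0

def f_inv(x):
    return (x - 2) ** 3           # the claimed inverse

for x in (0.0, 1.0, 4.0, 10.0):
    assert abs(f_inv(f(x)) - x) < 1e-9              # f_inv undoes f
    assert abs(f(f_inv(x + 2)) - (x + 2)) < 1e-9    # f undoes f_inv (arguments >= 2)
print("f and f_inv undo each other on the test values")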
Here is one for you to participate in.
Logarithmic and Exponential Functions
Here is the graph of the exponential function f(x) = 2^x.
To obtain its inverse, we set
y = 2^x
and solve for x. But doing this amounts to nothing more than writing the above equation in logarithmic form:
x = log_2(y).
Hence the inverse of the exponential function f(x) = 2^x is
f^(-1)(x) = log_2(x).
Here are the graphs of f and f-1 on the same set of axes.
The relationship between logarithmic and exponential functions does not, of course, depend on the base 2, and we have the following more general result.
Exponential and Logarithmic Functions
The following identities follow from the inverse relationship between f and g: log_b(b^x) = x for every real x, and b^(log_b(x)) = x for every positive x.
In particular, we can choose the base b to be either e or 10 and obtain: ln(e^x) = x and e^(ln x) = x; log(10^x) = x and 10^(log x) = x.
Q Is there a direct way of seeing why those identities are true?
A Look at the first identity, log_b(b^x) = x.
The left-hand side, log_b(b^x), means the power to which you must raise b in order to get b^x, and this is obviously the x power. Turning to the second identity, b^(log_b(x)) = x,
the left-hand side is, in words, b raised to the power to which you must raise b in order to get x. Surely that must give you x! (It's like saying "The name of the person whose name is Earl, is Earl!")
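A few lines of Python make the same point numerically (a sketch; the bases, test values, and tolerance are arbitrary choices):

```python
import math

# Spot-check log_b(b**x) = x and b**(log_b(x)) = x for b = 2, e, and 10.
for b in (2, math.e, 10):
    for x in (0.5, 1.0, 3.7):
        assert abs(math.log(b ** x, b) - x) < 1e-9   # log_b(b^x) = x
        assert abs(b ** math.log(x, b) - x) < 1e-9   # b^(log_b x) = x
print("both identities check out numerically")
```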
You can now go on and try the exercise set for this topic.
Last Updated:February, 2003 | http://people.hofstra.edu/stefan_waner/realworld/calctopic1/inverses.html | 13 |
50 | Open to all Middle and High School Classes
Division I – 6th – 8th grade
Division II – 9th – 12th grade
Due: January 25, 2013
Table of Contents
- The Challenge
- Range of Activities
- Essential Questions
- Student Outcomes
- Evaluation Rubric
- Curricular Goals
Imagine a game show that poses a question to two experts. Each expert gives a different answer and rationale for their answer. The contestant in this game show must decide who is telling the truth and who is bluffing.
Here’s an example: A biologist has put a single bacterium in a jar with unlimited nutrients at 11:00 pm. The bacteria double every minute. The jar is exactly full at midnight (an hour later). At what time was the jar half full?
- A) 11:30 pm
- B) 11:45 pm
- C) 11:59 pm
Only one of the above answers is correct (C).
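If you want to convince yourself before filming, a few lines of Python reproduce the reasoning (a sketch only; the variable names are arbitrary):

```python
# One bacterium at 11:00 pm, doubling every minute, jar exactly full at midnight.
population = 1
full = 2 ** 60                    # population that fills the jar at minute 60
for minute in range(61):          # minutes after 11:00 pm
    if population == full // 2:
        print("jar is half full at minute", minute)   # prints 59, i.e. 11:59 pm
    population *= 2
```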
In this Challenge, create a video (about 3 minutes, no more than 4) that presents a game show that answers this question on exponential growth as well as several more, which the group will create.
- For Division I schools: Please create two additional questions (also related to exponential growth in a way that expands our understanding of the concept), each with three possible answers. These questions do not need to be related to bacteria.
- For Division II schools: Please create three additional questions (also related to exponential growth in ways that expand our understanding of the concept), each with three possible answers. At least one of the new questions must deal directly with the visualizationof exponential growth. When presenting this question, the group must display a visual representation. These questions do not need to be related to bacteria.
- Consider using volume as a guiding concept in addition to simply using amount. For instance, if we assume that each bacterium looks like a microscopic cube that is 10^-7 meters (one ten-millionth of a meter) on a side, each bacterium has a volume of 10^-7 x 10^-7 x 10^-7 = 10^-21 cubic meters. If you suppose the bacteria are able to continue doubling until 1:00 am, you'll find that the bacteria now occupy a volume so large that they cover the entire surface of the Earth in a layer more than 6 feet deep! So, how long would it take to cover your school? Your town?
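The rough arithmetic behind that claim can be checked with a few lines of Python (a sketch: the Earth-surface-area figure and the unit conversion are assumed values, not given in the Challenge):

```python
# 11:00 pm to 1:00 am is 120 minutes of doubling from a single bacterium.
cell_volume = 1e-21                 # cubic meters per bacterium, from the text
count = 2 ** 120                    # bacteria after 120 doublings
total_volume = count * cell_volume  # roughly 1.3e15 cubic meters
earth_surface = 5.1e14              # square meters (assumed figure for Earth's surface)
depth_m = total_volume / earth_surface
print(depth_m, "meters deep, about", depth_m * 3.281, "feet")   # ~2.6 m, ~8.5 ft
```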
The game show will have two so-called experts. For each question, there are three possible scenarios:
- One expert is telling the truthful answer, while the other is bluffing;
- Both experts are bluffing and the third answer is actually the correct answer;
- Both experts are telling the truth (keep in mind that in mathematics there is often more than one right answer).
All experts must justify (provide rationalizations) why they believe their answer is correct, in an attempt to convince the contestants that they are telling the truth.
- In the end, the real answer needs to be revealed, including a rationalization as to why this answer is correct.
The exact format of the game show is up to the group (is there a contestant guessing or is it the video watchers at home trying to guess? What is the role of the host? Is the tone of the show serious or funny? etc.).
- Game show video (this is the only Meridian Stories deliverable)
- Final verbatim script, which includes written explanations of the problems’ answers.
- Investigation of the properties of exponential growth and the various applications of this concept in life
- Creative Brainstorming about compelling ways to communicate the content inside of a specific TV genre – the game show
- Script writing
- Video – Pre-production, Production, Post-production
- Directing, Casting, Rehearsing, Video Editing, Audio Editing
We recommend that this Meridian Stories Challenge take place inside of a three to four week time frame. The students must work in teams of 3-4. All internal reviews by the teacher are at the discretion of the teacher. Below is a suggested breakdown for the students’ work.
During Phase One, student teams will:
- Work through and solve the given problem.
- Focus on justifying why your answer is the correct one.
- Also, come up with (false) explanations for one of the incorrect answers, as if you were trying to convince someone to pick the wrong answer.
- Do some research and complete a few problems on exponential growth, until the concept is clear to all group members.
- Be sure to understand why exponential growth applies to the given problem.
- Using the given problem as an example, begin to develop your own problem(s) with 3 potential answers each.
- Middle School Groups: Create 2 additional problems.
- High School Groups: Create 3 additional problems. One must deal directly with visualization of exponential growth. Consider using volume as a guiding concept.
Meridian Stories provides two forms of support for the student teams. Recommended reviews, as a team, for this Challenge include:
- Media Innovators and Artists: On Mathematics in Everyday Life – Eric Gaze; On Directing Comedy – Davis Robinson; On Acting – Abbie Killeen; On Editing – Tom Pierce
- Meridian Tips: "Creative Brainstorming Techniques"; "Producing: Tips for the Shoot"
During Phase Two, student teams will:
- Complete development and scripting of additional exponential growth problem(s) and answers.
- Brainstorm about the exact format of the game show.
- Is there a host?
- Are the ‘experts’ supposed to be ‘math experts’ or celebrities? Or someone else?
- How do you visualize this playing out? Against what kind of setting – some thing colorful and playful, or academic?
- What is the tone of this video? Is it comic or serious?
- Is there, perhaps, another context for these questions – as if this game show were a part of a larger narrative and the correct answers to these questions could mean the difference between…? You decide.
- Once you have mapped out the general format for the presentation of the content, begin pre-production. This primarily includes casting the roles of the experts and possibly the host; choosing the setting/location for the game show; choosing the costumes and props for the characters; and planning the logistics for the shoot.
- Rehearse and block the game show in your chosen location.
During Phase Three, student teams will:
- Finalize the script for the game show. The running time should be approximately 3 minutes (no more than 4 minutes).
- Shoot the game show video.
- Edit the video.
- Post-produce the video, adding music, sound effects, etc. as desired.
- Why is an exponential model appropriate for bacteria growth?
- How does the concept of doubling relate to exponential growth, which is defined as a constant rate of growth per unit of time change?
- Why does using an exponential growth model work in numerous different situations/problems?
- How can you recognize a situation in which exponential growth is appropriate to use?
- How does doubling, related to exponential growth, always result in astronomically large numbers no matter what the starting number or the period of change is?
- How fast does it grow?
- How can creating a visual representation enhance your understanding of the explosive growth of the exponential function?
- How does the need to justify an answer enhance your understanding of the concept?
- What are the challenges of creating an educational as well as an entertaining video program?
- How has working on a team changed the learning experience?
- The student will understand the definition of exponential growth as multiplying by a constant growth factor per unit of time change.
- The student will understand doubling as a specific instance of exponential growth.
- The student will be able to identify situations that can be modeled by exponential growth.
- The student will understand the explosive growth nature of the exponential function regardless of starting conditions.
- The student will gain a deeper understanding of exponential growth through constructing visual representations and creating viable rationales.
- The student will have explored the often-conflicting relationship between education and entertainment, and the spaces where the two can co-exist.
- The student will know the basic constructs of creating a game show program.
- The student will have an increased awareness of the challenges and rewards of team collaboration.
|CONTENT COMMAND – Clear understanding of exponential growth (its definition, modeling applications, and explosive nature) as demonstrated through question answers, question development, and answer justifications.|
|Correct Answers||The correct answers are either inaccurate or not clearly communicated. It is unclear that the students understand exponential growth||The correct answers are accurate, but not presented fully. The students seem to have a basic understanding of exponential growth||The correct answers are accurate and presented fully. The students have a clear understanding of exponential growth|
|Question Development||The newly developed question(s) are not clearly related to exponential growth||The newly developed question(s) are related to exponential growth, but don’t add to our understanding of the topic||The newly developed question(s) are directly related to exponential growth, and add to our understanding of the topic|
|Incorrect Answer Justifications||The incorrect answer(s) and their justifications are not well crafted around the content||The incorrect answer(s) and their justifications are reasonably plausible||The incorrect answer(s) and their justifications are plausible, engaging and thought provoking|
|STORYTELLING COMMAND – Effective use of character, the game show format, and tone/mood to create an engaging program.|
|Character||The presentation of the experts (and others) as characters is not particularly engaging or suitable||The presentation of the experts (and others) serves the game show effectively||The presentation of the experts (and others) is engaging and entertaining|
|Tone/Mood||The tone and/or mood of the game show is unclear or detracts from the overall engagement with the game show||The tone and/or mood are interesting choices that at times enhance our engagement with the video||The tone and/or mood are well chosen and enhance our engagement with the video|
|MEDIA COMMAND – Effective use of media to communicate narrative.|
|Directing/Acting||The directing and acting lack coherence and discipline||The directing and acting are solid, but inconsistently engaging||The directing is clear and coherent and the acting is convincing and believable|
|Setting/Format||The setting and creative approach to the game show don’t enhance our understanding and enjoyment||The setting and creative approach to the game show are interesting choices, but inconsistently engaging||The setting and creative approach enhance our enjoyment and understanding of the game show|
|Editing/Music/Sound||The game show feels patched together and the overall editing and use of music/sound detracts from the game show||The game show flows, but there are occasional editing/sound/musical distractions||The game show is edited cleanly and effectively, and the addition of music and/or sound enhance our enjoyment|
|21ST CENTURY SKILLS COMMAND (for teachers only) – Effective use of collaborative thinking, creativity and innovation, and initiative and self-direction to create and produce the final project.|
|Collaborative Thinking||The group did not work together effectively and/or did not share the work equally||The group worked together effectively and had no major issues||The group demonstrated flexibility in making compromises and valued the contributions of each group member|
|Creativity and Innovation||The group did not make a solid effort to create anything new or innovative||The group was able to brainstorm new and inventive ideas, but was inconsistent in their realistic evaluation and implementation of those ideas.||The group brainstormed many inventive ideas and was able to evaluate, refine and implement them effectively|
|Initiative and Self-Direction||The group was unable to set attainable goals, work independently and manage their time effectively.||The group required some additional help, but was able to complete the project on time with few problems||The group set attainable goals, worked independently and managed their time effectively, demonstrating a disciplined commitment to the project|
The Exponential Growth Game Show addresses a range of curricular objectives that have been articulated by the new Common Core Curricular Standards – Mathematics.
Below please find the standards that are addressed, either wholly or in part.
Common Core Curricular Standards – Mathematics
Overall Standards for Mathematical Practice
- Make sense of problems and persevere in solving them.
- Reason abstractly and quantitatively.
- Attend to the meaning of quantities.
- Construct viable arguments and critique the reasoning of others.
- Model with mathematics.
- Look for and make use of structure.
- Look for and express regularity in repeated reasoning.
High School – Functions
- Linear, Quadratic, and Exponential Models (F-LE)
- Prove that exponential functions grow by equal factors over equal intervals (F-LE 1.a).
- Recognize situations in which a quantity grows or decays by a constant percent rate per unit interval relative to another (F-LE 1.b).
- Interpret expressions for functions in terms of the situation they model (F-LE 5).
- Interpret the parameters in a linear or exponential function in terms of a context.
High School – Modeling
- Formulating tractable models, representing such models, and analyzing them is appropriately a creative process.
- Example of such situations might include: Modeling savings account balance, bacterial colony growth, or investment growth.
- Models can also shed light on the mathematical structures themselves, for example, as when a model of bacterial growth makes more vivid the explosive growth of the exponential function. | http://www.meridianstories.com/challenges/mathematics-challenge-2-exponential-growth-game-show/ | 13 |
397 | 2008/9 Schools Wikipedia Selection. Related subjects: Mathematics
|Topics in calculus|
Lists of integrals
In calculus, a branch of mathematics, the derivative is a measurement of how a function changes when the values of its inputs change. Loosely speaking, a derivative can be thought of as how much a quantity is changing at some given point. For example, the derivative of the position or distance of a car at some point in time is the instantaneous velocity, or instantaneous speed (respectively), at which that car is traveling (conversely the integral of the velocity is the car's position).
A closely-related notion is the differential of a function.
The derivative of a function at a chosen input value describes the best linear approximation of the function near that input value. For a real-valued function of a single real variable, the derivative at a point equals the slope of the tangent line to the graph of the function at that point. In higher dimensions, the derivative of a function at a point is a linear transformation called the linearization.
Differentiation and the derivative
Differentiation is a method to compute the rate at which a quantity, y, changes with respect to the change in another quantity, x, upon which it is dependent. This rate of change is called the derivative of y with respect to x. In more precise language, the dependency of y on x means that y is a function of x. If x and y are real numbers, and if the graph of y is plotted against x, the derivative measures the slope of this graph at each point. This functional relationship is often denoted y = f(x), where f denotes the function.
The simplest case is when y is a linear function of x, meaning that the graph of y against x is a straight line. In this case, y = f(x) = m x + c, for real numbers m and c, and the slope m is given by
m = Δy / Δx,
where the symbol Δ (the uppercase form of the Greek letter Delta) is an abbreviation for "change in." This formula is true because
- y + Δy = f(x+ Δx) = m (x + Δx) + c = m x + c + m Δx = y + mΔx.
It follows that Δy = m Δx.
This gives an exact value for the slope of a straight line. If the function f is not linear (i.e. its graph is not a straight line), however, then the change in y divided by the change in x varies: differentiation is a method to find an exact value for this rate of change at any given value of x.
The idea, illustrated by Figures 1-3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
In Leibniz's notation, such an infinitesimal change in x is denoted by dx, and the derivative of y with respect to x is written
dy/dx,
suggesting the ratio of two infinitesimal quantities. (The above expression is pronounced in various ways such as "d y by d x" or "d y over d x". The oral form "d y d x" is often used conversationally, although it may lead to confusion.)
The most common approach to turn this intuitive idea into a precise definition uses limits, but there are other methods, such as non-standard analysis.
Definition via difference quotients
Let y=f(x) be a function of x. In classical geometry, the tangent line at a real number a was the unique line through the point (a, f(a)) which did not meet the graph of f transversally, meaning that the line did not pass straight through the graph. The derivative of y with respect to x at a is, geometrically, the slope of the tangent line to the graph of f at a. The slope of the tangent line is very close to the slope of the line through (a, f(a)) and a nearby point on the graph, for example (a + h, f(a + h)). These lines are called secant lines. A value of h close to zero will give a good approximation to the slope of the tangent line, and smaller values (in absolute value) of h will, in general, give better approximations. The slope of the secant line is the difference between the y values of these points divided by the difference between the x values, that is,
(f(a + h) − f(a)) / h.
This expression is Newton's difference quotient. The derivative is the value of the difference quotient as the secant lines get closer and closer to the tangent line. Formally, the derivative of the function f at a is the limit
f′(a) = lim_{h→0} (f(a + h) − f(a)) / h
of the difference quotient as h approaches zero, if this limit exists. If the limit exists, then f is differentiable at a. Here f′ (a) is one of several common notations for the derivative ( see below).
Equivalently, the derivative satisfies the property that
lim_{h→0} (f(a + h) − f(a) − f′(a)h) / h = 0,
which has the intuitive interpretation (see Figure 1) that the tangent line to f at a gives the best linear approximation
f(a + h) ≈ f(a) + f′(a)h
to f near a (i.e., for small h). This interpretation is the easiest to generalize to other settings ( see below).
Substituting 0 for h in the difference quotient causes division by zero, so the slope of the tangent line cannot be found directly. Instead, define Q(h) to be the difference quotient as a function of h:
Q(h) = (f(a + h) − f(a)) / h.
Q(h) is the slope of the secant line between (a, f(a)) and (a + h, f(a + h)). If f is a continuous function, meaning that its graph is an unbroken curve with no gaps, then Q is a continuous function away from the point h = 0. If the limit exists, meaning that there is a way of choosing a value for Q(0) which makes the graph of Q a continuous function, then the function f is differentiable at the point a, and its derivative at a equals Q(0).
In practice, the continuity of the difference quotient Q(h) at h = 0 is shown by modifying the numerator to cancel h in the denominator. This process can be long and tedious for complicated functions, and many short cuts are commonly used to simplify the process.
The squaring function f(x) = x² is differentiable at x = 3, and its derivative there is 6. This is proven by writing the difference quotient as follows:
(f(3 + h) − f(3)) / h = ((3 + h)² − 3²) / h = (9 + 6h + h² − 9) / h = (6h + h²) / h = 6 + h.
Then we get the simplified function in the limit:
lim_{h→0} (6 + h) = 6.
The last expression shows that the difference quotient equals 6 + h when h is not zero and is undefined when h is zero. (Remember that because of the definition of the difference quotient, the difference quotient is always undefined when h is zero.) However, there is a natural way of filling in a value for the difference quotient at zero, namely 6. Hence the slope of the graph of the squaring function at the point (3, 9) is 6, and so its derivative at x = 3 is f '(3) = 6.
More generally, a similar computation shows that the derivative of the squaring function at x = a is f '(a) = 2a.
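The same conclusion can be seen numerically. The following Python sketch (step sizes chosen arbitrarily) tabulates the difference quotient for the squaring function at 3 and shows it approaching 6:

```python
# Difference quotient (f(3+h) - f(3)) / h for f(x) = x**2, for shrinking h.
def f(x):
    return x ** 2

for h in (1.0, 0.1, 0.01, 0.001, 1e-6):
    q = (f(3 + h) - f(3)) / h
    print(h, q)        # 7.0, 6.1, 6.01, 6.001, ... closing in on 6
```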
Continuity and differentiability
If y = f(x) is differentiable at a, then f must also be continuous at a. As an example, choose a point a and let f be the step function which returns a value, say 1, for all x less than a, and returns a different value, say 10, for all x greater than or equal to a. f cannot have a derivative at a. If h is negative, then a + h is on the low part of the step, so the secant line from a to a + h will be very steep, and as h tends to zero the slope tends to infinity. If h is positive, then a + h is on the high part of the step, so the secant line from a to a + h will have slope zero. Consequently the secant lines do not approach any single slope, so the limit of the difference quotient does not exist.
However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function y = |x| is continuous at x = 0, but it is not differentiable there. If h is positive, then the slope of the secant line from 0 to h is one, whereas if h is negative, then the slope of the secant line from 0 to h is negative one. This can be seen graphically as a "kink" in the graph at x = 0. Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function y = x^(1/3) (the cube root of x) is not differentiable at x = 0.
Most functions which occur in practice have derivatives at all points or at almost every point. However, a result of Stefan Banach states that the set of functions which have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that differentiable functions are very atypical among continuous functions. The first known example of a function that is continuous everywhere but differentiable nowhere is the Weierstrass function.
The derivative as a function
Let f be a function that has a derivative at every point a in the domain of f. Because every point a has a derivative, there is a function which sends the point a to the derivative of f at a. This function is written f′(x) and is called the derivative function or the derivative of f. The derivative of f collects all the derivatives of f at all the points in the domain of f.
Sometimes f has a derivative at most, but not all, points of its domain. The function whose value at a equals f′(a) whenever f′(a) is defined and is undefined elsewhere is also called the derivative of f. It is still a function, but its domain is strictly smaller than the domain of f.
Using this idea, differentiation becomes a function of functions: The derivative is an operator whose domain is the set of all functions which have derivatives at every point of their domain and whose range is a set of functions. If we denote this operator by D, then D(f) is the function f′(x). Since D(f) is a function, it can be evaluated at a point a. By the definition of the derivative function, D(f)(a) = f′(a).
For comparison, consider the doubling function f(x) =2x; f is a real-valued function of a real number, meaning that it takes numbers as inputs and has numbers as outputs:
The operator D, however, is not defined on individual numbers. It is only defined on functions:
Because the output of D is a function, the output of D can be evaluated at a point. For instance, when D is applied to the squaring function,
D outputs the doubling function,
which we named f(x). This output function can then be evaluated to get f(1) = 2, f(2) = 4, and so on.
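The operator idea can be sketched directly in code. The Python snippet below (the finite step h is an arbitrary numerical choice, so D returns only an approximation to the true derivative) builds a crude D that maps a function to its difference-quotient function:

```python
# D takes a function and returns an approximate derivative function.
def D(func, h=1e-6):
    def derivative(x):
        return (func(x + h) - func(x)) / h   # Newton's difference quotient
    return derivative

square = lambda x: x * x
double = D(square)                 # D applied to the squaring function
print(double(1), double(2))        # approximately 2 and 4, as in the text
```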
Let f be a differentiable function, and let f′(x) be its derivative. The derivative of f′(x) (if it has one) is written f′′(x) and is called the second derivative of f. Similarly, the derivative of a second derivative, if it exists, is written f′′′(x) and is called the third derivative of f. These repeated derivatives are called higher-order derivatives.
A function f need not have a derivative, for example, if it is not continuous. Similarly, even if f does have a derivative, it may not have a second derivative. For example, let
f(x) = x·|x| (that is, f(x) = x² for x ≥ 0 and f(x) = −x² for x < 0).
An elementary calculation shows that f is a differentiable function whose derivative is
f′(x) = 2|x|.
f′(x) is twice the absolute value function, and it does not have a derivative at zero. Similar examples show that a function can have k derivatives for any non-negative integer k but no (k + 1)-order derivative. A function that has k successive derivatives is called k times differentiable. If in addition the kth derivative is continuous, then the function is said to be of differentiability class Ck. (This is a stronger condition than having k derivatives. For an example, see differentiability class.) A function that has infinitely many derivatives is called infinitely differentiable or smooth.
On the real line, every polynomial function is infinitely differentiable. By standard differentiation rules, if a polynomial of degree n is differentiated n times, then it becomes a constant function. All of its subsequent derivatives are identically zero. In particular, they exist, so polynomials are smooth functions.
The derivatives of a function f at a point x provide polynomial approximations to that function near x. For example, if f is twice differentiable, then
f(x + h) ≈ f(x) + f′(x)h + (1/2)f′′(x)h²
in the sense that
lim_{h→0} (f(x + h) − f(x) − f′(x)h − (1/2)f′′(x)h²) / h² = 0.
If f is infinitely differentiable, then this is the beginning of the Taylor series for f.
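A numerical illustration of this quadratic approximation, using f = exp at x = 0 as an arbitrarily chosen example (a sketch, not part of the article):

```python
import math

# For f = exp at x = 0 we have f(0) = f'(0) = f''(0) = 1, so the quadratic
# approximation is 1 + h + 0.5*h**2.
for h in (0.5, 0.1, 0.01):
    error = math.exp(h) - (1 + h + 0.5 * h ** 2)
    print(h, error, error / h ** 2)   # error / h**2 also goes to 0, i.e. error is o(h**2)
```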
Notations for differentiation
The notation for derivatives introduced by Gottfried Leibniz is one of the earliest. It is still commonly used when the equation y=f(x) is viewed as a functional relationship between dependent and independent variables. Then the first derivative is denoted by
dy/dx.
Higher derivatives are expressed using the notation
dⁿy/dxⁿ
for the nth derivative of y = f(x) (with respect to x).
With Leibniz's notation, we can write the derivative of y at the point x = a in two different ways:
dy/dx |_{x = a}   or   (dy/dx)(a).
Leibniz's notation allows one to specify the variable for differentiation (in the denominator). This is especially relevant for partial differentiation. It also makes the chain rule easy to remember:
dy/dx = (dy/du) · (du/dx).
One of the most common modern notations for differentiation is due to Joseph Louis Lagrange and uses the prime mark, so that the derivative of a function f(x) is denoted f′(x) or simply f′. Similarly, the second and third derivatives are denoted f′′ and f′′′.
Beyond this point, some authors use Roman numerals such as f^(iv)
for the fourth derivative, whereas other authors place the number of derivatives in parentheses: f^(4).
The latter notation generalizes to yield the notation f (n) for the nth derivative of f — this notation is most useful when we wish to talk about the derivative as being a function itself, as in this case the Leibniz notation can become cumbersome.
Newton's notation for differentiation, also called the dot notation, places a dot over the function name to represent a derivative. If y = f(t), then
ẏ and ÿ
denote, respectively, the first and second derivatives of y with respect to t. This notation is used almost exclusively for time derivatives, meaning that the independent variable of the function represents time. It is very common in physics and in mathematical disciplines connected with physics such as differential equations. While the notation becomes unmanageable for high-order derivatives, in practice only very few derivatives are needed.
Euler's notation uses a differential operator D, which is applied to a function f to give the first derivative Df. The second derivative is denoted D2f, and the nth derivative is denoted Dnf.
If y = f(x) is a dependent variable, then often the subscript x is attached to the D to clarify the independent variable x. Euler's notation is then written
D_x y or D_x f(x),
although this subscript is often omitted when the variable x is understood, for instance when this is the only variable present in the expression.
Euler's notation is useful for stating and solving linear differential equations.
Computing the derivative
The derivative of a function can, in principle, be computed from the definition by considering the difference quotient, and computing its limit. For some examples, see Derivative (examples). In practice, once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones.
Derivatives of elementary functions
In addition, the derivatives of some common functions are useful to know.
- Derivatives of powers: if
f(x) = x^r,
where r is any real number, then
f′(x) = r·x^(r−1),
wherever this function is defined. For example, if r = 1/2, then
f′(x) = (1/2)·x^(−1/2),
and the function is defined only for non-negative x. When r = 0, this rule recovers the constant rule.
- Inverse trigonometric functions: d/dx arcsin(x) = 1/√(1 − x²), d/dx arccos(x) = −1/√(1 − x²), d/dx arctan(x) = 1/(1 + x²).
Rules for finding the derivative
In many cases, complicated limit calculations by direct application of Newton's difference quotient can be avoided using differentiation rules. Some of the most basic rules are the following.
- Constant rule: if f(x) is constant, then f′(x) = 0.
- Sum rule: (af + bg)′ = af′ + bg′
- for all functions f and g and all real numbers a and b.
- Product rule: (fg)′ = f′g + fg′
- for all functions f and g.
- Quotient rule: (f/g)′ = (f′g − fg′)/g².
- Chain rule: If f(x) = h(g(x)), then f′(x) = h′(g(x))·g′(x).
The derivative of
f(x) = x⁴ + sin(x²) − ln(x)·eˣ + 7
is
f′(x) = 4x³ + 2x·cos(x²) − eˣ/x − ln(x)·eˣ.
Here the second term was computed using the chain rule and the third using the product rule; the known derivatives of the elementary functions x², x⁴, sin(x), ln(x) and exp(x) = eˣ were also used.
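These rules can be spot-checked with a computer algebra system. The sketch below uses the third-party SymPy library (an outside tool, not mentioned in the article) to differentiate the example above:

```python
import sympy as sp

x = sp.symbols('x')
expr = x**4 + sp.sin(x**2) - sp.log(x) * sp.exp(x) + 7
print(sp.diff(expr, x))
# mathematically equal to 4*x**3 + 2*x*cos(x**2) - exp(x)*log(x) - exp(x)/x,
# i.e. the power, chain, and product rules applied term by term
```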
Derivatives in higher dimensions
Derivatives of vector valued functions
A vector-valued function y(t) of a real variable is a function which sends real numbers to vectors in some vector space Rn. A vector-valued function can be split up into its coordinate functions y1(t), y2(t), …, yn(t), meaning that y(t) = (y1(t), ..., yn(t)). This includes, for example, parametric curves in R2 or R3. The coordinate functions are real valued functions, so the above definition of derivative applies to them. The derivative of y(t) is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is,
y′(t) = lim_{h→0} (y(t + h) − y(t)) / h,
if the limit exists. The subtraction in the numerator is subtraction of vectors, not scalars. If the derivative of y exists for every value of t, then y′ is another vector valued function.
If e1, …, en is the standard basis for Rn, then y(t) can also be written as y1(t)e1 + … + yn(t)en. If we assume that the derivative of a vector-valued function retains the linearity property, then the derivative of y(t) must be
y′(t) = y1′(t)e1 + … + yn′(t)en,
because each of the basis vectors is a constant.
This generalization is useful, for example, if y(t) is the position vector of a particle at time t; then the derivative y′(t) is the velocity vector of the particle at time t.
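A small numerical sketch of this idea, using circular motion y(t) = (cos t, sin t) as an arbitrarily chosen example (step size and sample time are also arbitrary):

```python
import math

def y(t):
    return (math.cos(t), math.sin(t))

def velocity(t, h=1e-6):
    p0, p1 = y(t), y(t + h)
    return ((p1[0] - p0[0]) / h, (p1[1] - p0[1]) / h)   # componentwise difference quotient

t = 1.0
print(velocity(t))                      # approximately (-sin t, cos t)
print((-math.sin(t), math.cos(t)))
```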
Suppose that f is a function that depends on more than one variable. For instance,
f(x, y) = x² + xy + y².
f can be reinterpreted as a family of functions of one variable indexed by the other variables:
In other words, every value of x chooses a function, denoted fx, which is a function of one real number. That is,
fx(y) = x² + xy + y².
Once a value of x is chosen, say a, then f(x,y) determines a function fa which sends y to a² + ay + y²:
In this expression, a is a constant, not a variable, so fa is a function of only one real variable. Consequently the definition of the derivative for a function of one variable applies:
fa′(y) = a + 2y.
The above procedure can be performed for any choice of a. Assembling the derivatives together into a function gives a function which describes the variation of f in the y direction:
∂f/∂y (x, y) = x + 2y.
This is the partial derivative of f with respect to y. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee".
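The computation can be checked numerically. The Python sketch below (step size and sample points are arbitrary choices) compares a difference quotient in the y direction with the formula x + 2y:

```python
# Numerical y-partial of f(x, y) = x**2 + x*y + y**2, compared with x + 2*y.
def f(x, y):
    return x**2 + x*y + y**2

def df_dy(x, y, h=1e-6):
    return (f(x, y + h) - f(x, y)) / h

for (x, y) in [(1.0, 2.0), (3.0, -1.0)]:
    print(df_dy(x, y), x + 2*y)     # the two values agree to several decimal places
```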
In general, the partial derivative of a function f(x1, …, xn) in the direction xi at the point (a1, …, an) is defined to be:
∂f/∂xi (a1, …, an) = lim_{h→0} (f(a1, …, ai + h, …, an) − f(a1, …, ai, …, an)) / h.
In the above difference quotient, all the variables except xi are held fixed. That choice of fixed values determines a function of one variable
and, by definition,
In other words, the different choices of a index a family of one-variable functions just as in the example above. This expression also shows that the computation of partial derivatives reduces to the computation of one-variable derivatives.
An important example of a function of several variables is the case of a scalar-valued function f(x1,...xn) on a domain in Euclidean space Rn (e.g., on R² or R³). In this case f has a partial derivative ∂f/∂xj with respect to each variable xj. At the point a, these partial derivatives define the vector
∇f(a) = (∂f/∂x1(a), …, ∂f/∂xn(a)).
This vector is called the gradient of f at a. If f is differentiable at every point in some domain, then the gradient is a vector-valued function ∇f which takes the point a to the vector ∇f(a). Consequently the gradient determines a vector field.
If f is a real-valued function on Rn, then the partial derivatives of f measure its variation in the direction of the coordinate axes. For example, if f is a function of x and y, then its partial derivatives measure the variation in f in the x direction and the y direction. They do not, however, directly measure the variation of f in any other direction, such as along the diagonal line y = x. These are measured using directional derivatives. Choose a vector
v = (v1, …, vn).
The directional derivative of f in the direction of v at the point x is the limit
D_v f(x) = lim_{h→0} (f(x + hv) − f(x)) / h.
Let λ be a scalar. The substitution of h/λ for h changes the λv direction's difference quotient into λ times the v direction's difference quotient. Consequently, the directional derivative in the λv direction is λ times the directional derivative in the v direction. Because of this, directional derivatives are often considered only for unit vectors v.
If all the partial derivatives of f exist and are continuous at x, then they determine the directional derivative of f in the direction v by the formula:
D_v f(x) = v1 ∂f/∂x1(x) + … + vn ∂f/∂xn(x) = ∇f(x) · v.
This is a consequence of the definition of the total derivative. It follows that the directional derivative is linear in v.
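The formula can be illustrated numerically for the running example f(x, y) = x² + xy + y² (a sketch; the sample point, direction, and step size are arbitrary choices):

```python
import math

def f(x, y):
    return x**2 + x*y + y**2

def grad(x, y):
    return (2*x + y, x + 2*y)            # partial derivatives worked out by hand

def directional(x, y, v, h=1e-6):
    return (f(x + h*v[0], y + h*v[1]) - f(x, y)) / h

v = (1/math.sqrt(2), 1/math.sqrt(2))     # unit vector along the diagonal y = x
x, y = 1.0, 2.0
g = grad(x, y)
print(directional(x, y, v), g[0]*v[0] + g[1]*v[1])   # both approximately 6.36
```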
The same definition also works when f is a function with values in Rm. We just use the above definition in each component of the vectors. In this case, the directional derivative is a vector in Rm.
The total derivative, the total differential and the Jacobian
Let f be a function from a domain in R to R. The derivative of f at a point a in its domain is the best linear approximation to f at that point. As above, this is a number. Geometrically, if v is a unit vector starting at a, then f′ (a) , the best linear approximation to f at a, should be the length of the vector found by moving v to the target space using f. (This vector is called the pushforward of v by f and is usually written f * v.) In other words, if v is measured in terms of distances on the target, then, because v can only be measured through f, v no longer appears to be a unit vector because f does not preserve unit vectors. Instead v appears to have length f′ (a). If m is greater than one, then by writing f using coordinate functions, the length of v in each of the coordinate directions can be measured separately.
Suppose now that f is a function from a domain in Rn to Rm and that a is a point in the domain of f. The derivative of f at a should still be the best linear approximation to f at a. In other words, if v is a vector on Rn, then f′ (a) should be the linear transformation that best approximates f. The linear transformation should contain all the information about how f transforms vectors at a to vectors at f(a), and in symbols, this means it should be the linear transformation f′ (a) such that
lim_{h→0} ||f(a + h) − f(a) − f′(a)h|| / ||h|| = 0.
Here h is a vector in Rn, so the norm in the denominator is the standard length on Rn. However, f′ (a)h is a vector in Rm, and the norm in the numerator is the standard length on Rm. The linear transformation f′ (a), if it exists, is called the total derivative of f at a or the (total) differential of f at a.
If the total derivative exists at a, then all the partial derivatives of f exist at a. If we write f using coordinate functions, so that f = (f1, f2, ..., fm), then the total derivative can be expressed as a matrix called the Jacobian matrix of f at a:
f′(a) = (∂fi/∂xj(a)), the m×n matrix whose (i, j) entry is the partial derivative ∂fi/∂xj evaluated at a.
The existence of the Jacobian is strictly stronger than existence of all the partial derivatives, but if the partial derivatives exist and satisfy mild smoothness conditions, then the total derivative exists and is given by the Jacobian.
The definition of the total derivative subsumes the definition of the derivative in one variable. In this case, the total derivative exists if and only if the usual derivative exists. The Jacobian matrix reduces to a 1×1 matrix whose only entry is the derivative f′ (x). This 1×1 matrix satisfies the property that f(a + h) − f(a) − f′(a)h is approximately zero, in other words that
lim_{h→0} |f(a + h) − f(a) − f′(a)h| / |h| = 0.
Up to changing variables, this is the statement that the function is the best linear approximation to f at a.
The total derivative of a function does not give another function in the same way as in the one-variable case. This is because the total derivative of a multivariable function has to record much more information than the derivative of a single-variable function. Instead, the total derivative gives a function from the tangent bundle of the source to the tangent bundle of the target.
The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point.
- An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C to C. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. However, this innocent definition hides some very deep properties. If C is identified with R² by writing a complex number z as x + i y, then a differentiable function from C to C is certainly differentiable as a function from R² to R² (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear and this imposes relations between the partial derivatives called the Cauchy Riemann equations — see holomorphic functions.
- Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking such a manifold M is a space which can be approximated near each point x by a vector space called its tangent space: the prototypical example is a smooth surface in R³. The derivative (or differential) of a (differentiable) map f: M → N between manifolds, at a point x in M, is then a linear map from the tangent space of M at x to the tangent space of N at f(x). The derivative function becomes a map between the tangent bundles of M and N. This definition is fundamental in differential geometry and has many uses — see pushforward (differential) and pullback (differential geometry).
- Differentiation can also be defined for maps between infinite dimensional vector spaces such as Banach spaces and Fréchet spaces. There is a generalization both of the directional derivative, called the Gâteaux derivative, and of the differential, called the Fréchet derivative.
- One deficiency of the classical derivative is that not very many functions are differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average".
- The properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology — see, for example, differential algebra. | http://www.pustakalaya.org/wiki/wp/d/Derivative.htm | 13 |
57 | We are now ready to define the concept of a function being continuous. The idea is that we want to say that a function is continuous if you can draw its graph without taking your pencil off the page. But sometimes this will be true for some parts of a graph but not for others. Therefore, we want to start by defining what it means for a function to be continuous at one point. The definition is simple, now that we have the concept of limits: a function f is continuous at the point c if lim_{x→c} f(x) = f(c).
Note that for f to be continuous at c, the definition in effect requires three conditions:
- that f is defined at c, so f(c) exists,
- the limit of f(x) as x approaches c exists, and
- the limit and f(c) are equal.
If any of these do not hold then f is not continuous at c.
The idea of the definition is that the point of the graph corresponding to c will be close to the points of the graph corresponding to nearby x-values. Now we can define what it means for a function to be continuous in general, not just at one point.
We often use the phrase "the function is continuous" to mean that the function is continuous at every real number. This would be the same as saying the function was continuous on (−∞, ∞), but it is a bit more convenient to simply say "continuous".
Note that, by what we already know, the limit of a rational, exponential, trigonometric or logarithmic function at a point is just its value at that point, so long as it's defined there. So, all such functions are continuous wherever they're defined. (Of course, they can't be continuous where they're not defined!)
A discontinuity is a point where a function is not continuous. There are lots of possible ways this could happen, of course. Here we'll just discuss two simple ways.
The function f(x) = (x² − 9)/(x − 3) is not continuous at x = 3. It is discontinuous at that point because the fraction then becomes 0/0, which is undefined. Therefore the function fails the first of our three conditions for continuity at the point 3; 3 is just not in its domain.
However, we say that this discontinuity is removable. This is because, if we modify the function at that point, we can eliminate the discontinuity and make the function continuous. To see how to make the function continuous, we have to simplify f(x), getting x + 3. We can define a new function g(x) where g(x) = x + 3. Note that the function g(x) is not the same as the original function f(x), because g(x) is defined at x = 3, while f(x) is not. Thus, g(x) is continuous at x = 3, since g(3) = 6. However, whenever x ≠ 3, f(x) = g(x); all we did to f(x) to get g(x) was to make it defined at x = 3.
In fact, this kind of simplification is often possible with a discontinuity in a rational function. We can divide the numerator and the denominator by a common factor (in our example x − 3) to get a function which is the same except where that common factor was 0 (in our example at x = 3). This new function will be identical to the old except for being defined at new points where previously we had division by 0.
However, this is not possible in every case. For example, the function f(x) = x/x² has a common factor of x in both the numerator and denominator, but when you simplify you are left with 1/x, which is still not defined at x = 0. In this case the domain of f and of the simplified function are the same, and they are equal everywhere they are defined, so they are in fact the same function. The reason that g differed from f in the first example was because we could take it to have a larger domain and not simply that the formulas defining f and g were different.
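A few numerical values make the removability concrete (a quick Python sketch using the rational function from the example above; the sample points are arbitrary):

```python
# Values of (x**2 - 9)/(x - 3) near x = 3 approach 6, the value the simplified
# function x + 3 takes there -- which is why the discontinuity is removable.
def f(x):
    return (x**2 - 9) / (x - 3)

for x in (2.9, 2.99, 3.01, 3.001):
    print(x, f(x), x + 3)
```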
Not all discontinuities can be removed from a function. Consider this function:
f(x) = −1 for x < 0 and f(x) = 1 for x ≥ 0.
Since lim_{x→0} f(x) does not exist, there is no way to redefine f at one point so that it will be continuous at 0. These sorts of discontinuities are called nonremovable discontinuities.
Note, however, that both one-sided limits exist; lim_{x→0⁻} f(x) = −1 and lim_{x→0⁺} f(x) = 1. The problem is that they are not equal, so the graph "jumps" from one side of 0 to the other. In such a case, we say the function has a jump discontinuity. (Note that a jump discontinuity is a kind of nonremovable discontinuity.)
Just as a function can have a one-sided limit, a function can be continuous from a particular side. For a function to be continuous at a point from a given side, we need the following three conditions:
- the function is defined at the point,
- the function has a limit from that side at that point and
- the one-sided limit equals the value of the function at the point.
A function will be continuous at a point if and only if it is continuous from both sides at that point. Now we can define what it means for a function to be continuous on a closed interval: a function f is continuous on the closed interval [a, b] if it is continuous at every point of the open interval (a, b), continuous from the right at a, and continuous from the left at b.
Notice that, if a function is continuous, then it is continuous on every closed interval contained in its domain.
Intermediate Value Theorem
A useful theorem regarding continuous functions is the following: if f is continuous on the closed interval [a, b] and c is any value between f(a) and f(b), then there is at least one number x in [a, b] such that f(x) = c.
Application: bisection method
The bisection method is the simplest and most reliable algorithm to find zeros of a continuous function.
Suppose we want to solve the equation f(x) = 0. Given two points a and b such that f(a) and f(b) have opposite signs, the intermediate value theorem tells us that f must have at least one root between a and b as long as f is continuous on the interval [a, b]. If we know f is continuous in general (say, because it's made out of rational, trigonometric, exponential and logarithmic functions), then this will work so long as f is defined at all points between a and b. So, let's divide the interval in two by computing the midpoint c = (a + b)/2. There are now three possibilities:
- f(c) = 0,
- f(a) and f(c) have opposite signs, or
- f(c) and f(b) have opposite signs.
In the first case, we're done. In the second and third cases, we can repeat the process on the sub-interval where the sign change occurs. In this way we hone in to a small sub-interval containing the zero. The midpoint of that small sub-interval is usually taken as a good approximation to the zero.
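Written out as a program, the procedure looks like this (a minimal Python sketch; the example function, interval, and tolerance are arbitrary choices, not part of the text):

```python
# Bisection: repeatedly halve the interval containing a sign change.
def bisect(f, a, b, tolerance=1e-10):
    assert f(a) * f(b) < 0, "f(a) and f(b) must have opposite signs"
    while b - a > tolerance:
        c = (a + b) / 2
        if f(c) == 0:
            return c                  # first case: c is exactly a root
        if f(a) * f(c) < 0:
            b = c                     # sign change lies in [a, c]
        else:
            a = c                     # sign change lies in [c, b]
    return (a + b) / 2                # midpoint of the final small sub-interval

print(bisect(lambda x: x**3 - 2, 1, 2))   # approximately 1.259921..., the cube root of 2
```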
Note that, unlike the methods you may have learned in algebra, this works for any continuous function that you (or your calculator) know how to compute. | http://en.m.wikibooks.org/wiki/Calculus/Continuity | 13 |
168 | The evolution of verbal behavior in children.
Introduction
Complex language is one of the unique repertoires of the human species. Others include teaching and certain "types of imitation" (Premack, 2004), although these too may be pre- or co-requisites for certain functional uses of language. Over the last 40 years linguists have proposed theories and provided evidence related to their interpretation of the structure of language (Chomsky, 1959; Chomsky & Place, 2000; MacCorquodale, 1970; Pinker, 1999). Neuroscientists have identified neurological correlates associated with some aspects of language (Deacon, 1979, Holden, 2004). Behavior analysts have focused on the source of and controlling variables for the function of language as behavior per se (Catania, Mathews, & Shimoff, 1990; Greer & Ross, 2005; Michael, 1984; Skinner, 1957).
More recently, scholars have come to view human language as a product of evolution; "Linguists and neuroscientists armed with new types of data are moving beyond the non-evolutionary paradigm once suggested by Noam Chomsky and tackling the origins of speech head-on." (Culotta & Brooks-Hanson, 2004, p. 1315). The current work focuses on the evolution of both the non-oral motor and oral components of speech (Deacon, 1997; Holden, 2004), although some arguments are characterized necessarily more by theory than data.
Despite the evidence that primates and pigeons can be taught certain features of verbal behavior (D. Premack & A. Premack, 2003; Savage-Rumbaugh, Rumbaugh, & Boysen, 1978; Epstein, Lanza, & Skinner, 1980), the speaker-as-own listener capability makes complex verbal behavior possible and may represent what is most unique about human verbal functions (Barnes-Holmes, Barnes-Holmes, & Cullinan, 2001; Lodhi & Greer, 1989; Horne & Lowe, 1996). Some suggest that oral communication evolved from clicking sounds to sounds of phonemes, and cite the extant clicking languages as evidence (Pennisi, 2004). It is likely that sign language and gesture predated both vocal forms; but it is the evolution of the spoken and auditory components of language that are seen as critical to the evolution of language. Some of these include changes in the anatomy of the jaw--Homo sapiens have a more flexible jaw than did Neanderthals. Also, the location of the larynx relative to the trachea is different for Homo sapiens, and this anatomical feature made it possible for humans to emit a wider range of speech sounds (Deacon, 1997). The combination of these anatomical changes, together with the identification of separate, but proximate, sites in the brain for speaking, listening, and imitation seem to be critical parts of what made spoken language possible (Deacon, 1997). The presence of these anatomical and physiological properties made it possible for the evolution of verbal functions through the process of cultural selection (Catania, 2001). The functional effects of speech sounds were acquired by the consequences provided within verbal communities. This latter focus is what constitutes the subject matter of verbal behavior.
The new foci on language, as an evolved anatomical and physiological capacity, do not necessarily suggest the existence of a universal grammar; nor, in fact, does it eliminate the possibility of an evolved universal grammar, as some argue (Pinker, 1999). Some of the linguistic neuropsychological searches for an evolved universal grammar now follow the PET and MRI trails and focus on identifying blood flow associated with the speech and hearing centers in the brain (Holden, 2004). Interesting and as important as this work may be, little, if any, is devoted to the function of language as behavior per se. Nor is it concerned with the biological or cultural evolution of verbal function in our species or in the lifespan of the individual, although anthropological linguists point to functions as the initial source. Only the research associated with Skinner's (1957) theory of verbal behavior as behavior per se, and expansions of the theory by contemporary behavior analysts, provide the means for analyzing how cultural selection gave rise to the function of language (Greer, 2002; Greer & Ross, in press; Hayes, Barnes-Holmes, & Roche, 2000; Lowe, Horne, Harris, & Randle, 2002). Currently, the linguistic, neuropsychological, and behavior analytic foci remain separate sciences, though they need not remain so (Catania, 1998). While the role of cultural selection in the evolution of verbal behavior for the species remains theoretical, the development of verbal behavior within the ontogeny of the individual is empirically verifiable.
From Theory to Research
For decades after the publication of Skinner's (1957) book on verbal behavior, the majority of the publications on the theory remained theoretical. There is now a significant body of research supporting and expanding Skinner's theory of verbal behavior. We have identified over 100 experiments devoted to testing the theory and utility for educational purposes. There is an additional significant body of related work in relational frame theory that includes at least an equal number of studies (Hayes et al., 2000). In our program of research alone, we have completed at least 48 experiments (25 published papers, several in press, and recent dissertations) and a number of replications. Our particular research program was driven by our efforts to develop schools that provide all of the components of education based solely on teaching and schooling as a scientific endeavor. While the existing work in the entire corpus of behavior analysis provided a strong foundation for a science of schooling, much was still missing. Cognitive psychology offered a plethora of theories and findings, and when they were germane to our efforts, these findings proved to be operationally synonymous to those identified in behavior analysis. However Skinner's (1957) Verbal Behavior showed the way for a research program to fill in much of what was missing in the literature in a manner that allowed us to operationalize complex cognitive repertoires.
In our commitment to a thoroughgoing scientific approach to schooling, we needed functional curricula that identified repertoires of verbal operants or higher order operants, including "generative" or "productive" verbal behavior. Our efforts included using pre-existing conceptual and applied verbal behavior research, identifying the needs of children who were missing certain repertoires, and identifying the validity of untested components of Skinner's theory through new experiments done by others and us (Greer, McCorkle, & Williams, 1989; Selinske, Greer, & Lodhi, 1991). Through this process we have been able to meet real educational needs, or at least the most pressing needs--the recognition of which were missing in the existing science of behavior and cognitive psychology. Of course, these educational voids were also apparent in normative practices in education based on pre-scientific approaches that treat teaching as an art. We needed findings that worked in the day-to-day operation of our schools, if we were to educate the "whole child." Along the way, we discovered some interesting aspects of verbal behavior that may prove useful to a behavioral developmental psychology (Baer, 1970; Bijou & Baer, 1978; Gewirtz, Baer, Roth, 1958). Indeed, the evidence suggests that we have identified what Rosales-Ruiz and Baer (1996) described as "behavioral cusps"--in our case verbal behavior cusps. Rosales-Ruiz and Baer stated that,
A cusp is a change [a change in the capability of the child] that (1) is often difficult, tedious, subtle, or otherwise problematic to accomplish, yet (2) if not made, means little or no further development is possible in its realm (and perhaps in several realms); but (3) once it is made, a significant set of subsequent developments suddenly becomes easy or otherwise highly probable which (4) brings the developing organism into contact with other cusps crucial to further, more complex, or more refined development in a thereby steadily expanding, steadily more interactive realm. (Rosales-Ruiz & Baer, 1996, p. 166). [The italics in brackets were inserted into the quotation.]
Repertoires of Verbal Behavior for Instructional Purposes
First, applications of the research findings in verbal behavior in our CABAS[R] schools led to the categorization of children for instructional purposes according to levels of verbal behavior or verbal capabilities that we extrapolated from Skinner's analysis of the components of verbal behavior (Greer, 2002). (1) Traditional diagnoses or developmental constructs are useful for some inquiries, but they are not very useful for instructional purposes. The identification of the functional verbal capabilities of children, however, that we extrapolated from Skinner's work was very helpful. Skinner described the different verbal repertoires of the speaker and the relation of the speaker and listener in terms of his observations of highly literate individuals. These repertoires seemed to constitute what individuals needed to possess if they were to be verbally competent. Moreover, those verbal functions provided operational descriptions for most of the complex educational goals that had been prescribed by educational departments throughout the western world (Greer & Keohane, 2004; Greer & McCorkle, 2003). For educational purposes, the capabilities or cusps provided us with behavioral functions for a curriculum for listening, speaking, reading, writing, and the combinations that made up complex cognitive functions.
The verbal categorization proved useful in: (a) determining the ratio of instructors to students that would produce the best outcomes for students (Table 1), (b) identifying what existing tactics from the research worked for children with and without particular verbal capabilities (see Greer, 2002, Chapters 5 and 6), (c) isolating the specific repertoires children could be taught given what each child initially brought to the table, and (d) developing a curriculum composed of functional repertoires for complex human behavior. Most importantly, we identified the verbal "developmental cusps" (Rosales-Ruiz & Baer, 1996) or specific verbal capabilities we needed to induce, if we were to make real progress with our children. The categories provided a continuum of instructional sequences and developmental interventions that offered a functional approach to cognitive academic repertoires, and supported the recasting of state and international educational standards into functional repertoires of operants or higher order operants rather than structural categories alone (Greer, 1987, 2002; Greer & McCorkle, 2003). Each of the major verbal categories also identified levels of learner independence (i.e., operational definitions of autonomy) as well as what we argue are valid measures of socialization. Table 1 lists the broad verbal stages as we have related them to independence and social function.
Much of our work as teacher scientists is devoted to experimentally identifying the prerequisite or co-requisite repertoires needed by each child to progress through the capabilities listed in Table 1. Once these were identified, we used or developed scientifically based tactics for moving children who lacked a particular verbal capability from one level of verbal capability to the next level in the continuum.
When we found it necessary, and were able, to teach the missing repertoires, the children made logarithmic increases in learning and emergent relations ensued. That is, they acquired what has been characterized in the literature as behavioral cusps. As the evidence accumulated with individual children across numerous experiments, we also began to identify critical subcomponents of the verbal capabilities. As we identified more subcomponents, we worked our way inductively to the identification of the developmental components within the verbal capabilities suggested by Skinner. The quest led serendipitously to increased attention on the listener and speaker-as-own-listener repertoires, a focus that began to be evident in the work of others also (Catania, Mathews, & Shimoff, 1990; Hayes et al., 2000; Horne & Lowe, 1996). Table 2 lists the verbal capabilities and the components and prerequisites that we are beginning to identify as well as some of the related research.
It was evident that without the expertise to move children with language delays through a sequence of ever more sophisticated verbal capabilities or cusps, we could make only minimal progress. As we began to identify ways to provide missing capabilities, the children began to make substantial gains. As the magnitude of the differences became apparent in what the children were capable of learning following the attainment of missing repertoires, we came to consider the possibility that these verbal repertoires represented developmental verbal capabilities or verbal behavior cusps.
We have shown that certain environmental experiences evoked the capabilities for our children. However, we are mindful that providing particular prerequisite repertoires that are effective in evoking more sophisticated verbal capabilities in children with language disabilities or language delays does not necessarily demonstrate that the prerequisites are component stages in all children's verbal or cognitive development. While Gilic (2005) demonstrated that typically developing 2-year-old children develop naming through the same experiences that produced changes in our children with verbal delays, others can argue effectively that typically developing children do not require specially arranged environmental events to evoke new verbal capabilities. A definitive rejoinder to this criticism awaits further research, as does the theory that incidental experiences are not required. See Pinker (1999) for the argument that such experiences are not necessary.
Milestones of the Development of Verbal Function: Fundamental Speaker and Listener Repertoires
Our rudimentary classifications of children's verbal development adhered to Skinner's (1957) focus on the verbal function of language as distinguished from a structural or linguistic focus. Skinner focused on antecedent and consequent effects of language for an individual as a means of identifying function, as distinguished from structure (Catania, 1998). Eventually, his theory led to a research program devoted to the experimental analyses of verbal behavior with humans. In a recent paper (Greer & Ross, 2004) and a book in progress (Greer & Ross, in press), we have suggested that this research effort might be best described as verbal behavior analysis, often without distinction between its basic or applied focus. We have incorporated the listener role in our work, in addition to the speaker functions. While Skinner's self-avowed focus was the speaker, a careful reading of Verbal Behavior (Skinner, 1957/92, 1989) suggests much of his work necessarily incorporated the function of listening (e.g., the source of reinforcement for the listener, the speaker as listener). Our research on the role of the listener was necessitated by the problems encountered in teaching children and adolescents with language delays, of both native and environmental origin, to achieve increasingly complex cognitive repertoires of behavior. Without a listener repertoire many of our children could not truly enter the verbal community. We needed to provide the listener roles that were missing, but that were necessary if the repertoires of the speaker were to advance. Skinner made the point that a complete understanding of verbal behavior required the inclusion of the role of the listener (see the appendix to the reprint edition of Verbal Behavior, published by the B. F. Skinner Foundation, 1992, pp. 461-470). Moreover, new research and theories based on Skinner's work have led to a more complete theory of verbal behavior that incorporates the role of the listener repertoire. These efforts include, but are not limited to:
* Research done by relational frame theorists (Barnes-Holmes, Barnes-Holmes, & Cullinan, 1999; Hayes, Barnes-Holmes, & Roche, 2000),
* Naming research by Horne and Lowe and their colleagues (Horne & Lowe, 1996; Lowe, Horne, Harris, & Randle 2002),
* Research on auditory matching and echoics (Chavez-Brown & Greer, 2004),
* Research on the development of naming (Greer et al., 2005b),
* Research on conversational units and speaker-as-own-listener (Donley & Greer, 1993; Lodhi & Greer, 1989), and
* Research on learn units (Greer & McDonough, 1999).
Our levels of verbal capability incorporate the listener as part of our verbal behavior scheme (Skinner, 1989). The broad categories that we have identified to date are: (a) the pre-listener stage (the child is dependent on visual cues, or, indeed, may not even be under the control of visual stimuli), (b) the listener stage (the child is verbally governed as in doing as others say), (c) the speaker stage (the child emits mands, tacts, autoclitics, and intraverbal operants), (d1) the stage of rotating speaker-listener verbal episodes with others (the child emits conversational units and related components of learn units in interlocking operants between individuals), (d2) the speaker-as-own-listener stage (the child engages in self-talk, naming, the speaker-as-own-listener editing function, and say-do correspondence), (e) the reader stage (the child emits textual responding and textual responding as a listener with emergent joint stimulus control, and the child is verbally governed by text), (f) the writer stage (the child verbally governs the behavior of a reader for aesthetic and technical effects), (g) the writer-as-own-reader stage (the child reads and revises writing based on a target audience), and (h) the stage of verbal mediation for problem solving (the child solves problems by performing operations from text or speech). Each of these has critical subcomponents, and the subcomponents of the categories that we have identified to date are shown in Table 2.
The Listener Repertoire
In the verbal community a pre-listener is totally dependent on others for her care, nourishment, and very survival. Pre-listeners often learn to respond to a visual and tactile environment; but if they do not come under the control of the auditory properties of speech they remain pre-listeners. For example, in some situations they learn to sit when certain visual cues are present. It is often not the spoken stimuli such as "sit still," "look at me," or "do this" to which they respond, but rather certain instructional sequences or unintentional visual cues given by teachers and caretakers. They do not respond to, or differentiate among, the auditory properties of speech as stimuli that evoke specific responses. When the basic listener repertoire is missing, children cannot progress beyond visual or other non-auditory stimulus control. However, substantial gains accrue when children achieve the listener capability, as we shall describe.
Auditory Matching. It is increasingly apparent that children need to match word sounds with word sounds as a basic step in learning to discriminate between words, and even to distinguish words from non-word sounds. While most infants acquire auditory matching with apparent ease, some children do not acquire this repertoire incidentally. Adults experience similar difficulties in echoing a new language.
Chavez-Brown and Greer (2003) and Chavez-Brown (2005) taught children who could not emit vocal verbal behavior, or whose vocal speech was flawed, to match pictures using BigMack[R] buttons as a pre-training procedure to teach them to use the apparatus. The teacher touched a single button set before her that had a picture on it and then touched each of the two buttons the students had in front of them (one with the target picture and one with a foil picture). The students then responded by depressing whichever of their buttons matched the picture on the button the teacher had touched. Once the children mastered the visual matching task, used as a means to introduce them to the apparatus, we removed the pictures. In the next phase the children were taught to match the sound generated by the teacher's button (the buttons produced individual pre-recorded words or sounds). At this second stage, the depression of one of the students' buttons produced a sound and the depression of the other produced no sound. Once they mastered matching sounds contrasted with no-sound buttons, they learned to match words with non-word sounds as foils. Next, they learned to match particular words contrasted with different words. Finally, they learned generalized matching for words produced by pushing the buttons (i.e., they learned to match novel word sets with no errors). Our findings showed that children who had never vocalized before began to approximate or emit echoic responses under mand and tact-establishing operations when they mastered generalized word matching. Moreover, a second set of children, who had only approximations (i.e., faulty articulations), learned full echoics that graduated to independent mands and tacts. This matching repertoire may be an early and necessary step in the acquisition of speaking and may also be key to more advanced listening. See also the correlations between auditory matching and the emission of verbal operants identified by Marion et al. (2003), which suggested the auditory matching research we described above.
The Emersion of Basic Listener Literacy. When children have "auditory word matching" they can be taught the discriminative function needed to become verbally governed. Over the past few years, we found that children without listener repertoires reached a learning plateau and were no longer making progress in instruction beyond extensions of visual matching. We believe that children around the world who have these deficits are not making progress in early and intensive behavioral interventions. These children require inordinate numbers of instructional presentations, or learn units, and still do not make progress in acquiring repertoires that require verbal functions that are the very basic building blocks of learning. In an attempt to help these children become listeners, we developed an intervention that we call listener emersion (Greer et al., 2005a). During listener emersion, we suspend all of the children's instructional programs and provide intensive instruction in responding to the discriminative acoustical properties of speech. This instruction continues until children's listener responses are fluent. (2)
In the listener emersion procedure, children learned to respond to words (i.e., vowel-consonant relations) spoken in person by a variety of individual voices as well as to voices recorded on tapes and other sources. By "fluent," we mean that the children learned to respond to four or more sets composed of five instructions such as "point to --," "match --," "do this," "stand up," and "turn around." The children also learned not to respond to nonsensical, impossible, or non-word vowel-consonant combinations that were inserted into the program as part of each set (e.g., "jump out the window," "blahblahblah"). These sets were presented in a counterbalanced format with criterion set at 100% accuracy. Next, the children learned to complete the tasks at specified rates of accurate responding ranging from 12 to 30 per minute. Finally, they learned to respond to audiotaped, mobile phone, or computer-generated instructions across a variety of adult voices. Once the children's basic listener literacy emerged (i.e., the children met the listener emersion criteria), we compared the numbers of learn units required by each student to meet major instructional goals before and after listener emersion. The achievement of the objectives for the listener emersion procedure constitutes our empirical definition of basic listener literacy. This step ensures that the student is controlled by the vowel-consonant speech patterns of speakers. After the children acquired basic listener literacy, the numbers of instructional trials or learn units they required to achieve instructional objectives, across the range of their instructional objectives, decreased by a factor of four to ten relative to what had been required prior to their obtaining basic listener literacy.
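As a purely illustrative formalization, and not part of the published listener emersion protocol, the following sketch shows one way the fluency criteria described above (100% accuracy at a specified rate of correct responding, with targets in the 12-30 per minute range) could be checked for a single probe session. The function name, parameter names, and the example values are assumptions made for illustration only.

```python
# Illustrative sketch only: checking an assumed listener emersion fluency criterion
# (100% accuracy plus a specified rate of correct responses per minute).
# Names and thresholds are hypothetical, not the published procedure.

def meets_fluency_criterion(correct, incorrect, minutes, target_rate_per_min):
    """Return True if a probe session meets the assumed accuracy and rate criteria."""
    total_responses = correct + incorrect
    if total_responses == 0 or minutes <= 0:
        return False
    accuracy = correct / total_responses   # criterion: 100% accuracy on the set
    rate = correct / minutes               # criterion: correct responses per minute
    return accuracy == 1.0 and rate >= target_rate_per_min

# Example: 25 correct, 0 incorrect, in 1 minute, against an assumed target of 20 per minute.
print(meets_fluency_criterion(correct=25, incorrect=0, minutes=1.0, target_rate_per_min=20))
```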
The Speaker Stage
Acquisition of Rudimentary Speaker Operants. In the late eighties, we identified procedures for inducing first instances of vocal speech that proved more effective than the operant shaping of spoken words as linguistic requests (Williams & Greer, 1989). That is, rather than teaching parts of words as vowel-consonant blends, as had been the existing behavioral procedure (Lovaas, 1977), we arranged the basic establishing operations and obtained true mands and tacts using echoic-to-mand and echoic-to-tact procedures (Williams & Greer, 1989). Once true verbal operants were taught, the children used "spontaneous speech." The children came under the relevant establishing operations and antecedent stimuli (Michael, 1982, 1984, 1993) associated with mand and tact operants and related autoclitics, rather than verbal antecedents such as, "What do you want?" They did not require intraverbal prompts as a means of teaching pure tacts. In another procedure, Sundberg, Loeb, Hale, and Eighenheer (2000/2001) evoked the emission of impure tacts and impure mands; these are necessary repertoires as well. In still other work, Pistoljevic and Greer (2006) and Schauffler and Greer (2006) demonstrated that intensive tact instruction led to the emission of novel tacts and appropriate audience control.
Children who do not speak can be taught verbal behavior through the use of signs, pictures, or electronic speaking devices. Even so, we submit that speech is simply more useful; speech works in the community at large. When we are unable to teach speech, we, too, use these substitutes, although as our research has progressed there have been fewer children whom we cannot teach to speak. Our second choice of topography is electronic speaking devices, as such devices supply the possibilities for speaker as own listener. The importance of speech becomes apparent when we reach the critical verbal repertoires of speaker-as-own-listener and reader.
Although the use of the above procedures significantly increased the numbers of children we could teach vocal verbal operants, there were some children we still could not teach to speak. While we could teach these children to use substitute topographies for speech, the development of speech is critical for subsequent verbal capabilities. For those children who did not learn to speak using our basic echoic-to-mand and echoic-to-tact procedures (Williams & Greer, 1989), we and others designed and tested several tactics to induce first instances of speech. We taught children who had acquired fluent generalized imitation, but who could not speak, to perform chains of generalized imitation of large and small movement responses at a rate of approximately 30 correct per minute at 100% accuracy. These children were then deprived of preferred items for varying periods of time and were only able to obtain the items contingent on speech under conditions in which they first performed a rapid chain of generalized imitation (moving from large motor movements to fine motor movements related to touching their lips and tongue). As soon as the last motor movement step in the teaching chain was completed we offered the item under deprivation as we spoke its name. After several presentations as described, the children spoke their first echoic mands. Some of these children were as old as nine years of age and their first words were not separate phonemes but were mands like "baseball card," "Coke," or "popcorn." Once the echoic-to-mand was induced for a single word or words, other echoic responses were made possible and their independent mand repertoire was expanded--they acquired function. Follow-ups done years after these children spoke their first words showed that they maintained and expanded their mand and eventually their tact repertoires extensively (Ross & Greer, 2003). We currently think that the procedure acted to induce joint stimulus control across the two independent behaviors of imitating and echoing (see Skinner, 1957, for the important distinction between imitation and echoic responding). See-do joined a higher order class and a new behavioral cusp was acquired.
In a replication and an extension of this work, Tsiouri and Greer (2003) found that the same procedure could be used to develop tact repertoires when the establishing operation was deprivation of generalized reinforcers. See Skinner (1957, p. 229) for a source for the establishing operations for the tact. Moreover, tacts and mands could be evoked in tandem fashion when emission of the tact operants resulted in an opportunity to mand as a result of using the tandem procedures developed by Williams and Greer (1989) (Tsiouri & Greer, 2003).
The establishing operation is key to the development of these rudimentary operants (Michael, 1982, 1984, 1993). There appear to be three tested establishing operation tactics: (a) the interrupted chain (Sundberg et al., 2001/2002), (b) the incidental teaching procedure in which incidental establishing operation opportunities are captured (Hart & Risley, 1975), and (c) the momentary deprivation procedure (Williams & Greer, 1989). Schwartz (1994) compared the three procedures. She found them equally effective, although the momentary deprivation procedure resulted in slightly greater maintenance and required significantly less time. It is suggested that more powerful results may accrue if each of these establishing operations is taught in multiple exemplar fashion, providing the child with a range of establishing operations for controlling the emission of rudimentary operants. Still other establishing operation tactics are needed, such as the identification of establishing operations for tacts described in Tsiouri and Greer (2003). Indeed, what are characterized in the literature as "naturalistic language" interventions, derived from Hart and Risley's incidental teaching procedure, are essentially suggestions for capturing establishing operations as they occur in situ (McDuff, Krantz, McDuff, & McClannahan, 1988). The difficulty with relying solely on the capture of incidental establishing operations is that there are simply not enough opportunities to respond. There is now an abundance of tested tactics for evoking establishing operations in instructional sessions that can be used without waiting for an incidental occasion, although it is critical to capture incidental opportunities as well.
From Parroting to Verbal Operants. The stimulus-stimulus pairing procedure of Sundberg et al. (1996) evoked first instances of parroting of words as a source of automatic reinforcement. These investigators paired preferred events, such as tickling the children while the experimenters said words; the children began to parrot the words or sounds. Moreover, the children emitted the words in free play, suggesting that the saying of the words had acquired automatic reinforcement status. Yoon (1998) replicated the Sundberg et al. procedure and, after the parroting was present for her students, used the echoic-to-mand tactic described above (Williams & Greer, 1989) to evoke true echoics that, in turn, became independent mands. Until the parroted words were under the echoic-to-mand contingencies, the children were simply parroting as defined by Skinner (1957); however, obtaining the parroting as an automatic reinforcer made the development of true echoics possible. The emission of a parroting response may be a crucial first step in developing echoic responses and may be an early higher order verbal operant (3). The children in these studies moved from the listener to the speaker stage as a result of the implementation of extraordinary instructional procedures (see Sundberg & Partington, 1998, for an assessment and curriculum). Once a child has acquired a speaker repertoire the speaker-listener repertoire becomes possible. Speaker capabilities opened up extraordinary new possibilities for these children, as they did for our ancestors in the combined evolution of phylogenic capabilities in the context of capabilities evoked by cultural selection.
Transformation of Establishing Operations across Mand and Tact Functions. Initially, learning one form (e.g., a word or words) in a mand or tact function does not result in usage of the form in the untaught function without direct instruction (Lamarre & Holland, 1985; Twyman, 1996). For example, a child may emit a word as a mand (e.g., "milk") under conditions of deprivation, such that the emission of "milk" results in the delivery of milk. But the child cannot use the same form ("milk") under tact conditions (i.e., the emission of the word in the presence of the milk when the reinforcement is social or another form of generalized reinforcement). The independence of these two functions has been reliably replicated in young typically and non-typically developing children; however, at some point most children can use forms acquired initially as mands in a tact function, or vice versa. Some see this as evidence of something like a neurologically based universal grammar that makes such language phenomena possible (Pinker, 1999). Clearly, neural capacities must be present just as the acoustic nerve must be intact to hear. But the existence of a universal grammar does not necessarily follow; the source is at least as likely to lie in the contingencies of reinforcement and punishment and the capacity to be affected by these contingencies in the formation of relational frames/higher order operants. One example of the acquisition of this verbal cusp or higher order operant is the acquisition of joint establishing operation control of a form in either mand or tact functions after learning only one function. When this verbal cusp is achieved, a child can use a form in an untaught function without direct instruction.
Nuzzolo-Gomez and Greer (2004) found that children who could not use a form learned in a mand function as a tact, or vice versa, without direct instruction in the alternate function (Lamarre & Holland, 1985; Twyman, 1996a, 1996b) could be taught to do so when they were provided with relevant multiple exemplar experiences across establishing operations for a subset of forms. Greer et al. (2003b) replicated these findings and we have used the procedure effectively with numerous children in CABAS schools. The new verbal capability doubled both incidental and direct instructional outcomes.
Speaker Immersion. Even after the children we taught had acquired a number of rudimentary speaker operants, some did not use them as frequently as we would have liked. Speaking had emerged, but it was not being used frequently, perhaps because the children had not received an adequate number of incidental establishing operation opportunities. We designed a procedure for evoking increases in speaker behavior that we called speaker immersion (Ross, Nuzzolo, Stolfi, & Natarelli, 2006). In this procedure we immersed the children for whom the operants had already emerged in instruction devoted to the continuous use of establishing operations requiring speaking responses. All reinforcement was related to speaking and opportunities were provided throughout the day. As a result, the children's use of verbal operants dramatically increased as they learned to maximize gain with minimal effort. The children learned that it was easier and more efficient to get things done by speaking pure tacts and mands than by emitting responses that required the expenditure of more effort, thereby extending Carr and Durand's (1985) findings.
Milestones of Speaker and Listener Episodes: Interlocking Verbal Operants between Individuals
Verbal Episodes between Individuals
Verbal behavior is social as Skinner proclaimed, and perhaps one cannot be truly social without verbal behavior. A major developmental stage for children is the acquisition of the repertoire of exchanging speaker and listener roles with others--what Skinner (1957) called verbal episodes. A marker and a measure of one type of verbal episode is the conversational unit, while another type of verbal episode is a learn unit. We developed these measures as indices of interlocking verbal operants. No account of verbal behavior can be complete without the incorporation of interlocking verbal operants.
Epstein et al. (1980) demonstrated verbal episodes between two pigeons. We argue that they demonstrated a particular kind of interlocking verbal operant that we identify as a learn unit. In that study, after extensive training, the researchers had two pigeons, Jack and Jill, respond as both speaker and listener in exchanges that simulated verbal episodes between individuals. Each pigeon responded as both speaker and listener and they exchanged roles under the relevant discriminative stimuli as well as under the conditions of reinforcement provided by each other's speaker and listener responses (a procedure also used in part by Savage-Rumbaugh, Rumbaugh, & Boysen, 1978). The pigeon that began the episode, the teacher pigeon, controlled the reinforcement in the same way that teachers deliver effective instruction (Greer & McDonough, 1999). That is, the teacher pigeon had to observe the responses of the student pigeon, judge their accuracy, and consequate the student pigeon's response. Premack (2004) argued that the lack of this kind of teaching observation in primates is evidence that this is one of the repertoires unique to humans. In the Epstein et al. study, special contingencies were arranged in adjacent operant chambers to evoke or simulate the teaching repertoire. Note that the pigeon that acted as a student did not emit the reciprocal observation that we argue needs to be present in the verbal episode we characterize as a conversational unit. In a conversational unit both parties must observe, judge, and consequate each other's verbal behavior.
We used the determination of verbal episodes as measures in studies by Becker (1989), Donley & Greer (1993), and Chu (1998) as well as related research by Lodhi and Greer (1989) and Schauffler and Greer (2006). The verbal episodes in these studies were measured in units and included a rotation of initiating episodes between individuals as well as a reciprocal observation accruing from reinforcement received as both a speaker and a listener. We called these episodes conversational units. A conversational unit begins when a speaker responds to the presence of a listener with a speaker operant that is then reinforced by the listener. This first piece of the verbal interaction is what Vargas (1982) identified as a sequelic. Next, the listener assumes a speaker role, under the control of the initial speaker who is now a listener. That is, the listener function results in the extension of sensory experiences from the speaker to the listener as evidenced by the speaker response from the individual who was the initial listener. The initial speaker then functions as a listener who must be reinforced in a listener function (i.e., the initial listener as speaker extends the sensory capacities of the initial speaker as a listener). A new unit begins when either party emits another speaker operant. Interestingly, in the cases of children with diagnoses like autism, we can now teach them a sequelic speaker function in fairly straightforward fashion using procedures described above. However, these children often have little interest in what the speaker has to say. The reinforcement function for listening is absent. We are currently working on procedures to address this problem.
Conversational units are essential markers and measures of social behavior and, we argue, their presence is a critical developmental milestone in the evolution of verbal behavior. By arranging natural establishing operations, Donley and Greer (1993) induced first instances of conversation between several severely delayed adolescents who had never before been known to emit conversation with their peers. Coming under the contingencies of reinforcement related to the exchange of roles of listener and speaker is the basic component of being social. Chu (1998) found that embedding mand operant training within a social skills package led to first instances of, and prolonged use of, conversational units between children with autism and their typically developing peers. Moreover, the use of conversational units resulted in the extinction of assaultive behavior between the siblings, thereby extending Carr and Durand's (1985) finding.
Learn units are verbal episodes in which the teacher, or a preprogrammed teaching device (Emurian, Hu, Wang, & Durham, 2000), controls the onset of the interactions, the nature of the interactions, and most of the sources of reinforcement for the student. The teacher bases her responses on the behavior of the student by reinforcing correct responses or correcting incorrect responses. The interactions provided in the Epstein et al. (1980) and the Savage-Rumbaugh et al. (1978) studies are learn units rather than conversational units as we described above. (See Greer, 2002, Chapter 2, for a thorough discussion of the learn unit, and Greer & McDonough, 1999, for a review of the research.)
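As a purely illustrative formalization, and not a published coding system, the following sketch distinguishes the two kinds of verbal episodes discussed above by the features the text emphasizes: who initiates the episode, whether the listener consequates the speaker's response, whether the parties rotate roles, and whether observation is reciprocal. The class, field names, and classification rule are assumptions for illustration only.

```python
# Illustrative sketch only: a hypothetical way to tally the two kinds of verbal
# episodes described above. Field names and the rule are assumptions, not a manual.
from dataclasses import dataclass

@dataclass
class Exchange:
    """One observed speaker-listener exchange between two parties."""
    initiator: str                 # who emitted the initiating speaker operant
    listener_reinforced: bool      # the listener consequated the speaker's response
    roles_rotated: bool            # the listener then responded as a speaker
    reciprocal_observation: bool   # both parties observed and consequated each other

def classify(exchange: Exchange) -> str:
    """Classify an exchange as a conversational unit, a learn unit, or neither."""
    if exchange.listener_reinforced and exchange.roles_rotated and exchange.reciprocal_observation:
        return "conversational unit"   # both parties reinforced as speaker and listener
    if exchange.initiator == "teacher" and exchange.listener_reinforced:
        return "learn unit"            # teacher controls onset and most reinforcement
    return "neither"

episodes = [
    Exchange("child", True, True, True),      # peer conversation
    Exchange("teacher", True, False, False),  # instructional presentation
]
print([classify(e) for e in episodes])
```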
Milestone of Speaker as Own Listener: Verbal Episodes "Within the Skin"
As Skinner pointed out, the speaker may function as her own listener, as in the case of "self-talk." Lodhi and Greer (1989) empirically identified speaker as own listener in young typically developing children who engaged in self-talk while playing alone. This appears to be an early, if not the first, identification of conversational units in self-talk emitted by individuals under controlled experimental conditions. The developmental literature is replete with research on self-talk and its importance, but until the functional components defining self-talk were identified, self-talk remained essentially a topographical measure because the speaker and listener functions were not specified. It is very likely that speaker-as-own-listener types of learn units are detectable also, although we have not formally tested for them except in our studies on print control that resulted in students acquiring self-administration of learn units (Marsico, 1998).
We agree with Horne and Lowe (1996) that a speaker as own listener interchange occurs in the phenomenon that they identified as naming. Naming occurs when an individual hears a speaker emit a tact, and that listener experience allows the individual to emit the tact in a speaker function without direct instruction and, further, to respond as a listener without direct instruction. Horne and Lowe (1996) identified the phenomenon with typically developing children. Naming is a basic capability that allows children to acquire verbal functions by observation. It is a bi-directional speaker-listener episode.
But what if the child does not have the repertoire? For example, matching, pointing to (both listener responses, although the point-to is a pure listener response), tacting, and responding intraverbally to multiple controls for the same stimulus (the speaker response as an impure tact) are commonly independent at early instructional stages. This is the case because, although the stimulus is the same, the behaviors are very different. The child learns to point to red but does not tact (i.e., does not say "red" in the presence of red objects), or tacts but does not respond intraverbally to "What color?" This, of course, is a phenomenon not understood well by linguists because they operate on the assumption that understanding is an automatic given--a human example of generative verbal behavior, if you will. It is a source of many problems in learning for typically developing and non-typically developing children, as well as for college students who demonstrate differences in their responses to multiple-choice questions (selection responding) versus their responses to short answer or essay questions (production responding). At some point children can learn a match or point-to response and can emit a tact or intraverbal response without direct training. This is not, however, automatic for some children. Thus, we asked ourselves this question: If naming were not in a child's repertoire, could it be taught?
Induction of One Component of Naming. Greer et al. (2005b) found that one could isolate experimentally a particular instructional history that led to naming for 2-dimensional stimuli (pictures) in children who did not initially have the repertoire. After demonstrating that the children did not have the repertoire for tacts, we provided a multiple exemplar instructional intervention with a subset of stimuli involving rotating match, point-to, tact, and intraverbal responding to stimuli until the children could accurately do all of the responses related to the subset. We then returned to the initial set and a novel set as well and showed that the untaught speaker and listener repertoires had emerged.
These data suggested that the acquisition of naming, or one component of naming (i.e., going from listener to speaker), could be induced with multiple exemplar experiences. Naming is a generative verbal repertoire that Catania (1998) has called a "higher order class." The Relational Frame Theorists described this particular higher order operant as an instance of transformation of stimulus function (Hayes et al., 2000). Skinner referred to the phenomenon as responding in different media to the same stimulus (i.e., thematic grouping), and Relational Frame Theorists provided feasible environmental sources for this and related phenomena (i.e., multiple exemplar experiences). That is, a particular response to a single stimulus or category of stimuli, when learned either as a listener repertoire or as a speaker repertoire, is immediately available to the individual as a response without direct instruction once the individual has stimulus transformation across speaker and listener functions. We found that the naming repertoire emerged as a function of specific instructional experiences. This represents another case of the emergence of generative verbal behavior that is traceable to environmental circumstances. Fiorile and Greer (2006) replicated this finding. Naming also represents the acquisition of one of the speaker as own listener stages. When children have acquired it they have new verbal capabilities. Other types of generative behavior are traceable to multiple exemplar experiences, as we will discuss later.
Induction of Untaught Irregular and Regular Past Tense Responding. Still another case of speaker as own listener repertoires probably occurs in the emission of verb endings colloquially associated with the cliche "kids say the darnedest things" (Pinker, 1992). We recently found that we could evoke the untaught correct usage of regular past tense forms and the incorrect but "spontaneous" emission of irregular forms (e.g., "he singed last night") as a result of multiple exemplar instruction with young children with developmental disabilities who could not emit either regular or irregular novel past tense forms without direct instruction (Greer & Yuan, 2004). The children learned to emit novel regular past tense forms without direct instruction and this abstraction was extended to irregular verbs. That is, they emitted incorrect irregular forms such as "he singed," as do young typically developing children. In a related study, Speckman (2005) found that multiple exemplar experiences also resulted in the emission of untaught suffixes as autoclitic frames for tacts. However, it is important to recognize that Pinker (1999) regards the fact that children at some point begin to use the correct irregular forms and stop using the incorrect forms as evidence of a more important capability. He argues that there is no direct instruction leading to this revision in verb usage by typically developing children. But just as the initial incorrect usage has been traced to a sufficient set of experiences, it is possible that there are incidental sources of experience that make this change possible. We suspect that multiple exemplar experiences could induce this capability too, although further research remains to be done.
Milestones of Reading, Writing, Self-Editing: Extensions of the Speaker and Listener Repertoires
Reading involves textually responding (seeing a printed word and saying the word) and matching various responses to the text as comprehension (printed stimulus to picture or action, to the spoken sound, and all of the permutations and combinations of these relations) (Sidman, 1994). At first glance, the reader stage appears to be simply an extension of the listener repertoire; however, on closer scrutiny, reading is necessarily an advanced speaker-as-own-listener repertoire because the reader must listen to what is read. Reading consists of speaker-listener relationships under the control of print stimuli, actions, or pictures. Textually responding requires effortless rates of responding to print stimuli in order to "hear" the spoken word. After all, it was only after the Middle Ages that we began to read silently, and many religious and other ancient cultural practices still adhere to ceremonies in which one person reads aloud to an audience while the audience views the text.
The capacity to hear what one reads is important because the acoustical physical properties of sound allow more "bits" to be transmitted by sound than is possible with signs. For example, children who are deaf from birth have extreme difficulty developing reading comprehension beyond Grade 6 (Karchmer & Mitchell, 2003). There are special auditory properties of speech that allow a great deal of information, or bits, to be used for the benefit of the reader (aesthetic or functional), or at least this was the case before computers. Good phonetic instruction results in children textually emitting untaught combinations of morphemes, and if those words are in their listener repertoire they can comprehend them (see Becker, 1992, for the relevant research on multiple exemplar instruction and the emission of abstracted textual responses to untaught morphemes). However, even if a child can respond textually and thereby emit an accurate response to printed stimuli, but does not have listener comprehension, the child "will not understand" what she has read (i.e., the child will be unable to match the sounds to a picture or action). We can textually respond to foreign language print aloud and have no idea what we are saying. Thus, the listener component is key. For example, adolescents with multiple-year delays in their reading achievement may not comprehend because they cannot emit a textual response to a particular word or group of words, but once they hear a spoken version they immediately comprehend, because their listener vocabulary exceeds their textual repertoire. The listener component of reading is as important as the textual speaking component. Thus, a reader must be a reader-as-own-listener, so to speak.
There is still a more basic component of reading that we identify as conditioned reinforcement for observing print and pictures in books. Tsai and Greer (2006) found that when they conditioned books such that 2- and 3-year-old children chose to look at books in free time, with toys as alternate choices, the children required significantly fewer learn units to acquire textual responses. The book stimuli selected out the children's observing responses, and once the children were observing they were already closer to acquiring print stimuli as discriminative stimuli for textual responses. Thus, an early predictor of children's success in textually responding appears to be conditioned reinforcement for observing book stimuli. Conditioned reinforcement for books may constitute a new capability. We currently also believe that pre-listener children who do not orient toward speakers and who are having listening and speaking difficulties may need to have unfamiliar and familiar adult voices acquire conditioned stimulus control for observing (Decasper & Spence, 1987). This too may be a crucial stage in the acquisition of listener repertoires.
Writing is a separate behavior from reading and, like the repertoire of speaking, represents a movement up the verbal scale. But writing from a functional verbal perspective requires that the writer affect the behavior of the reader; that is, writers must observe the effects of their writing and, in turn, modify their writing until it affects the behavior of the reader. In the case of technical writing the writer must provide technical information that affects the reader's behavior, ranging from influencing a shopper through the provision of a shopping list, to the provision of an algorithm that affects complex scientific decisions. Writing, as in the case of speaking, needs to be under the control of the relevant establishing operations if the writing is to be truly verbal. In several experiments we provided establishing operations for writing for students whose writing did not affect the behavior of the reader, using a tactic we call writer immersion. In the writer immersion procedure, all communication is done in written form for extended periods throughout the day. Written responses are revised until the reader responds as the writer requires. This procedure resulted in functionally effective writing, measured in effects on the behavior of readers, and improvements in the structural components of writing for the writer (grammar, syntax, vocabulary, punctuation, spelling) (Greer, Gifaldi, & Pereira, 2003a; Jadlowski, 2000; Keohane, Greer, & Mariano-Lapidus, 2004; Madho, 1997; Reilly-Lawson & Greer, 2006). The experience taught the students to write such that they read as the target readers would read. The editing experience appears to evoke writer-as-own-reader outcomes of self-editing, not unlike speaker as own listener (Jadlowski, 2000). This repertoire then appears to be an advanced speaker as own listener stage--one that requires one to read what one writes from the perspective of the target audience whose behavior the writer seeks to influence. Thus, like the reader function, the writer function builds on the speaker as own listener. Some individuals have difficulties in writing and reading that are probably traceable to missing components of the speaker, listener, or speaker as own listener repertoires.
Complex Verbally Governed and Verbally Governing Behavior
Technical Writing. Another key component of the complex cognitive repertoires of individuals involves reading, or being verbally governed by print, for technical outcomes. Marsico (1998) found that teaching students to follow scripts, under conditions that allowed the investigators to observe the control of the print over the students' responses, resulted in students "learning to learn" new concepts in math and more complex reading repertoires by acquiring verbally governed responding from print sources. This repertoire allowed the students to be verbally governed by print. As this repertoire becomes more sophisticated, it leads to the more complex repertoire of solving complex problems from algorithms, as in the case of following decision protocols. Keohane and Greer (2005) showed that teacher scientists could perform complex data decision steps using algorithms based on the verbal behavior of the science, and this new repertoire resulted in significant improvements in the outcomes of the teachers' students. Verbal rules guided measurable responses involving data analysis, complex strategic analyses, and tactical decisions that were implemented with the teachers' students.
Nuzzolo-Gomez (2002) found that teachers who received direct learn units on describing tactics, or observed other teachers receive learn units on accurately describing tactics, required significantly fewer learn units to teach their children to achieve instructional objectives. Observations showed that the teachers' instruction was reliably driven by the verbal descriptions of the tactics they learned by direct or indirect instruction. These studies are analyses of the verbal behavior of scientists and the verbal stimulus control involved in either scientific complex problem solving repertoires suggested by Skinner (1957) and demonstrated in Keohane & Greer (2005), or the control of verbal behavior about the science over teacher performance as identified in Nuzzolo-Gomez (2002). We argue that these studies investigated observable responses that are both verbal and nonverbal and that such responses are directly observed instances of thinking.
While neuroscientists could probably locate electrical activity in the brain associated with our putative thinking responses, it is only the behavior outside the skin that distinguishes the electrical activity as thinking as opposed to some other event that might be correlated with the activity. Verbal stimuli control the complex problem solving, not the electrical activity. The electrical activity, although interesting, may be necessary and important, but it is not thinking per se. One might argue that the electrical activity is like light in a black box: we can see within "the black box," but we do not see outside of it. This is an interesting reversal of the black box puzzle. If the electrical activity were to begin before the relevant contingencies in the environment were in place, the problem in the environment would not be solved.
One of the key components in writing is the process of spelling. Spelling involves two different and initially independent responses: (1) saying the letters for a dictated word and (2) writing the letters. At some point we do emit an untaught response after learning a single one of these behaviors (see Skinner, 1957/1992, p. 99). How does a single stimulus (i.e., hearing the word) come to control these two very different behavioral topographies of writing and orally saying the letters? Recently we found that, for children who initially could not perform the untaught function, providing multiple exemplar instruction for a subset of words across the two responses under a single audited vocal stimulus resulted in these students acquiring the repertoire with novel stimuli (Greer et al., 2004c). Like the transformation of establishing operations for mands and tacts, and the transformation of stimulus functions across speaker and listener in naming, the transformation of writing and saying in the spelling repertoire is still another environmental source for generative verbal behavior as an overarching operant or a higher order operant (Catania, 1998; Hayes et al., 2000). These repertoires consist of learned arbitrary relations between listening, speaking, and writing. It is not too far-fetched to infer that typically developing children acquire this joint stimulus control across independent responses as higher order operants or relational frames through multiple exemplar experiences. Such multiple exemplar experiences, involving the rotation of writing and saying opportunities, may occur incidentally rather than as a result of the programmed experiences we provided our children. Once the child has transformation of stimulus control over written and spoken spelling, only a single response need be taught.
In related research, Gautreaux, Keohane, and Greer (2003) found that multiple exemplar instruction also resulted in transformation of selection and production topographies in geometry. That is, middle school children who could not go from multiple-choice responding to production responding prior to multiple exemplar instruction did so after an instructional history was created by multiple exemplar instruction across a subset of selection and production experiences. This study highlighted the difficulties experienced by some older children that may be due to a lack of prior verbal instructional histories. The replacement of missing verbal capabilities may be the key to solving instructional difficulties experienced later in life by individuals as they encounter more complex subjects. When an individual has difficulty with aspects of reading and writing, it is possible that the remediation of the difficulty only truly occurs when the missing capability is put in place. In effect, such individuals have a missing or inadequate verbal developmental cusp. Inducing that cusp may solve the learning problems.
Aesthetic Writing. In an earlier section we described writing repertoires that were of a technical nature. Aesthetic writing has a different function than technical writing (Skinner, 1957). Aesthetic writing seeks to affect the emotions of the reader. To date, little empirical work has been accomplished with the aesthetic writing repertoire. A critical, if not the most basic, component of aesthetic writing is the writer's use of metaphors as extended tacts. Meincke (2005) and Meincke, Keohane, Gifaldi, and Greer (2003) identified the emergence of novel metaphorical extensions resulting from multiple exemplar instruction. This effort points to the importance of isolating and experimentally analyzing experiential components of aesthetic writing and suggests the role of metaphorical comprehension in reading for aesthetic effects. It also suggests that rather than teaching the aesthetics of reading through literary analysis as an algorithm, a student should have the relevant metaphoric experiences, and perhaps these may be pedagogically simulated. It is likely that these metaphoric experiences provide the basis for the aesthetic effects for the reader. In order for the exchange to occur, the target audience for the writer must have the repertoires necessary to respond to the emotional effects. Of course, the analysis of aesthetic writing functions is probably more complex than the analysis of technical repertoires, but we believe empirical analyses like the one done by Meincke et al. are becoming increasingly feasible. If so, the aesthetic and functional writer and reader repertoires may be revealed as new stages of verbal behavior.
From Experimental Effects to a Theory of Verbal Development
We believe we have identified several verbal repertoires that are key in children's development of successively complex repertoires of verbal behavior. Providing several of these repertoires to children who did not have them allowed these students to advance in their cognitive, social, technical, and aesthetic capabilities. As a result of this work we were increasingly persuaded that these levels of verbal capabilities did, in fact, represent empirically identifiable developmental cusps.
For our children the capabilities that they acquired were not tied to tautological relationships associated with age (Baer, 1970; Bijou & Baer, 1978; Morris, 2002). Age may simply provide a coincidental relation between experiences that bring about verbal capabilities and the probabilities of increased opportunities for those experiences. Hart and Risley (1996) showed that impoverished children who had no native disabilities, but who had significantly fewer language experiences than their better-off peers, demonstrated significant delays by the time they reached kindergarten. When children with these deficits in experience with language continued in schools that did not or could not compensate for their sparse vocabulary, these children were diagnosed as developmentally disabled by grade 4 (Greenwood, Delquadri, & Hall, 1984). It is not too far-fetched to suggest that the absence of the kinds of experiences necessary to evoke the higher order verbal operants or cusps that we have identified may also be part of the reason for these delays. We suggest that the presence of incidental multiple exemplar experiences provides the wherewithal for most typically developing children to seamlessly acquire the verbal milestones we described, probably because they have both the environmental experiences and the neural capabilities (Gilic, 2004). For children without native disabilities who lack multiple exemplar experiences (Hart & Risley, 1996), as well as children with native disabilities who lack the necessary verbal capabilities, intensive multiple exemplar instruction has induced missing repertoires (Nuzzolo-Gomez & Greer, 2004). Such experiences probably result in changes in behavior both within and outside of the skin. Indeed, biological evidence suggests that "DNA is both inherited and environmentally responsive" (Robinson, 2004, p. 397; see also Dugatkin, 1996, for research on the influence of environmental events on genetically programmed behavior). What may be an arbitrary isolation of behavior beneath and outside the skin may dissolve with increased research on the environmental effects on both types of behavior.
Our induction of these repertoires in children who did not have them prior to instruction suggests it is not just age (time) but particular experiences (i.e., environmental contingencies, including contingencies that evoke higher order operants) that make certain types of verbal development possible, at least for the children that we studied. Intensive instruction magnified or exaggerated these experiences and provided our children with the wherewithal (i.e., verbal developmental cusps) to achieve new verbal capabilities. We speculate also that the induction of these verbal capabilities in children who do not have them prior to special experiences creates changes in neural activity. Of course, a test of this is the real challenge facing developmental neuroscience (Pinker, 1999). A joint analysis using the science of verbal behavior combined with instrumentation of the neurosciences might prove very useful in assisting children. Incidentally, such an analysis might also act to enrich academic debate towards more useful outcomes.
Tables 1 and 2 showed the levels of verbal functions for the pre-listener through the early reader stages in summary form. We described the evidence that has proved useful in our efforts to induce and expand progressively sophisticated verbal functions. The capabilities that we addressed were originally identified based on the responses of individual children; specifically, they were based on our empirical tests for the presence or absence of the repertoires for individual children. In our educational work, when a particular repertoire was missing, we applied the existing research-based tactics to provide the child with the repertoire. When we encountered children for whom the existing tactics were not effective, we researched new tactics or investigated potential prerequisite repertoires and related experiences that appeared to be missing for the child. The searches for possible prerequisite repertoires led to the identification of several subcomponents which, when taught, led to the emergence of verbal capabilities that were not present prior to our having provided the prerequisite instructional experience.
Summary of Identified and Induced Verbal Capabilities
We continue to locate other prerequisites and believe that there are many others that remain to be identified. Examples of rudimentary verbal functions that have been identified in the research include: (a) the emergence of better acquisition rates across all instructional areas as a function of teaching basic listening (Greer et al., 2005a), (b) the induction of parroting (Sundberg et al., 1996) and then echoics that led to independent mand and tact functions (Yoon, 1996) and relevant autoclitics for children with no speech or other verbal functions (Ross & Greer, 2003; Tsiouri & Greer, 2003), (c) transformation of establishing operations across the mand and tact functions for children for whom a form taught in one function could not be used in an untaught function prior to multiple exemplar instruction (Nuzzolo & Greer, 2004), (d) the identification of interlocking speaker-as-own-listener operants in self-talk with typically developing children (Lodhi & Greer, 1989), (e) the induction of conversational units with children who had no history of peer conversational units (Donley & Greer, 1993), (f) the induction of naming in children who did not have naming prior to multiple exemplar instructional experience (Fiorile, 2005; Fiorile & Greer, 2006; Greer, et al., 2005b), (g) the emission of untaught past tenses for regular and irregular verbs as a function of multiple exemplar instruction (Greer & Yuan, 2004), (h) the emission of untaught contractions, morphemes, and suffix endings as a function of multiple exemplar experiences or having children tutor using multiple exemplar experiences (i.e., observational learning through multiple exemplars) (Greer, et al., 2004a; Speckman, 2004), (i) faster acquisition rates for textual responses as a function of conditioning books as preferred stimuli for observing (Longano & Greer, 2006; Tsai & Greer, 2003), and (j) the induction or expansion of echoic responding as a function of the acquisition of generalized auditory matching (Chavez-Brown, 2004).
The more advanced writer, writer-as-own-reader, or self-editing milestones are key complex cognitive repertoires. Research in this area includes: (a) teaching more effective writer effects on readers and structural responses of writing as a function of establishing operations for writing (Madho, 1997; Greer & Gifaldi, 2003; Reilly-Lawson & Greer, 2006), (b) the induction of rule-governed or verbally governed responding and its effects on the verbal stimulus control of algorithms (Keohane & Greer, 2005; Marsico, 1998; Nuzzolo-Gomez, 2002), (c) the role of multiple exemplar instruction in the emergence of metaphors (Meincke et al., 2003), (d) transformation of stimulus function across vocal and written responding (Greer, et al., 2004c), and (e) the acquisition of joint stimulus control across selection and production topographies (Gautreaux, et al., 2003). These more complex repertoires appear to build on the presence of speaker-as-own-listener capabilities.
While we are not ready to declare emphatically that the capabilities we have identified experimentally, or by extrapolation from experiments, have been definitively identified as verbal developmental stages, the evidence to date shows that they are useful for instructional functions. Furthermore, they suggest possible natural fractures in the development of verbal function (4). For typically developing children, these fractures may occur as a result of brief experiences with exemplars. For some typically developing 2-year-old children that we have studied, simply having a few experiences with exemplars going from listener to speaker, followed by single exemplars going from speaker to listener, resulted in bidirectional naming for 3-dimensional stimuli that they did not have prior to those separate and juxtaposed experiences (Gilic, 2004). While our children with language delays required the rapid rotation across listener and speaker exemplars to induce naming, typically developing children may need only the incidental rotation of speaker-listener experiences with single stimuli. It would appear that, now that these generative or productive verbal capabilities have been traced to experiences for the children we have studied, the claim by some (Pinker, 1999) that productive or generative verbal capabilities are not traceable to learning experiences is no longer credible.
Some of the research we described is not yet published and our references include papers presented at conferences or unpublished dissertations not yet submitted for publication. Thus, these are early days in our work on some of the stages. But it is important to note also that we have been on a quest for the last 20 years to remediate learning problems based on verbal behavior deficits in children with and without disabilities. The quest has moved forward based on progressively more complex strategic analyses as we stumbled on what we now believe may be developmental milestones in verbal behavior. We have replicated most of the effects we have identified with numerous children in our CABAS schools in the USA, England, and Ireland (Greer & Keohane, 2004; Greer, Keohane, & Healey, 2002). Thus, we believe that the evidence is robust and we hope that it can be useful to behavior analysts, neuroscientists, and linguists interested in a thorough analysis of the evolution of verbal behavior in children's development.
We have also speculated on the cultural evolution of verbal functions for our species relative to our proposed verbal developmental scheme (i.e., the role of cultural selection). Of course, theories on the evolution of language are so numerous that some linguistic societies have banned their proliferation; yet anthropologists and linguists are now suggesting that there is new evidence to support the evolution of language (Holden, 2004). Some linguistic anthropologists may find the cultural selection of verbal operants and higher order verbal operants useful in this regard. It is even possible that the capacity for higher order operants and relational frames constitutes what has heretofore been attributed to a universal grammar. Speaker and listener responses could have evolved from basic verbal operants to interlocking speaker and listener responding between individuals and within the skin of individuals (self-talk and naming)--an evolution made possible by our anatomical and physiological capacities to acquire higher order operants combined with cultural selection. Moreover, reading and writing functions probably evolved as extensions of the basic speaker and listener functions; without the latter, reading and writing would not have been possible, at least in the way they have evolved for the species.
"The human species, at its current level of evolution, is basically verbal, but it was not always so. ... A verbal behavior could have arisen from nonverbal sources and its transmission from generation to generation, would have been subject to influences which account for the multiplication of norms and controlling relations and the increasing effectiveness of verbal behavior as a whole." (Skinner, 1992, p.470)
Speaker/writer operants and listener/reader responses constitute an important, if not the most important, aspect of human behavior as adaptation to what is increasingly a verbal environment. Simply put, verbal behavior analysis is the most important subject of a science of behavior. We hope it is not too presumptuous of us to suggest that verbal behavior analysis can contribute to a developmental psychology that treats environmental contributions as seriously as it treats the non-environmental contributions. After all, biology has come to do so (Dugatkin, 1996; Robinson, 2004).
While we can simulate human listener and human speaker functions with nonhuman species (Epstein, et al., 1980; Savage-Rumbaugh et al., 1978), the simulation of naming and other speaker-as-own-listener functions with nonhuman species remains to be demonstrated. Premack (2004) argues from the data that nonhumans lack the capacity for recursion. "Recursion makes it possible for words in a sentence to be widely separated yet be dependent on one another." (Premack, 2004, p. 320). We suggest that recursion may have been made possible by the evolution of speaker-as-own-listener capabilities in humans as a function of both neural capabilities and cultural selection. Premack (2004) also presents evidence that teaching is a strictly human endeavor. "Unlike imitation, in which the novice observes the expert, the teacher observes the novice--and not only observes, but judges and modifies." (Premack, 2004, p. 320; D. Premack & A. Premack, 2003). This describes the interaction we have characterized as taking place in a learn unit. The conversational unit differs from the learn unit in that the conversational unit requires reciprocal observation. Observational repertoires like those Premack (2004) described may be fundamental components that underlie and presage the evolution of nonverbal to verbal behavior.
While observation has been studied as a phenomenon, few if any studies have sought the possible environmental source of observational learning. We argue that observational learning differs from other indirect effects on behavior in that observational learning results in the acquisition of new operants. Other types of observational effects on behavior result in the emission of operants that were already in the observer's repertoire. The kind of behavior change identified by Bandura (1986) was most likely of the latter sort, since the presence or absence of the operants was not determined prior to the observational experience. Imitation, in turn, results from a history that reinforces correspondence between the imitator's behavior and a model's behavior.
Some children do not have observational learning or have weak observational repertoires. In cases where observational learning has been missing, we have induced it by providing certain experiences. It may also be that children do not have observational learning until they have certain experiences. In one study, we increased observational learning as a function of having individuals act as tutors using learn units that required the tutors to reinforce or correct the responses of their tutees. It was the application of the learn unit per se, specifically the consequence component, that produced the new observational repertoire (Greer, et al., 2004a). In another case, with children who did not learn by observing peers, we taught them to monitor the learn unit responses of their peers, and observational learning emerged (Greer et al., 2004b; Pereira-Delgado, 2005).
This observing phenomenon involves a kind of consequent benefit similar to what the listener gains--specifically, the extension of sensory reinforcement. Perhaps the teaching capacity involving reinforcement of the observed behavior of the learner is related to particular listener capabilities, while the recursion phenomenon is related to the interlocking speaker-listener capability. It is the interlocking speaker-listener-as-own-listener functions that make the more sophisticated milestones of verbal function possible. These functions make thinking, problem solving, and true social discourse possible. They also support the development of repertoires compellingly described in relational frame theory (Hayes, et al., 2000). Speech--and, we argue, the compression of information through auditory stimuli in the human species--makes possible the more advanced speaker-as-own-listener or textual-responder-as-own-listener repertoires and perhaps, by extension, the phenomenon of recursion. Regardless of whether our interpretations of the evidence are compelling, the evidence does reveal that a more complete picture of verbal behavior is evolving and that the role of the listener, and particularly the interrelationship between speaker and listener, is key to further advances in our understanding of verbal functions and their development within the individual.
Verbal Behavior Analysis, Comparative Psychology and the Neuroscience of Language
None of the work that we have described, or related work in verbal behavior, obviates the role of genetically evolved brain functions--the neurology correlated with the presence of our suggested milestones of verbal behavior and with the generative aspects of behavior cum language. The research in verbal behavior does not question, or eliminate, the importance or usefulness of neuropsychological research. Conversely, the work in the neuroscience of language does not obviate the environmental verbal functions of language as behavior per se and as the higher order operants that are increasingly identified in verbal behavior analysis. They are simply different sciences concerned with different aspects of language. On the one hand, work in verbal behavior analysis is beginning to identify key environmental experiences in cultural selection and to suggest how neuropsychology can make the journey from MRI analyses to real verbal function--behaving with language outside of the skin. On the other hand, the work in the neurosciences of language is beginning to identify the behavior beneath the skin. It is compelling to consider the mutual benefit of relating these efforts to obtain a more comprehensive understanding of language. Most importantly, combining the evidence and types of inquiry from both fields can help us teach a few more children to be truly verbal.
Behavior analysts have simulated language functions in non-humans (Epstein, et al. 1980; Savage-Rumbaugh, et al., 1978) and comparative psychologists have identified differences between the verbal behavior of primates and the verbal behavior of humans (Premack, 2004). Non-human species have not demonstrated a speaker-as-own-listener status. However, research in verbal behavior analysis has led to the acquisition of listener repertoires, speaker repertoires, speaker as own listener repertoires, and generative verbal behavior in humans who did not have those repertoires prior to special environmental experiences. Perhaps work in verbal behavior analysis with individuals who can acquire verbal repertoires as a result of special interventions provides a bridge. While our particular work is driven by applied concerns, it may have some relevance to the basic science of behavior, comparative psychology, and the neuroscience of language.
Reprinted with permission from the 2005 issue of Behavioral Development, 1, 31-48. The references were updated from the original publication, relevant quotations were added, and minor editorial changes were made.
We would like to dedicate this paper to the memory of B. F. Skinner, who would have been 100 years old at its writing. His mentorship and encouragement of the first author served to motivate our efforts to master his complex book and engage in our experimental inquiries. We are also indebted to others who kept verbal behavior alive in times when the critics were harsh and the audience was narrow. Among these are Jack Michael, Charles Catania, Ernest Vargas, Julie Vargas, Mark Sundberg, U. T. Place, Kurt Salzinger, Joe Spradlin, Joel Greenspoon, and the children we worked with who needed what verbal behavior could offer in order for them to become social and more cognitively capable. While the audience remains narrow, we are confident that the effects of research in verbal behavior will select out a larger audience.
Baer, D. M. (1970). An age -irrelevant concept of development. Merrill Palmer Quarterly, 16, 238-245.
Bandura, A, (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice-Hall.
Barnes-Holmes, D., Barnes-Holmes, Y., & Cullinan, V. (2001). Relational frame theory and Skinner's Verbal Behavior. The Behavior Analyst, 23, 69-84.
Becker, B.J. (1989). The effect of mands and tacts on conversational units and other verbal operants. (Doctoral dissertation, 1989, Columbia University). Abstract from: UMI Proquest Digital Dissertations [on-line]. Dissertations Abstracts Item: AAT 8913097.
Becker, W. (1992). Direct instruction: A twenty-year review. In R. West & L. Hamerlynck, Design for educational excellence: The legacy of B. F. Skinner (pp.71-112). Longmont CO, Sopris West.
Bijou, S., & Baer, D. M. (1978). A behavior analysis of child development. Englewood Cliffs, NJ: Prentice-Hall.
Carr, E.G. & Durand, V.M. (1985). Reducing behavior problems through functional communication training. Journal of Applied Behavior Analysis, 18, 111-126.
Catania, A.C. (1998). Learning. Englewood Cliffs, N.J: Prentice-Hall.
Catania, A. C. (2001). Three types of selection and three centuries. Revista Internacional de Psicologia y Terapia Psicologica, 1(1), 1-10.
Catania, A. C., Mathews, B. A., & Shimoff, E. H., (1990). Properties of rule governed behavior and their implications. In D. E. Blackman and H. Lejeune (Eds.) Behavior analysis in theory and practice, (pp. 215-230). Hillsdale NJ: Erlbaum.
Chavez-Brown, M. (2004). The effect of the acquisition of a generalized auditory word match-to-sample repertoire on the echoic repertoire under mand and tact conditions. Dissertation Abstracts International, 66(01), 138A. (UMI No. 3159725).
Chavez-Brown, M. & Greer, R. D. (2003, July) The effect of auditory matching on echoic responding. Paper Presented at the First European Association for Behavior Analysis in Parma Italy.
Chomsky, N. (1959). A review of B.F. Skinner's Verbal Behavior. Language, 35, 26-58.
Chomsky, N. & Place, U. (2000). The Chomsky-Place Correspondence 1993-1994, Edited with an introduction and suggested readings by Ted Schoneberger. The Analysis of Verbal Behavior, 17, 738.
Chu, H.C. (1998). A comparison of verbal behavior and social skills approaches for development of social interaction skills and concurrent reduction of aberrant behaviors of children with developmental disabilities in the context of matching theory. Dissertation Abstracts International, 59(06), 1974A. (UMI No. 9838900).
Culotta, E., & Hanson, B. (2004). First words. Science, 303, 1315
Decasper, A. J., & Spence, M. J. (1987). Prenatal maternal speech influences on newborns' perception of speech sounds. Infant Behavior and Development, 2, 133-150.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the brain. New York: W. W. Norton & Company.
Donley, C. R., & Greer, R. D. (1993). Setting events controlling social verbal exchanges between students with developmental delays. Journal of Behavioral Education, 3(4), 387-401.
Dugatkin, L. A. (1996). The interface between culturally-based preference and genetic preference: Female mate choices in Pecilla Roticulata. Proceedings of the National Academy of Science, USA, 93, 2770-2773.
Emurian, H. H., Hu, X., Wang, J., & Durham, D. (2000). Learning JAVA: A programmed instruction approach using applets. Computers in Human Behavior, 16, 395-422.
Epstein, R, Lanza, R. P., & Skinner, B. F. (1980). Symbolic communication between two pigeons (Columbia livia domestica). Science, 207 (no. 4430), 543-545.
Fiorile, C.A. (2004). An experimental analysis of the transformation of stimulus function from speaker to listener to speaker repertoires. Dissertation Abstracts International, 66(01), 139A. (UMI No. 3159736).
Fiorile, C. A. & Greer, R. D. (2006). The induction of naming in children with no echoic-to-tact responses as a function of multiple exemplar instruction. Manuscript submitted.
Gautreaux, G., Keohane, D. D., & Greer, R. D. (2003, July). Transformation of production and selection functions in geometry as a function of multiple exemplar instruction. Paper presented at the First Congress of the European Association for Behavior Analysis, Parma, Italy.
Gewirtz, J. L., Baer, D. M., & Roth, C. L. (1958). A note on the similar effects of low social availability of an adult and brief social deprivation on young children's behavior. Child Development, 29, 149-152.
Gilic, L. (2005). Development of naming in two-year-old children. Unpublished doctoral dissertation, Columbia University.
Greenwood, C. R., Delquadri, J. C., & Hall, R. V. (1984). Opportunity to respond and student achievement. In W. L. Heward, T. E. Heron, J. Trapp-Porter, & Hill, D. S., Focus on behavior analysis in Education (pp. 58-88). Columbus OH: Charles Merrill.
Greer, R. D. (1987). A manual of teaching operations for verbal behavior. Yonkers, NY: CABAS and The Fred S. Keller School.
Greer, R. D. (2002). Designing teaching strategies: An applied behavior analysis systems approach. New York: Academic Press.
Greer, R. D., Chavez-Brown, M. Nirgudkar, A. S., Stolfi, L., & Rivera-Valdes, C. (2005a). Acquisition of fluent listener responses and the educational advancement of young children with autism and severe language delays. European Journal of Behavior Analysis, 6 (2), xxxxx-xxx.
Greer, R. D., Gifaldi, H., & Pereira, J. A. (2003, July). Effects of Writer Immersion on Functional Writing by Middle School Students. Paper presented at the First Congress of the European Association for Behavior Analysis, Parma, Italy.
Greer, R. D., & Keohane, D. (2004). A real science and technology of teaching. In J. Moran & R. Malott, (Eds.), Evidence-Based Educational Methods (pp. 23-46). New York: Elsevier/Academic Press.
Greer, R. D., Keohane, D. D., & Healey, O. (2002). Quality and applied behavior analysis. The Behavior Analyst Today, 3(1). Retrieved December 20, 2002, from http://www.behavior-analyst-online.com
Greer, R. D., Keohane, D., Meincke, K., Gautreaux, G., Pereira, J., Chavez-Brown, M., & Yuan, L. (2004a). Key components of effective tutoring. In D. J. Moran & R. W. Malott, (Eds.), Evidence-Based Educational Methods (pp. 295-334). New York: Elsevier/Academic Press.
Greer, R. D. & McCorkle, N. P. (2003). CABAS[R] Curriculum and Inventory of Repertoires for Children from Pre-School through Kindergarten, 3rd Edition. Yonkers, NY: CABAS[R]/Fred S. Keller School. (Publication for use in CABAS[R] Schools only)
Greer, R.D., McCorkle. N. P., & Williams, G. (1989). A sustained analysis of the behaviors of schooling. Behavioral Residential Treatment, 4, 113-141.
Greer, R. D., & McDonough, S. (1999). Is the learn unit the fundamental measure of pedagogy? The Behavior Analyst, 20, 5-16.
Greer, R. D., Nirgudkar, A., & Park, H. (2003, May). The effect of multiple exemplar instruction on the transformation of mand and tact functions. Paper Presented at the International Conference of the Association for Behavior Analysis, San Francisco, CA.
Greer, R. D., Pereira, J. & Yuan, L. (2004b, August). The effects of teaching children to monitor learn unit responses on the acquisition of observational learning. Paper presented at the Second International Conference of the Association of Behavior Analysis, Campinas, Brazil.
Greer, R. D., & Ross, D. E. (2004). Research in the Induction and Expansion of Complex Verbal Behavior. Journal of Early Intensive Behavioral Interventions, 1.2, 141-165. Retrieved May 20, 2005 from http:/www.the-behavior-analyst-today.com
Greer, R.D., & Ross (in press). Verbal behavior analysis: Developing and expanding complex communication in children severe language delays. Boston: Allyn and Bacon.
Greer, R. D., Stolfi, L., Chavez-Brown, M., & Rivera-Valdez, C. (2005b). The emergence of the listener to speaker component of naming in children as a function of multiple exemplar instruction. The Analysis of Verbal Behavior, 21, 123-134.
Greer, R. D. & Yuan, L. (2004, August). Kids say the darnedest things. Paper presented at the International Conference of the Association for Behavior Analysis and the Brazil Association for Behavior Medicine and Therapy.
Greer, R. D. Yuan, L. & Gautreaux, G. (2005c). Novel dictation and intraverbal responses as a function of a multiple exemplar history. The Analysis of Verbal Behavior, 21, 99-116.
Hart, B. M., & Risley, T. R. (1975). Incidental teaching of language in the preschool. Journal of Applied Behavior Analysis, 8, 411-420.
Hart, B., & Risley, T. R. (1996). Meaningful differences in the everyday life of America's children. NY: Paul Brookes.
Hayes, S., Barnes-Holmes, D., & Roche, B. (2000). Relational frame theory: A post-Skinnerian account of human language and cognition. New York: Kluwer Academic/Plenum.
Heagle, A. I., & Rehfeldt, R. A. (2006). Teaching perspective taking skills to typically developing children through derived relational responding. Journal of Early and Intensive Behavior Interventions, 3 (1), 1-34. Available online at http:/www.the-behavior-analyst-online.com
Holden, C. (2004). The origin of speech. Science, 303, 1316-1319.
Horne, P. J. & Lowe, C. F. (1996). On the origins of naming and other symbolic behavior. Journal of the Experimental Analysis of Behavior, 65, 185-241.
Jadlowski, S.M. (2000). The effects of a teacher editor, peer editing, and serving as a peer editor on elementary students' self-editing behavior. Dissertation Abstracts International, 61(05), 2796B, (UMI No. 9970212).
Karchmer, M.A. & Mitchell, R.E. (2003). Demographic and achievement characteristics of deaf and hard-of-hearing students. In Marschark, M & Spencer, P.E. (ed.) Deaf Studies, Language, and Education. Oxford, England: Oxford University Press.
Karmali, I. Greer, R. D., Nuzzolo-Gomez, R., Ross, D. E., & Rivera-Valdes, C. (2005). Reducing palilalia by presenting tact corrections to young children with autism, The Analysis of Verbal Behavior, 21, 145-154.
Keohane, D, & Greer, R. D. (2005). Teachers use of a verbally governed algorithm and student learning. Journal of Behavioral and Consultation Therapy, 1 (3), 249-259. Retrieved February 22, 2006 from http:/www.the-behavior-analyst-today.com
Keohane, D, Greer, R. D. & Ackerman, S. (2005a, November). Conditioned observation for visual stimuli and rate of learning. Paper presented at the third conference of the International Association for Behavior Analysis, Beijing, China.
Keohane, D., Greer, R. D., & Ackerman, S. (2005b, November). Training sameness across senses and accelerated learning. Paper presented at the third conference of the International Association for Behavior Analysis, Beijing, China.
Keohane, D. D., Greer, R. D., Mariano-Lapidus (2004, May). Derived suffixes as a function of a multiple exemplar instruction. Paper presented at the Annual Conference of the Association for Behavior Analysis, Boston, MA.
Lamarre, J. & Holland, J. (1985). The functional independence of mands and tacts. Journal of Experimental Analysis of Behavior, 43, 5-19.
Lodhi, S. & Greer, R.D. (1989). The speaker as listener. Journal of the Experimental Analysis of Behavior, 51, 353-360.
Longano, J. & Greer, R. D. (2006). The effects of a stimulus-stimulus pairing procedure on the acquisition of conditioned reinforcement for observing and manipulating stimuli by young children with autism. Journal of Early and Intensive Behavior Interventions, 3.1,135-150. Retrieved February 22 from http://www.behavior-analyst-online.com
Lovaas, O.I. (1977). The autistic child: Language development through behavior modification. New York: Irvington Publishers, Inc.
Lowe, C. F., Horne, P. J., Harris, D. S., & Randle, V. R.L. (2002). Naming and categorization in young children: Vocal tact training. Journal of the Experimental Analysis of Behavior, 78, 527-549.
MacCorquodale, K. (1970). On Chomsky's review of Skinner's Verbal Behavior. Journal of the Experimental Analysis of Behavior, 13, 83-99.
Madho, V. (1997). The effects of the responses of a reader on the writing effectiveness of children with developmental disorders. (Doctoral dissertation, Columbia University, 1997). Abstract from: UMI Proquest Digital Dissertations [on-line]. Dissertations Abstracts Item: AAT 9809740.
Marion, C., Vause, T., Harapiak, S., Martin, G. L., Yu, T., Sakko, G., & Walters, K. L. (2003). The hierarchical relationship between several visual and auditory discriminations and three verbal operants among individuals with developmental disabilities. The Analysis of Verbal Behavior, 19, 91-106.
Marsico, M.J. (1998). Textual stimulus control of independent math performance and generalization to reading. Dissertation Abstracts International, 59(01), 133A. (UMI No. 9822227).
McDuff, G. S., Krantz, P. J., McDuff, M. A., & McClannahan, L. E. (1988). Providing incidental teaching for autistic children: A rapid training procedure for therapists. Education and Treatment of Children, 11, 205-217.
Meincke-Mathews, K. (2005). Induction of metaphorical responses in middle school students as a function of multiple exemplar experiences. Dissertation Abstracts International, 66(05), 1716A. (UMI No. 3174851).
Meincke, K., Keohane, D. D., Gifaldi, H. & Greer (2003, July). Novel production of metaphors as a function of multiple exemplar instruction. Paper presented at the First European Association for Behavior Analysis Congress, Parma, Italy.
Michael, J. (1993). Establishing operations. The Behavior Analyst, 16, 191-206.
Michael, J. (1982). Skinner's elementary verbal relations: Some new categories. The Analysis of Verbal Behavior, 1,1-3.
Michael, J. (1984). Verbal behavior. Journal of the Experimental Analysis of Behavior, 42, 363-376.
Morris, E. K. (2002). Age irrelevant contributions to developmental science: In remembrance of Donald M. Baer. Behavioral Development Bulletin, 1, 52-54.
Nuzzolo-Gomez, R. (2002). The Effects of direct and observed supervisor learn units on the scientific tacts and instructional strategies of teachers. Dissertation Abstracts International, 63(03), 907A. (UMI No. 3048206).
Nuzzolo-Gomez, R. & Greer, R. D. (2004). Emergence of Untaught Mands or Tacts with Novel Adjective-Object Pairs as a Function of Instructional History. The Analysis of Verbal Behavior, 24, 30-47.
Park, H. L. (2005) Multiple exemplar instruction and transformation of stimulus function from auditory-visual matching to visual-visual matching. Dissertation Abstracts International, 66(05), 1715A. (UMI No. 3174834).
Pereira-Delgado, J. A. (2005). Effects of peer monitoring on the acquisition of observational learning. Unpublished doctoral dissertation, Columbia University.
Pennisi, E. (2004, February 24). The first language? Science, 303 (5662), 1319-1320.
Pinker, S. (1999). Words and rules. New York: Perennial.
Pistoljevic, N. and Greer, R. D. (2006). The Effects of Daily Intensive Tact Instruction on Preschool Students' Emission of Pure Tacts and Mands in Non-Instructional Settings. Journal of Early and Intensive Behavioral Interventions, 103-120. Available online at http://www.behavior-analyst-online.org
Premack, D. (2004, January 16). Is language key to human intelligence? Science, 303, 318-320.
Premack, D. & Premack, A. (2003). Original intelligence. New York: McGraw-Hill.
Reilly-Lawson, T. & Greer, R. D. (2006). Teaching the function of writing to middle school students with academic delays. Journal of Early and Intensive Behavior Interventions, 3.1,135-150. Available online at http://www.behavior-analyst-online.org
Robinson, G. E. (2004, April). Beyond nature and nurture. Science, 304, 397-399.
Rosales-Ruiz, J. & Baer, D. M. (1996). A behavior analytic view of development pp. 155-180. In S. M. Bijou, New Directions in Behavior Development. Reno, NV, Context Press.
Ross, D. E. & Greer, R. D. (2003). Generalized imitation and the mand: Inducing first instances of speech in young children with autism. Research in Developmental Disabilities, 24, 58-74.
Ross, D. E., Nuzzolo, R., Stolfi, L., & Natarelli, S. (2006). Effects of speaker immersion on the spontaneous speaker behavior of preschool children with communication delays. Journal of Early and Intensive Behavior Interventions, 3.1, 135-150. Available online at http://www.behavioranalystonline.com
Savage-Rumbaugh, E. S., Rumbaugh, D. M. & Boysen, S. (1978). Science, 201, 64-66.
Schauffler, G. and Greer, R. D. (2006). The Effects of Intensive Tact Instruction on Audience-Accurate Tacts and Conversational Units. Journal of Early and Intensive Behavioral Interventions, 120-132. Available online at http://www.behavior-analyst-online.com
Schwartz, B.S. (1994). A comparison of establishing operations for teaching mands. Dissertation Abstracts International, 55(04), 932A. (UMI No. 9424540).
Selinski, J, Greer, R.D., & Lodhi, S. (1991). A functional analysis of the Comprehensive Application of Behavior Analysis to Schooling. Journal of Applied Behavior Analysis, 24, 108-118.
Sidman, M. (1994). Equivalence relations and behavior: A research story. Boston, MA: Authors Cooperative.
Skinner, B. F. (1989). The behavior of the listener. In S. C. Hayes (Ed.), Rule-governed behavior: Cognition, contingencies and instructional control (85-96). New York: Plenum.
Skinner, B.F. (1957, 1992). Verbal Behavior. Acton, MA: Copley Publishing Group and the B. F. Skinner Foundation.
Speckman, J. (2005). Multiple exemplar instruction and the emergence of generative production of suffixes as autoclitic frames. Dissertation Abstracts International, 66(01), 83A. (UMI No. 3159757).
Sundberg, M. L., Loeb, M., Hale, L., & Eighenheer (2001/2002). Contriving establishing operations for teaching mands for information. The Analysis of Verbal Behavior, 18, 15-30.
Sundberg, M.L., Michael, J., Partington, J.W., & Sundberg, C.A. (1996). The role of automatic reinforcement in early language acquisition. The Analysis of Verbal Behavior, 13, 21-37.
Sundberg, M.L. & Partington, J.W. (1998). Teaching language to children with autism or other developmental disabilities. Pleasant Hill CA: Behavior Analysts, Inc.
Tsai, H., & Greer, R. D. (2006). Conditioned preference for books and accelerated acquisition of textual responding by preschool children. Journal of Early and Intensive Behavior Interventions, 3.1, 35-61. Available online at http://www.the-behavior-analyst.com
Tsiouri, I., & Greer, R. D. (2003). Inducing vocal verbal behavior through rapid motor imitation training in young children with language delays. Journal of Behavioral Education, 12, 185-206.
Twyman, J.S (1996a). An analysis of functional independence within and between secondary verbal operants. Dissertation Abstracts International, 57(05), 2022A. (UMI No. 9631793).
Twyman, J. (1996b). The functional independence of impure mands and tacts of abstract stimulus properties. The Analysis of Verbal Behavior, 13, 1-19.
Vargas, E.A. (1982). Intraverbal behavior: The codic, duplic, and sequelic subtypes. The Analysis of Verbal Behavior, 1, 5-7.
Williams, G. & Greer, R.D. (1993). A comparison of verbal-behavior and linguistic -communication curricula for training developmentally delayed adolescents to acquire and maintain vocal speech. Behaviorology, 1, 31-46.
Yoon, S.Y. (1998). Effects of an adult's vocal sound paired with a reinforcing event on the subsequent acquisition of mand functions. Dissertation Abstracts International, 59(07), 2338A. (UMI No. 9839031).
Author Contact Information
R. Douglas Greer, Ph.D., SBA, SRS
Graduate School of Arts & Sciences and Columbia University Teachers College
Box 76 Teachers College Columbia University
525 West 120th
New York NY, 10027
Dolleen-Day Keohane, Ph.D., SBA, Asst.RS
CABAS Schools and Columbia University Teachers College
2728 Henry Hudson Parkway
Riverdale, NY 10463
(1.) For information on, and the evidence base for, teaching as a science in CABAS schools and the CABAS[R] System, see Greer (2002), Greer, Keohane, & Healey (2002), Selinski, Greer, & Lodhi (1991), Greer, McCorkle, & Williams (1989), and http://www.cabas.com. The findings of the research we describe have been replicated extensively with children and adolescents in CABAS[R] Schools in the USA, Ireland, Argentina, and England, and we believe they are robust. A book that describes the verbal behavior research and procedures in detail is in progress for publication in 2006 (Greer & Ross, in progress).
(2.) We chose the term listener emersion because it seemed particularly appropriate. The Oxford English Dictionary 2nd Edition, Volume V describes one usage of the term emersion as follows, "The action of coming out or issuing (from concealment or confinement). Somewhat rare." (OED, p. 177) Thus, once a child has acquired the listener repertoire, the child may be said to have come out of confinement to a pre-listener status. They have acquired an essential component of what is necessary to progress along the verbal behavior continuum--a verbal behavior development cusp.
(3.) It would seem that a certain history must transpire in order for a point-to-point correspondence between a word spoken by a parent and the repetition of the word by a child to qualify as an echoic operant rather than parroting. The child needs to say the word under the relevant deprivation conditions associated with the mand or the tact and then have that echoic evolve into either a mand or a tact. Once at least one of these events transpires, the parroting can move to an echoic. While more sophisticated operants and higher order operants or relational frames are basic to many sophisticated aspects of verbal behavior, the move from parroting is probably just as complex. The acquisition of echoing is the fundamental speech component of verbal functioning. One wonders how long, and under what conditions, it took for the echoic repertoire to evolve in our species. To evoke true echoics in children who have never spoken is probably one of the major accomplishments of the behavioral sciences. Indeed, the procedures we now use in verbal behavior analysis to induce first instances of vocal verbal operants have never been tried with primates, nor has the procedure to induce parroting. However, procedures for inducing parroting and echoics and other first instances of vocal verbal behavior have been successful in developing functional vocal verbal behavior in individuals who probably would have never spoken without these procedures. Amazing! There are even more fundamental components underlying even these response capabilities and aspects of observation show rich potential (Premack, 2004).
(4.) We use the term natural fracture to differentiate numerically scaled hypothetical relations from relations that are absolute natural events as in the determination of geological time by the identification of strata. To further illustrate our point, "receptive speech" is a hypothetical construct based on an analogy made between the computer "receiving inputs" to auditory speech events. It is an analogy, not a behavior or response class. Measures of receptive behavior are scaled measures tied to that analogy, as in test scores on "receptive" speech. However, listener behavior is composed of actual natural fractures (i.e., the child does or does not respond to spoken speech by another). In still another example, operants are natural fractures, whereas an IQ is a scaled measure of a hypothetical construct. Moreover, acquisitions of higher order operants such as the acquisition of joint stimulus control for spelling are also natural fractures.
Table 1. Evolution of Verbal Milestones and Independence

1) Pre-Listener Status. Humans without listener repertoires are entirely dependent on others for their lives. Interdependency is not possible. Entrance to the social community is not possible.

2) Listener Status. Humans with basic listener literacy can perform verbally governed behavior (e.g., come here, stop, eat). They can comply with instructions, track tasks (e.g., do this, now do this), and avoid deleterious consequences while gaining habilitative responses. The individual is still dependent, but direct physical or visual contact can be replaced somewhat by indirect verbal governance. Contributions to the well-being of society become possible since some interdependency is feasible and the child enters the social community.

3) Speaker Status. Humans who are speakers and who are in the presence of a listener can govern consequences in their environment by using another individual to mediate the contingencies (e.g., eat now, toilet, coat, help). They emit mands and tacts and relevant autoclitics to govern others. This is a significant step towards control of the contingencies by the speaker. The culture benefits proportionately too, and the capacity to be part of the social community is greatly expanded.

4) Speaker-Listener Exchanges with Others (Sequelics and Conversational Units). a) Sequelics. Humans with this repertoire can respond as a listener-speaker to intraverbals, including impure tacts and impure mands. Individuals can respond to questions for mand or tact functions or to intraverbals that do not have mand or tact functions. The individual can respond as a speaker to verbal antecedents and can answer the queries of others such as "What hurts?" "What do you want?" "What's that?" "What do you see, hear or feel?" One is reinforced as a listener with the effects of the speaker response. b) Conversational Units. Humans with this repertoire carry on conversational units in which they are reinforced as both speaker and listener. The individual engages in interlocking verbal operants of speaker and listener. The individual is reinforced both as a listener for sensory extensions and as a speaker in the effects speaking has on having a listener mediate the environment for the speaker.

5) Speaker-as-Own-Listener Status (Say-Do, Conversational Units, Naming). a) Say and Do. Individuals with this repertoire can function as a listener to their own verbal behavior (e.g., first I do this, then I do that), reconstructing the verbal behavior given by another or eventually constructing verbal speaker-listener behavior. At this stage, the person achieves significant independence. The level of independence is dependent on the level of the person's listener and speaker sophistication. b) Self-talk. When a human functions as a reinforced listener and speaker within the same skin, they have one of the repertoires of speaker-as-own-listener. The early evidence of this function is self-talk; young children emit such repertoires when playing with toys, for example (Lodhi & Greer, 1989). c) Naming. When an individual hears a speaker's vocal term for a nonverbal stimulus as a listener and can use it both as a speaker and listener without direct instruction, the individual has another repertoire of speaker as own listener. This stage provides the means to expand verbal forms and functions through incidental exposure.

6) Reader Status. Humans who have reading repertoires can supply useful, entertaining, and necessary responses to setting events and environmental contingencies that are obtainable by written text. The reader may use the verbal material without the time constraints controlling the speaker-listener relationship. The advice of the writer is under greater reader control than the advice of a speaker for a listener; that is, one is not limited by time or distance. Advice is accessible as needed, independent of the presence of a speaker.

7) Writer Status. A competent writer may control environmental contingencies through the mediation of a reader across seconds or centuries, whether the reader is in the immediate vicinity or on a remote continent. This stage represents an expansion of the speaker repertoires such that a listener need not be present at the same time or in the same location as the writer. The writer affects the behavior of a reader.

8) Writer as Own Reader: The Self-Editing Status. As writers increase their ability to read their own writing from the perspective of the eventual audience, writers grow increasingly independent of frequent reliance on prosthetic audiences (e.g., teachers, supervisors, colleagues). A more finished and more effective behavior-evoking repertoire provides the writer with wide-ranging control over environmental contingencies such that time and distance can be virtually eliminated. Writing can be geared to affect different audiences without immediate responses from the target audience.

9) Verbal Mediation for Solving Problems. A sophisticated self-editor under the verbal expertise associated with formal approaches to problem solving (e.g., methods of science, logic, authority) can solve complex problems in progressively independent fashion under the control of verbal stimuli (spoken or written). The characterization of the problem is done with precise verbal descriptions. The verbal descriptions occasion other verbal behavior that can in turn direct the action of the person to solve the particular problem. A particular verbal community (i.e., a discipline) is based on verbal expertise, and modes of inquiry are made possible.

Table 2. Verbal Milestones and Components (Does the Child Have These Capabilities?)

Pre-listener:
* Conditioned reinforcement for voices (voices of others control prolonged auditory observation and can set the stage for visual or other sensory discriminations) (Decasper & Spence, 1987)
* Visual tracking (visual stimuli control prolonged observation) (Keohane, Greer, & Ackerman, 2005a)
* Capacity for "sameness" across senses (multiple exemplar experiences of matching across olfactory, auditory, visual, gustatory, and tactile stimuli result in the capacity for sameness across senses) (Keohane, Greer, & Ackerman, 2005b)
* Basic compliance based on visual contexts and the teacher or parent as a source of reinforcement (the child need not be under any verbal control)

Listener:
* Discrimination between words and sounds that are not words (conditioned reinforcement for voices occasions further distinctions for auditory vocal stimuli)
* Auditory matching of certain words (as a selection/listener response) (Chavez-Brown, 2005; Greer & Chavez-Brown, 2003)
* Generalized auditory matching of words (as a selection/listener response) (Chavez-Brown, 2005)
* Basic listener literacy with non-speaker responses (Greer, Chavez-Brown, Nirgudkar, Stolfi, & Rivera-Valdes, 2005)
* Visual discrimination instruction to occasion opportunities for instruction in naming (Greer & Ross, in press)
* Naming (Greer, Stolfi, Chavez-Brown, & Rivera-Valdes, 2005)
* Observational naming and observational learning prerequisites (Greer, Keohane, Meincke, Gautreaux, Pereira, Chavez-Brown, & Yuan, 2004)
* Reinforcement as a listener (a listener is reinforced by the effect the speaker has on extending the listener's sensory experience; the listener avoids deleterious consequences and obtains vicarious sensory reinforcement) (Donley & Greer, 1993)
* Listening to one's own speaking (the listener is speaker) (Lodhi & Greer, 1989)
* Listening to one's own textual responses in joining print to the naming relation (Park, 2005)
* Listening and changing perspectives: mine, yours, here, there, empathy (extension of listener reinforcement joins speaker) (Heagle & Rehfeldt, 2006)

Speaker:
* Vocalizations
* Parroting (pre-echoic vocalizations with point-to-point correspondence, here-say joins see-do as a higher order operant), auditory matching as a production response (Sundberg, Michael, Partington, & Sundberg, 1996)
* Echoics that occur when see-do (imitation) joins hear-say (echoic) as a higher order duplic operant (Ross & Greer, 2003; Tsiouri & Greer, 2003)
* [Faulty echoics of echolalia and palilalia related to faulty stimulus control or establishing operation control] (Karmali, Greer, Nuzzolo-Gomez, Ross, & Rivera-Valdes, 2005)
* Basic echoic-to-mand function (a consequence is specified in and out of sight; here-say attains function for a few verbalizations leading to rapid expansion of echoics for functions mediated by a listener) (Ross & Greer, 2003; Yoon, 1996)
* Echoic-to-tact function (generalized reinforcement control; the child must have conditioned reinforcement for social attention) (Tsiouri & Greer, 2003)
* Mands and tacts and related autoclitics are independent (learning a form in one function does not result in use in another without direct instruction) (Twyman, 1996a, 1996b)
* Mands and tacts with basic adjective-object pairs acquire autoclitic functions (a response learned in one function results in usage in another under the control of the relevant establishing operation) (Nuzzolo-Gomez & Greer, 2005); this transformation of establishing operations across mands and tacts was replicated by Greer, Nirgudkar, & Park (2003)
* Impure mands (mands under multiple control--deprivation plus verbal stimuli of others, visual, olfactory, tactile, gustatory stimuli) (Carr & Durand, 1985)
* Impure tacts (tacts under multiple controls--deprivation of generalized reinforcers plus verbal stimuli of others, visual, olfactory, tactile, gustatory stimuli) (Tsiouri & Greer, 2003)
* Tacts and mands emerging from incidental experience (naming and the speaker repertoires) (Fiorile, 2004; Fiorile & Greer, 2006; Greer, et al., 2005b; Gilic, 2005)
* Comparatives: smaller/larger, shorter/longer, taller/shorter, warmer/colder in mand and tact functions as a generative function (Speckman, 2005)
* Generative tense usage (Greer & Yuan, 2004)
* "Wh" questions in mand and tact function (i.e., what, who, why, where, when, which) (Pistoljevic & Greer, 2006)
* Expansion of tact repertoires resulting in greater "spontaneous" speech (Pistoljevic & Greer, 2006; Schauffler & Greer, 2006)

Speaker-Listener Exchanges with Others:
* Sequelics as speaker (Becker, 1989)
* Sequelics as listener-speaker (Becker, 1989; Donley & Greer, 1993)
* Conversational units (reciprocal speaker and listener control) (Donley & Greer, 1993)

Speaker as Own Listener:
* Basic naming from the speaker perspective (learns tact and has listener response) (Fiorile & Greer, 2006; Horne & Lowe, 1996)
* Observational naming from the speaker perspective (hears others learn tact and has tact) (Fiorile & Greer, 2006; Greer, et al., 2004b)
* Verbal governance of own speaker responses (say-and-do correspondence as an extension of listener literacy for correspondence with what others say and nonverbal correspondence that is reinforced) (Rosales-Ruiz & Baer, 1996)
* Conversational units in self-talk (listener and speaker functions within one's own skin in mutually reinforcing exchanges) (Lodhi & Greer, 1989)

Early Reader:
* Conditioned reinforcement for observing books (Tsai & Greer, 2006)
* Textual responses: see word-say word at an adequate rate, improved by prior conditioning of print stimuli as conditioned reinforcement for observing (Tsai & Greer, 2006)
* Match printed word, spoken word by others and self and printed word, spoken word and picture/object, printed word and picture/action (Park, 2005)
* Responds as listener to own textual responding (vocal verbalization results in "comprehension" if the verbalizations are in the tact repertoire, e.g., hearing the tact occasions a match of speech with nonverbal stimuli)

Writer:
* Effortless component motor skills of printing or typing (see-write as an extension of see-do)
* Acquisition of joint stimulus control across written and spoken responding (learning one response, either vocal or written, results in the other) (Greer, Yuan, & Gautreaux, 2005)
* Writer affects the behavior of a reader for technical functions (mand, tact, autoclitic functions) (Reilly-Lawson & Greer, 2006)
* Transformation of stimulus function for metaphoric functions (word used metaphorically, such as in "she is sharp as a pin") (Meincke-Mathews, 2005; Meincke, Greer, Keohane & Mariano-Lapidus, 2003)
* Writes to affect the emotions of a reader for aesthetic functions (mand, tact, autoclitic functions as well as simile and metaphor for prose, poetry, and drama, and meter and rhyme scheme for poetry)

Writer as Own Reader:
* Is verbally governed by own writing for revision functions (finds discrepancies between what she reads and what she has written; writer and reader in the same skin) (Madho, 1997; Reilly-Lawson & Greer, 2006)
* Verbally governs a technical audience by reading what is written as would the target audience (editing without assistance from others; acquires the listener function of the target audience, requiring joint stimulus control between the writer and the listener audience) (Reilly-Lawson & Greer, 2006)
* Verbally governs an aesthetic audience as a function of reading what is written as would the target audience (editing without assistance from others; acquires the aesthetic listener function of the target audience with tolerance for ambiguity) (Meincke-Mathews, 2005)

Verbal Mediation for Problem Solving:
* Is verbally governed by print to perform simple operations (verbal stimuli control operations) (Marsico, 1998)
* Is verbally governed by print to learn new stimulus control and multiple-step operations (the characterization of the problem is done with precise verbal descriptions; the verbal descriptions occasion other verbal behavior that can in turn direct the action of the person to solve the particular problem) (Keohane & Greer, 2005). A particular verbal community, or discipline, is based on verbal expertise tied to the environment, and modes of inquiry are made possible.

| http://www.thefreelibrary.com/The+evolution+of+verbal+behavior+in+children.-a0217040852 | 13
66 | This is the prototype page for Elementary Calculus. Content written here may be transferred there once a section is complete and its content is deemed of an acceptable standard for the Elementary Calculus book.
A function generates exactly one value for each element of its domain. Whether a curve is the graph of a function can be tested using the vertical line test.
Let's try to express the area of an equilateral triangle as a function of one of its sides. This problem can be solved by using the properties of the 30-60-90 special right triangle. We know that a 30-60-90 right triangle has side ratio 1 : √3 : 2, and that an equilateral triangle can be divided into two 30-60-90 right triangles. Let x denote the base of the equilateral triangle and A(x) its area. Then the height of the triangle is (√3/2)x, so A(x) = (1/2) · x · (√3/2)x = (√3/4)x^2.
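A quick numerical sanity check of this formula is sketched below in Python, comparing the closed form against Heron's formula (the helper names here are arbitrary):

```python
import math

def area_closed_form(x: float) -> float:
    # A(x) = (sqrt(3) / 4) * x^2 for an equilateral triangle with side x
    return math.sqrt(3) / 4 * x ** 2

def area_heron(x: float) -> float:
    # Heron's formula with all three sides equal to x
    s = 3 * x / 2                      # semi-perimeter
    return math.sqrt(s * (s - x) ** 3)

for side in (1.0, 2.5, 10.0):
    assert math.isclose(area_closed_form(side), area_heron(side))
print(area_closed_form(2.0))  # 1.732..., i.e. sqrt(3)
```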
Functions can be classified as power functions, root functions, polynomials of a given degree, rational functions, algebraic functions, trigonometric functions, exponential functions, or logarithmic functions. Let's try to classify some functions ourselves. For example, √x is a root function; a combination of roots and rational operations such as (√x + x^2)/(x - 1) is an algebraic function; 2x^9 - x^5 + 4 is a polynomial of degree 9; (x^2 + 1)/(x - 3) is a rational function; sin x is a trigonometric function; and log x is a logarithmic function. Transcendental functions include trigonometric, inverse trigonometric, exponential, and logarithmic functions.
In the statistical study of linear regression, there is a way to conceptualize a function as the joint distribution formed by the marginal distribution P(x) of the explanatory variable x and the conditional distribution P(y|x) of the response variable y. The explanatory variable is also called the exogenous or input variable; the response variable is also called the endogenous or output variable. Of course, there can be more than one explanatory variable in the marginal distribution and more than one response variable in the conditional distribution.
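To make this concrete, here is a small sketch (with an arbitrary choice of distributions) that samples x from a marginal P(x) and then y from a conditional P(y|x), so that the pairs (x, y) follow the joint distribution P(x)P(y|x):

```python
import numpy as np

rng = np.random.default_rng(0)

# Marginal distribution P(x) of the explanatory variable.
x = rng.normal(loc=0.0, scale=1.0, size=1000)

# Conditional distribution P(y | x) of the response variable: y ~ N(2x + 1, 0.5^2).
y = rng.normal(loc=2.0 * x + 1.0, scale=0.5)

# The pairs (x, y) are draws from the joint distribution P(x, y) = P(x) * P(y | x);
# a linear regression fit should recover roughly slope 2 and intercept 1.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"slope ~ {slope:.2f}, intercept ~ {intercept:.2f}")
```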
1. (a) Sketch f(x) = √x - 4 and state its domain and range.
To find the domain, note that the square root requires x ≥ 0, so the domain is x ≥ 0. To find the range we consider the intercepts, the domain, and the shape of the curve. The x-intercept is 16 (since √16 - 4 = 0), the y-intercept is -4, and the graph of √x is moved downwards by 4 units. Therefore, the range of f(x) is y ≥ -4.
(b) Solve y = f(x) for x, and use this to find the inverse function of f(x); state its domain and range.
Writing y = √x - 4 and solving for x gives √x = y + 4, so x = (y + 4)^2. So, the inverse function of f(x) is f⁻¹(x) = (x + 4)^2, defined for x ≥ -4. By the definition of the inverse function, we know that the domain of the inverse function is the range of f(x), and the range of the inverse function is the domain of f(x). Therefore, the domain of the inverse function is x ≥ -4, and the range is y ≥ 0.
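A short numerical check (using the reconstruction f(x) = √x - 4 above) confirms that the function and its inverse undo each other on their respective domains:

```python
import math

def f(x: float) -> float:
    # f(x) = sqrt(x) - 4, defined for x >= 0
    return math.sqrt(x) - 4

def f_inv(y: float) -> float:
    # inverse function: (y + 4)^2, defined for y >= -4
    return (y + 4) ** 2

for x in (0.0, 4.0, 16.0, 100.0):
    assert math.isclose(f_inv(f(x)), x)
for y in (-4.0, -1.0, 0.0, 6.0):
    assert math.isclose(f(f_inv(y)), y)
print(f(16.0))     # 0.0  -> x-intercept at x = 16
print(f_inv(0.0))  # 16.0
```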
- composite functions of trigonometric and inverse trigonometric functions | http://en.m.wikibooks.org/wiki/User:Mmmooonnnsssttteeerrr/Calculus | 13
96 | Gravity of Earth
The gravity of Earth, denoted g, refers to the acceleration that the Earth imparts to objects on or near its surface. In SI units this acceleration is measured in meters per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). It has an approximate value of 9.81 m/s2, which means that, ignoring the effects of air resistance, the speed of an object falling freely near the Earth's surface will increase by about 9.81 meters (about 32.2 ft) per second every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
There is a direct relationship between gravitational acceleration and the downward weight force experienced by objects on Earth, given by the equation W = mg (weight = mass × gravitational acceleration), a special case of F = ma. However, other factors, such as the rotation of the Earth, also contribute to the net acceleration.
The precise strength of Earth's gravity varies depending on location. The nominal "average" value at the Earth's surface, known as standard gravity is, by definition, 9.80665 m/s2 (about 32.1740 ft/s2). This quantity is denoted variously as gn, ge (though this sometimes means the normal equatorial value on Earth, 9.78033 m/s2), g0, gee, or simply g (which is also used for the variable local value). The symbol g should not be confused with g, the abbreviation for gram (which is not italicized).
Variation in gravity and apparent gravity
A perfect sphere of spherically uniform density (density varies solely with distance from centre) would produce a gravitational field of uniform magnitude at all points on its surface, always pointing directly towards the sphere's centre. However, the Earth deviates slightly from this ideal, and there are consequently slight deviations in both the magnitude and direction of gravity across its surface. Furthermore, the net force exerted on an object due to the Earth, called "effective gravity" or "apparent gravity", varies due to the presence of other factors, such as inertial response to the Earth's rotation. A scale or plumb bob measures only this effective gravity.
Apparent gravity on the earth's surface in metres per second squared varies by around 0.6%, from about 9.776 near the equator or at high elevation to 9.832 at the poles.
The surface of the Earth is rotating, so it is not an inertial frame of reference. At latitudes nearer the Equator, the outward centrifugal force produced by Earth's rotation is larger than at polar latitudes. This counteracts the Earth's gravity to a small degree – up to a maximum of 0.3% at the Equator – and reduces the apparent downward acceleration of falling objects.
The second major reason for the difference in gravity at different latitudes is that the Earth's equatorial bulge (itself also caused by inertia) causes objects at the Equator to be farther from the planet's centre than objects at the poles. Because the force due to gravitational attraction between two bodies (the Earth and the object being weighed) varies inversely with the square of the distance between them, an object at the Equator experiences a weaker gravitational pull than an object at the poles.
In combination, the equatorial bulge and the effects of the Earth's inertia mean that sea-level gravitational acceleration increases from about 9.780 m·s−2 at the Equator to about 9.832 m·s−2 at the poles, so an object will weigh about 0.5% more at the poles than at the Equator.
The same two factors influence the direction of the effective gravity. Anywhere on Earth away from the Equator or poles, effective gravity points not exactly toward the centre of the Earth, but rather perpendicular to the surface of the geoid, which, due to the flattened shape of the Earth, is somewhat toward the opposite pole. About half of the deflection is due to inertia, and half because the extra mass around the Equator causes a change in the direction of the true gravitational force relative to what it would be on a spherical Earth.
Gravity decreases with altitude as one rises above the earth's surface because greater altitude means greater distance from the Earth's centre. All other things being equal, an increase in altitude from sea level to 30 000 ft (9144 metres) causes a weight decrease of about 0.29%. (An additional factor affecting apparent weight is the decrease in air density at altitude, which lessens an object's buoyancy. This would increase a person's apparent weight at an altitude of 30 000 ft by about 0.08%)
It is a common misconception that astronauts in orbit are weightless because they have flown high enough to "escape" the Earth's gravity. In fact, at an altitude of 400 kilometres (250 miles), equivalent to a typical orbit of the Space Shuttle, gravity is still nearly 90% as strong as at the Earth's surface, and weightlessness actually occurs because orbiting objects are in free-fall.
The effect of ground elevation depends on the density of the ground (see "Slab correction" below). A person flying at 30 000 ft above sea level over mountains will feel more gravity than someone at the same elevation but over the sea. However, a person standing on the earth's surface feels less gravity when the elevation is higher.
The following formula approximates the Earth's gravity variation with altitude: gh = g0 (re / (re + h))², where:
- gh is the gravitational acceleration at height h above sea level.
- re is the Earth's mean radius.
- g0 is the standard gravitational acceleration.
This formula treats the Earth as a perfect sphere with a radially symmetric distribution of mass; a more accurate mathematical treatment is discussed below.
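As a quick illustration, here is a minimal Python sketch of that approximation, gh = g0 (re / (re + h))². The constants are assumed nominal values (standard gravity and the mean Earth radius); they are not taken from the text.

```python
# Sketch: free-air variation of g with altitude, using the spherical-Earth
# approximation g_h = g0 * (r_e / (r_e + h))**2.
# The constants below are assumed nominal values, not values from the text.

G0 = 9.80665          # standard gravity, m/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def gravity_at_altitude(h_m: float) -> float:
    """Approximate gravitational acceleration at height h_m (metres) above sea level."""
    return G0 * (R_EARTH / (R_EARTH + h_m)) ** 2

if __name__ == "__main__":
    for h in (0, 9_144, 400_000):   # sea level, ~30,000 ft, a typical Shuttle orbit
        print(f"h = {h:>7} m  ->  g ~ {gravity_at_altitude(h):.3f} m/s^2")
```

For h = 400 km this gives roughly 8.7 m/s², i.e. close to 90% of the surface value, consistent with the orbit discussion above.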
An approximate depth dependence of density in the Earth can be obtained by assuming that the mass is spherically symmetric (it depends only on depth, not on latitude or longitude). In such a body, the gravitational acceleration is towards the center. The gravity at a radius r depends only on the mass inside the sphere of radius r; all the contributions from outside cancel out. This is a consequence of the inverse-square law of gravitation. Another consequence is that the gravity is the same as if all the mass were concentrated at the center of the Earth. Thus, the gravitational acceleration at this radius is g(r) = G M(r) / r²,
where G is the gravitational constant and M(r) is the total mass enclosed within radius r. If the Earth had a constant density ρ, the mass would be M(r) = (4/3)πρr³ and the dependence of gravity on depth would be g(r) = (4/3)πGρr, i.e. gravity would fall linearly to zero at the centre.
If the density decreased linearly with increasing radius from a density ρ0 at the centre to ρ1 at the surface, then ρ(r) = ρ0 − (ρ0 − ρ1) r / re, and the dependence would be g(r) = (4/3)πG r [ρ0 - (3/4)(ρ0 - ρ1) r / re].
The actual depth dependence of density and gravity, inferred from seismic travel times (see Adams–Williamson equation), is shown in the graphs below.
Local topography and geology
Local variations in topography (such as the presence of mountains) and geology (such as the density of rocks in the vicinity) cause fluctuations in the Earth's gravitational field, known as gravitational anomalies. Some of these anomalies can be very extensive, resulting in bulges in sea level, and throwing pendulum clocks out of synchronisation.
The study of these anomalies forms the basis of gravitational geophysics. The fluctuations are measured with highly sensitive gravimeters, the effect of topography and other known factors is subtracted, and from the resulting data conclusions are drawn. Such techniques are now used by prospectors to find oil and mineral deposits. Denser rocks (often containing mineral ores) cause higher than normal local gravitational fields on the Earth's surface. Less dense sedimentary rocks cause the opposite.
Other factors
In air, objects experience a supporting buoyancy force which reduces the apparent strength of gravity (as measured by an object's weight). The magnitude of the effect depends on air density (and hence air pressure); see Apparent weight for details.
The gravitational effects of the Moon and the Sun (also the cause of the tides) have a very small effect on the apparent strength of Earth's gravity, depending on their relative positions; typical variations are 2 µm/s2 (0.2 mGal) over the course of a day.
Comparative gravities in various cities around the world
The table below shows the gravitational acceleration in various cities around the world; amongst these cities, it is lowest in Mexico City (9.776 m/s2) and highest in Oslo (Norway) and Helsinki (Finland) (9.825 m/s2).
|Location||Acceleration in m/s2|
|New York City||9.802|
|Rio de Janeiro||9.788|
Mathematical models
Latitude model
If the terrain is at sea level, we can estimate g:
- g(φ) is the acceleration in m·s−2 at latitude φ.
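For reference, the series form usually quoted for this latitude model is the following; the coefficients are the standard published values and are an assumption here, since the text omits the formula itself:

g(φ) ≈ 9.780327 (1 + 0.0053024 sin²φ - 0.0000058 sin²(2φ)) m·s−2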
Helmert's equation may be written equivalently to the version above as either:
The difference between the WGS-84 formula and Helmert's equation is less than 0.68·10−6 m·s−2.
Free air correction
The first correction to be applied to the model is the free air correction (FAC), which accounts for heights above sea level. Gravity decreases with height, at a rate which near the surface of the Earth is such that linear extrapolation would give zero gravity at a height of one half the Earth's radius, i.e. the rate is about 9.8 m·s−2 per 3,200 km.
Using the mass and radius of the Earth (M ≈ 5.97 × 10²⁴ kg, re ≈ 6.37 × 10⁶ m):
The FAC correction factor (Δg) can be derived from the definition of the acceleration due to gravity in terms of G, the Gravitational Constant (see Estimating g from the law of universal gravitation, below): g0 = GM/re² ≈ 9.8 m·s−2.
At a height h above the nominal surface of the earth, gh is given by: gh = GM/(re + h)².
So the FAC for a height h above the nominal earth radius can be expressed: FAC = gh - g0 = GM/(re + h)² - GM/re².
This expression can be readily used for programming or inclusion in a spreadsheet. Collecting terms, simplifying and neglecting small terms (h << re), however, yields the good approximation: FAC ≈ -2GMh/re³ = -2g0h/re.
Using the numerical values above and for a height h in metres: FAC ≈ -3.1 × 10⁻⁶ h m·s−2 (about 0.31 mGal per metre of height).
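For readers who do want to put this in a program or spreadsheet, here is a minimal Python sketch of the exact and linearised FAC. G, M and re are assumed textbook values rather than numbers given in the text.

```python
# Sketch of the free-air correction (FAC) chain described above: exact change in g
# with height versus the linearised approximation. G, M and R_E are assumed
# textbook values, not values specified in the text.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R_E = 6.371e6        # mean Earth radius, m

def fac_exact(h: float) -> float:
    """Exact change g(h) - g(0) for a spherical Earth (negative above the surface)."""
    return G * M / (R_E + h) ** 2 - G * M / R_E ** 2

def fac_linear(h: float) -> float:
    """Linear approximation FAC ~ -2*G*M*h / R_E^3, roughly -3.1e-6 m/s^2 per metre."""
    return -2 * G * M * h / R_E ** 3

for h in (100.0, 1000.0, 9144.0):   # metres
    print(f"h = {h:7.0f} m   exact = {fac_exact(h):.3e}   linear = {fac_linear(h):.3e}")
```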
Grouping the latitude and FAC altitude factors the expression most commonly found in the literature is:
where = acceleration in m·s−2 at latitude and altitude h in metres. Alternatively (with the same units for h) the expression can be grouped as follows:
Slab correction
- Note: The section uses the galileo (symbol: "Gal"), which is a cgs unit for acceleration of 1 centimeter/second2.
For flat terrain above sea level a second term is added for the gravity due to the extra mass; for this purpose the extra mass can be approximated by an infinite horizontal slab, and we get 2πG times the mass per unit area, i.e. 4.2×10−10 m3·s−2·kg−1 (0.042 μGal·kg−1·m2) (the Bouguer correction). For a mean rock density of 2.67 g·cm−3 this gives 1.1×10−6 s−2 (0.11 mGal·m−1). Combined with the free-air correction this means a reduction of gravity at the surface of ca. 2 µm·s−2 (0.20 mGal) for every metre of elevation of the terrain. (The two effects would cancel at a surface rock density of 4/3 times the average density of the whole earth. The density of the whole earth is 5.515 g·cm−3, so standing on a slab of something like iron whose density is over 7.35 g·cm−3 would increase one's weight.)
For the gravity below the surface we have to apply the free-air correction as well as a double Bouguer correction. With the infinite slab model this is because moving the point of observation below the slab changes the gravity due to it to its opposite. Alternatively, we can consider a spherically symmetrical Earth and subtract from the mass of the Earth that of the shell outside the point of observation, because that does not cause gravity inside. This gives the same result.
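As a rough numerical check of the slab figures quoted above, the sketch below evaluates the 2πGρ Bouguer term and the net gradient when it is combined with the free-air correction. The rock density is the value stated in the text; G and the free-air gradient are assumed standard values.

```python
# Rough check of the slab (Bouguer) numbers quoted above, using the infinite-slab
# term 2*pi*G*rho per metre of rock. G and the free-air gradient are assumed
# standard values; the rock density is the 2.67 g/cm^3 quoted in the text.
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
RHO_ROCK = 2670.0        # kg/m^3  (2.67 g/cm^3)
FREE_AIR = 3.086e-6      # assumed free-air gradient, s^-2 (about 0.31 mGal per metre)

bouguer = 2 * math.pi * G * RHO_ROCK      # ~1.1e-6 s^-2  (0.11 mGal per metre of rock)
net = FREE_AIR - bouguer                  # ~2e-6 s^-2    (0.2 mGal per metre of elevation)

print(f"Bouguer term : {bouguer:.2e} s^-2 per metre of rock")
print(f"Net reduction: {net:.2e} s^-2 per metre of elevation")
```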
Estimating g from the law of universal gravitation
From the law of universal gravitation, the force on a body acted upon by Earth's gravity is given by F = G m1 m2 / r²,
where r is the distance between the centre of the Earth and the body (see below), and here we take m1 to be the mass of the Earth and m2 to be the mass of the body.
Additionally, Newton's second law, F = ma, where m is mass and a is acceleration, here tells us that F = m2 g.
Comparing the two formulas it is seen that: g = G m1 / r²
So, to find the acceleration due to gravity at sea level, substitute the values of the gravitational constant, G, the Earth's mass (in kilograms), m1, and the Earth's radius (in metres), r, to obtain the value of g:
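A minimal sketch of that substitution in Python, using assumed standard values for G, m1 and r (the text does not list the numbers explicitly):

```python
# Minimal sketch of the calculation described above: g = G * m1 / r^2,
# with assumed standard values for G, the Earth's mass and its mean radius.

G  = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M1 = 5.972e24    # mass of the Earth, kg
R  = 6.371e6     # mean radius of the Earth, m

g = G * M1 / R ** 2
print(f"g ~ {g:.2f} m/s^2")   # roughly 9.8 m/s^2, close to the measured value
```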
Note that this formula only works because of the mathematical fact that the gravity of a uniform spherical body, as measured on or above its surface, is the same as if all its mass were concentrated at a point at its centre. This is what allows us to use the Earth's radius for r.
The value obtained agrees approximately with the measured value of g. The difference may be attributed to several factors, mentioned above under "Variations":
- The Earth is not homogeneous
- The Earth is not a perfect sphere, and an average value must be used for its radius
- This calculated value of g only includes true gravity. It does not include the reduction of constraint force that we perceive as a reduction of gravity due to the rotation of Earth, and some of gravity being "used up" in providing the centripetal acceleration
There are significant uncertainties in the values of r and m1 as used in this calculation, and the value of G is also rather difficult to measure precisely.
If G, g and r are known then a reverse calculation will give an estimate of the mass of the Earth. This method was used by Henry Cavendish.
Comparative gravities of the Earth, Sun, Moon, and planets
The table below shows comparative gravitational accelerations at the surface of the Sun, the Earth's moon, each of the planets in the Solar System and their major moons, Pluto, and Eris. The "surface" is taken to mean the cloud tops of the gas giants (Jupiter, Saturn, Uranus and Neptune). For the Sun, the surface is taken to mean the photosphere. The values in the table have not been de-rated for the inertia effect of planet rotation (and cloud-top wind speeds for the gas giants) and therefore, generally speaking, are similar to the actual gravity that would be experienced near the poles.
|Eris||0.0814 (approx.)||0.8 (approx.)|
See also
- Earth's magnetic field
- Gravity anomaly, Bouguer anomaly
- Gravitation of the Moon
- Gravitational acceleration
- Gravity Field and Steady-State Ocean Circulation Explorer
- Gravity Recovery and Climate Experiment
- Newton's law of universal gravitation
- Bureau International des Poids et Mesures (2006). "Chapter 5". The International System of Units (SI). 8th ed. Retrieved 2009-11-25. "Unit names are normally printed in roman (upright) type ... Symbols for quantities are generally single letters set in an italic font, although they may be qualified by further information in subscripts or superscripts or in brackets."
- "SI Unit rules and style conventions". National Institute For Standards and Technology (USA). September 2004. Retrieved 2009-11-25. "Variables and quantity symbols are in italic type. Unit symbols are in roman type."
- Boynton, Richard (2001). "Precise Measurement of Mass". Sawe Paper No. 3147. Arlington, Texas: S.A.W.E., Inc. Retrieved 2007-01-21.
- "Curious About Astronomy?", Cornell University, retrieved June 2007
- "I feel 'lighter' when up a mountain but am I?", National Physical Laboratory FAQ
- "The G's in the Machine", NASA, see "Editor's note #2"
- Tipler, Paul A. (1999). Physics for scientists and engineers. (4th ed. ed.). New York: W.H. Freeman/Worth Publishers. pp. 336–337. ISBN 9781572594913.
- Dziewonski, A.M.; Anderson, D.L.. "Preliminary reference Earth model". Physics of the Earth and Planetary Interiors 25: 297–356.
- Gravitational Fields Widget as of Oct 25th, 2012 – WolframAlpha
- International Gravity formula
- value of standard gravity. This value excludes the adjustment for centrifugal force due to Earth's rotation and is therefore greater than the 9.80665 m/s2
Slide 1 : 1 / 30 : Surface Modeling
Slide 2 : 2 / 30 : Surface Modeling : Introduction
Slide 3 : 3 / 30 : Implicit Functions
Slide 4 : 4 / 30 : Polygon Surfaces
Slide 5 : 5 / 30 : Polygon Tables
Objects : set of vertices and associated attributes
Geometry : stored as three tables : vertex table, edge table, polygon table
Edge table ?
Tables also make it easy to store additional information with each vertex, edge, or polygon
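A minimal sketch of such a three-table layout; the geometry here is an illustrative pair of triangles, not data from the slides.

```python
# Minimal sketch of the vertex / edge / polygon tables described above.
# The data is an illustrative pair of triangles sharing one edge.

vertex_table = {
    "V1": (0.0, 0.0, 0.0),
    "V2": (1.0, 0.0, 0.0),
    "V3": (1.0, 1.0, 0.0),
    "V4": (0.0, 1.0, 0.0),
}

edge_table = {
    "E1": ("V1", "V2"),
    "E2": ("V2", "V3"),
    "E3": ("V3", "V1"),
    "E4": ("V3", "V4"),
    "E5": ("V4", "V1"),
}

# Each polygon lists its edges; extra attributes (colour, a precomputed plane
# equation, ...) can be stored alongside, as the slides suggest.
polygon_table = {
    "P1": {"edges": ("E1", "E2", "E3")},
    "P2": {"edges": ("E3", "E4", "E5")},
}
```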
Slide 6 : 6 / 30 : Example: Plane Equations
Often in the graphics pipeline, we need to know the orientation of an object. It would be useful to store the plane equation with the polygons so that this information doesn't have to be computed each time.
The plane equation takes the form:
P(M) = Ax + By + Cz + D = 0
Using any three points from a polygon, we can solve for the coefficients. Then we can use the equation to determine whether a point is on the inside or outside of the plane formed by this polygon:
Ax + By + Cz + D < 0 ==> inside
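A small sketch of this computation: the plane coefficients come from the cross product of two edge vectors, and the sign of Ax + By + Cz + D classifies a point. Function names are illustrative, not part of any particular API.

```python
# Sketch: derive (A, B, C, D) from three polygon vertices via a cross product,
# then classify a point by the sign of A*x + B*y + C*z + D.

def plane_from_points(p0, p1, p2):
    """Return (A, B, C, D) for the plane through three non-collinear points."""
    ux, uy, uz = (p1[0] - p0[0], p1[1] - p0[1], p1[2] - p0[2])
    vx, vy, vz = (p2[0] - p0[0], p2[1] - p0[1], p2[2] - p0[2])
    # Normal = u x v
    a = uy * vz - uz * vy
    b = uz * vx - ux * vz
    c = ux * vy - uy * vx
    d = -(a * p0[0] + b * p0[1] + c * p0[2])
    return a, b, c, d

def side_of_plane(plane, point):
    """Negative means 'inside' under the convention used above; positive means outside."""
    a, b, c, d = plane
    return a * point[0] + b * point[1] + c * point[2] + d

plane = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))   # the z = 0 plane
print(side_of_plane(plane, (0.2, 0.3, -1.0)))                # negative: "inside"
```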
Slide 7 : 7 / 30 : Polygon Meshes
The polygons we can represent can be arbitrarily large, both in terms of the number of vertices and the area. It is generally convenient and more appropriate to use a polygon mesh rather than a single mammoth polygon.
For example, you can simplify the process of rendering polygons by breaking all polygons into triangles. Then your triangle renderer from project two would be powerful enough to render every polygon. Triangle renderers can also be implemented in hardware, making it advantageous to break the world down into triangles.
Another example where smaller polygons are better is the Gouraud lighting model. Gouraud computes lighting at vertices and interpolates the values in the interiors of the polygons. By breaking larger surfaces into meshes of smaller polygons, the lighting approximation is improved.
Whenever you can, use Triangle strip, Triangle Fan, Quad Strip
Triangle mesh produces n-2 triangles from a polygon of n vertices.
A triangle list will produce only n/3 triangles
Quadrilateral mesh produces (n-1) by (m-1) quadrilaterals from an n x m array of vertices.
Not co-planar polygon
Specifying polygons with more than three vertices could result in sets of points which are not co-planar! There are two ways to solve this problem:
Slide 8 : 8 / 30 : From Curves to Surfaces
Slide 9 : 9 / 30 : Beziér Patches
If one parameter is held at a constant value, then the above will represent a curve. Thus P(u,a) is a curve on the surface with the parameter v held at the constant value a.
In a bicubic surface patch, cubic polynomials are used to represent the edge curves P(u,0), P(u,1), P(0,v) and P(1,v) as shown below. The surface is then generated by sweeping all points on the boundary curve P(u,0) (say) through cubic trajectories, defined using the parameter v, to the boundary curve P(u,1). In this process the role of the parameters u and v can be reversed.
Slide 10 : 10 / 30 : Beziér Patches
The representation of the bicubic surface patch can be
illustrated by considering the Bezier Surface Patch.
The edge P(0,v) of a Bezier patch is defined by giving four control points P00, P01, P02 and P03. Similarly the opposite edge P(1,v) can be represented by a Bezier curve with four control points. The surface patch is generated by sweeping the curve P(0,v) through a cubic trajectory in the parameter u to P(1,v). To define this trajectory we need four control points, hence the Bezier surface patch requires a mesh of 4*4 control points as illustrated above.
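As an illustration of how such a patch can be evaluated, here is a minimal sketch using cubic Bernstein polynomials over a 4×4 control mesh. The control points are placeholders, not the ones in the figure.

```python
# Sketch: evaluate a bicubic Bezier patch P(u, v) from a 4x4 control mesh
# using cubic Bernstein basis polynomials.
from math import comb

def bernstein3(i: int, t: float) -> float:
    """Cubic Bernstein basis polynomial B_{i,3}(t), i = 0..3."""
    return comb(3, i) * (t ** i) * ((1 - t) ** (3 - i))

def bezier_patch(control, u: float, v: float):
    """control is a 4x4 grid of (x, y, z) points; returns the surface point P(u, v)."""
    x = y = z = 0.0
    for i in range(4):
        for j in range(4):
            w = bernstein3(i, u) * bernstein3(j, v)
            px, py, pz = control[i][j]
            x += w * px
            y += w * py
            z += w * pz
    return x, y, z

# A flat placeholder patch: control[i][j] = (i, j, 0)
control = [[(float(i), float(j), 0.0) for j in range(4)] for i in range(4)]
print(bezier_patch(control, 0.5, 0.5))   # centre of the patch
```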
Slide 11 : 11 / 30 : Example: The Utah Teapot
Single shaded patch - Wireframe of the control points - Patch edges
Slide 12 : 12 / 30 : Subdivision of Beziér Surfaces
For more, See : Rendering
Cubic Bezier Patches
by Chris Bentley
Slide 13 : 13 / 30 : Deforming a Patch
The net of control points forms a polyhedron in cartesian space, and the positions of the points in this space control the shape of the surface.
The effect of lifting one of the control points is shown on the right.
Slide 14 : 14 / 30 : Patch Representation vs. Polygon Mesh
Slide 15 : 15 / 30 : Constructive Solid-Geometry Methods (CSG)
The method of Constructive Solid Geometry arose from the observation that many industrial components derive from combinations of various simple geometric shapes such as spheres, cones, cylinders and rectangular solids. In fact the whole design process often started with a simple block which might have simple shapes cut out of it, perhaps other shapes added on etc. in producing the final design. For example consider the simple solid below:
This simple component could be produced by gluing two rectangular blocks together and then drilling the hole. Or in CSG terms the union of two blocks would be taken and then the difference of the resultant solid and a cylinder would be taken. In carrying out these operations the basic primitive objects, the blocks and the cylinder, would have to be scaled to the correct size, possibly oriented and then placed in the correct relative positions to each other before carrying out the logical operations.
The Boolean Set Operators used are:
Note that the above definitions are not rigorous and have to be refined to define the Regularised Boolean Set Operations to avoid impossible solids being generated.
A CSG model is then held as a tree structure whose terminal nodes are primitive objects together with an appropriate transformation and whose other nodes are Boolean Set Operations. This is illustrated below for the object above which is constructed using cube and cylinder primitives.
CSG methods are useful both as a method of representation and as a user interface technique. A user can be supplied with a set of primitive solids and can combine them interactively using the boolean set operators to produce more complex objects. Editing a CSG representation is also easy, for example changing the diameter of the hole in the example above is merely a case of changing the diameter of the cylinder.
However it is slow to produce a rendered image of a model from a CSG tree. This is because most rendering pipelines work on B-reps and the CSG representation has to be converted to this form before rendering. Hence some solid modellers use a B-rep but the user interface is based on the CSG representation.
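A minimal sketch of a CSG tree with simple point-membership classification: primitives sit at the leaves, and interior nodes apply the Boolean set operations as plain point tests (ignoring the regularisation subtleties mentioned above). Class names and the example solid are illustrative.

```python
# Sketch of a CSG tree: leaves are primitives with a point-membership test,
# interior nodes are Boolean set operations evaluated as point tests.

class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c, self.r = (cx, cy, cz), r
    def contains(self, p):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, self.c)) <= self.r ** 2

class Box:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def contains(self, p):
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

class CSG:
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        if self.op == "union":        return a or b
        if self.op == "intersection": return a and b
        if self.op == "difference":   return a and not b
        raise ValueError(self.op)

# A block with a spherical bite taken out of one corner:
solid = CSG("difference", Box((0, 0, 0), (2, 1, 1)), Sphere(2, 1, 1, 0.5))
print(solid.contains((1.0, 0.5, 0.5)), solid.contains((1.9, 0.9, 0.9)))  # True False
```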
Slide 16 : 16 / 30 : Patch Representation vs. Polygon Mesh
Example of torus designed using a rotational sweep. The periodic spline cross section is rotated about an axis of rotation specified in the plane of the cross section.
We perform a sweep by moving the shape along a path. At intervals along this path, we replicate the shape and draw a set of connecting lines in the direction of the sweep to obtain the wireframe representation.
Slide 17 : 17 / 30 : A CSG Tree Representation
Slide 18 : 18 / 30 : Implementation with ray casting
Difference (Obj2 - Obj1)
Slide 19 : 19 / 30 : A CSG Tree Representation
Slide 20 : 20 / 30 : Example Modeling Package: Alias Studio
Slide 21 : 21 / 30 : Volume Modeling
Slide 22 : 22 / 30 : Marching Cubes Algorithm
Extracting a surface from voxel data:
Slide 23 : 23 / 30 : Marching Cube Cases
Slide 24 : 24 / 30 : Extracted Polygonal Mesh
Slide 25 : 25 / 30 : Metaballs
An Overview of Metaballs/Blobby Objects
Slide 26 : 26 / 30 : Procedural Techniques: Fractals
fractal subdivision algorithm for generating mountains
Slide 27 : 27 / 30 : Procedural Modeling...
And have a look again at the 2002 first CG assignment
or "Simulating plant growth" by Marco Grubert
Slide 28 : 28 / 30 : Physically Based Modelling Methods
Physical modelling is a way of describing the behavior of an object in terms of the interactions of external and internal forces.
Simple methods for describing motion usually resort to having the object follow a pre-determined trajectory.
Physical modelling, on the other hand, is about dynamics.
Physically based modelling methods will tell show us how a table-cloth will drape over a table or how a curtain will fall from a window.
A common method for approximating such nonrigid objects is as a network of points with flexible connections between them.
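A minimal sketch of the force on one such flexible connection, modelled as a damped spring (Hooke's law plus a damping term along the spring direction). The constants and function names are illustrative.

```python
# Sketch of a damped spring force for a mass-spring network of points.
import math

def spring_force(pa, pb, va, vb, rest_length, k, damping):
    """Force on particle a from the spring a-b (particle b receives the negative)."""
    dx = [b - a for a, b in zip(pa, pb)]
    dist = math.sqrt(sum(d * d for d in dx)) or 1e-9
    direction = [d / dist for d in dx]
    # Hooke's law: proportional to the stretch beyond the rest length
    stretch = dist - rest_length
    # Damping: proportional to the relative velocity along the spring direction
    rel_vel = sum((vb_i - va_i) * n for va_i, vb_i, n in zip(va, vb, direction))
    magnitude = k * stretch + damping * rel_vel
    return [magnitude * n for n in direction]

# A spring of rest length 1 stretched to length 2 pulls particle a toward b:
print(spring_force((0, 0, 0), (2, 0, 0), (0, 0, 0), (0, 0, 0), 1.0, 10.0, 0.1))
```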
Slide 29 : 29 / 30 : Spring Networks
Slide 30 : 30 / 30 : Particle Systems
Collection of particles - A particle system is composed of one or more individual particles.
Stochastically defined attributes : some type of random element
Position, Velocity (speed and direction), Color, Lifetime, Age, Shape, Size, Transparency
Particle Life Cycle
Generation - Particles in the system are generated randomly within a predetermined location of the fuzzy object
Particle Dynamics - The attributes of each of the particles may vary over time. (particle position is going to be dependent on previous particle position and velocity as well as time
Extinction - Each particle has two attributes dealing with length of existence: age and lifetime.
Extract from : Particle Systems by Allen Martin
The term particle system is loosely defined in computer graphics. It has been used to describe modeling techniques, rendering techniques, and even types of animation. In fact, the definition of a particle system seems to depend on the application that it is being used for. The criteria that hold true for all particle systems are the following:
Collection of particles - A particle system is composed of one or more individual particles. Each of these particles has attributes that directly or indirectly effect the behavior of the particle or ultimately how and where the particle is rendered. Often, particles are graphical primitives such as points or lines, but they are not limited to this. Particle systems have also been used to represent complex group dynamics such as flocking birds.
Stochastically defined attributes - The other common characteristic of all particle systems is the introduction of some type of random element. This random element can be used to control the particle attributes such as position, velocity and color. Usually the random element is controlled by some type of predefined stochastic limits, such as bounds, variance, or type of distribution.
Each object in Reeves' particle system had the following attributes:
Velocity (speed and direction)
Particle Life Cycle
Each particle goes through three distinct phases in the particle system: generation, dynamics, and death. These phases are described in more detail here:
Generation - Particles in the system are generated randomly within a predetermined location of the fuzzy object. This space is termed the generation shape of the fuzzy object, and this generation shape may change over time. Each of the above mentioned attribute is given an initial value. These initial values may be fixed or may be determined by a stochastic process.
Particle Dynamics - The attributes of each of the particles may vary over time. For example, the color of a particle in an explosion may get darker as it gets further from the center of the explosion, indicating that it is cooling off. In general, each of the particle attributes can be specified by a parametric equation with time as the parameter. Particle attributes can be functions of both time and other particle attributes. For example, particle position is going to be dependent on previous particle position and velocity as well as time.
Extinction - Each particle has two attributes dealing with length of existence: age and lifetime. Age is the time that the particle has been alive (measured in frames), this value is always initialized to 0 when the particle is created. Lifetime is the maximum amount of time that the particle can live (measured in frames). When the particle age matches it's lifetime it is destroyed. In addition there may be other criteria for terminating a particle prematurely:
Running out of bounds - If a particle moves out of the viewing area and will not reenter it, then there is no reason to keep the particle active.
Hitting the ground - It may be assumed that particles that run into the ground burn out and can no longer be seen.
Some attribute reaches a threshold - For example, if the particle color is so close to black that it will not contribute any color to the final image, then it can be safely destroyed.
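A minimal sketch of this generation / dynamics / extinction cycle; the attribute names follow the list above, while the stochastic ranges and time step are illustrative.

```python
# Sketch of a particle system life cycle: generation, dynamics, extinction.
import random

def make_particle():
    return {
        "position": [random.uniform(-1, 1), 0.0, random.uniform(-1, 1)],
        "velocity": [random.uniform(-0.1, 0.1), random.uniform(1.0, 2.0), 0.0],
        "color":    (1.0, random.uniform(0.3, 0.8), 0.0),
        "age":      0,
        "lifetime": random.randint(30, 90),   # frames
    }

def update(particles, dt=1.0 / 30.0, gravity=-9.8):
    particles.append(make_particle())                      # generation
    for p in particles:                                    # dynamics
        p["velocity"][1] += gravity * dt
        p["position"] = [x + v * dt for x, v in zip(p["position"], p["velocity"])]
        p["age"] += 1
    # extinction: too old, or "hitting the ground"
    return [p for p in particles if p["age"] < p["lifetime"] and p["position"][1] > 0.0]

particles = []
for frame in range(100):
    particles = update(particles)
print(len(particles), "particles alive after 100 frames")
```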
When rendering this system of thousands of particles, some assumptions have to be made to simplify the process. First, each particle is rendered to a small graphical primitive (blob). Particles that map to the same pixels in the image are additive - the color of a pixel is simply the sum of the color values of all the particles that map to it. Because of this assumption, no hidden surface algorithms are needed to render the image; the particles are simply rendered in order. Effects like temporal anti-aliasing (motion blur) are made simple by the particle system process. The position and velocity are known for each particle. By rendering a particle as a streak, motion blur can be achieved.
Most substances expand when heated and contract when cooled. The exception is water. The maximum density of water occurs at 4°C. This explains why a lake freezes at the surface, and not from the bottom up. If water at 0°C is heated, its volume decreases until it reaches 4°C. Above 4°C, water behaves normally and expands in volume as it is heated. Water expands as it is cooled from 4°C to 0°C and expands even more as it freezes. That is why ice cubes float in water and pipes break when the water inside of them freezes.
The change in length of almost all solids when heated is directly proportional to the change in temperature and to the original length. A solid expands when heated and contracts when cooled: the length of a material decreases as the temperature decreases and increases as the temperature increases. So a rod that is 2 m long expands twice as much as a rod that is 1 m long for the same ten-degree increase in temperature. Holes in materials also expand or contract with the material: if a material gets larger, a hole in it also gets larger.
ΔL = α L ΔT
where L is the original length of the material,
α is the coefficient of linear expansion,
and ΔT is the temperature change in °C.
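A small sketch of applying ΔL = αLΔT; the expansion coefficient used is an assumed textbook value for aluminium, since the text does not give one.

```python
# Sketch of linear thermal expansion: delta_L = alpha * L * delta_T.
ALPHA_ALUMINIUM = 24e-6     # 1/°C (assumed approximate value for aluminium)

def length_change(length_m: float, delta_t_c: float, alpha: float = ALPHA_ALUMINIUM) -> float:
    """Change in length (m) of a bar of original length length_m for a temperature change delta_t_c."""
    return alpha * length_m * delta_t_c

# A 2 m rod expands twice as much as a 1 m rod for the same 10 °C rise:
print(length_change(1.0, 10.0), length_change(2.0, 10.0))
```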
Physical quantities such as pressure, temperature, volume, and the amount of a substance describe the conditions in which a particular material exists. They describe the state of the material and are referred to as state variables. These state variables are interrelated; one cannot be changed without changing the others: P0V0/T0 = PV/T, where V0, P0, and T0 represent the initial state of the material and V, P, and T represent the final state of the material.
In physics, we use an ideal gas to repesent the material and thus simplifying the equation of state.
Ideal Gas Law The volume of a gas is proportional to the number of moles of the gas, n. The volume varies inversely with the pressure. The pressure is proportional to the absolute temperature of the gas. Combining these relationships yields the following equation of state for an ideal gas,
PV = nRT
Where T is measured in Kelvin and R is the ideal gas constant
Ideal Gas Constant In SI units, R = 8.314 J/ mol K
Ideal Gas Real gases do not follow the ideal gas law exactly. An ideal gas is one for which the ideal gas law holds precisely for all pressures and temperatures. Gas behavior approximates the ideal gas model at very low pressures, when the gas molecules are far apart, and at temperatures that are not close to the temperature at which the gas liquefies.
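A small sketch of using PV = nRT; the sample in the example (1 mol in 22.4 L at 273.15 K) is assumed for illustration.

```python
# Sketch of the ideal gas law PV = nRT, solved for pressure.
R = 8.314  # ideal gas constant, J/(mol·K), matching the value quoted above

def ideal_gas_pressure(n_mol: float, volume_m3: float, temperature_k: float) -> float:
    """Pressure in pascals from the ideal gas law PV = nRT."""
    return n_mol * R * temperature_k / volume_m3

print(f"{ideal_gas_pressure(1.0, 0.0224, 273.15):.0f} Pa")   # about 1.0e5 Pa, i.e. roughly 1 atm
```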
Kinetic theory of gases
particles in a hot body have more kinetic energy than those in a cold body; as temperature increases, kinetic energy increases. If the temperature of the gas rises, the gas molecules move at greater speeds. If the volume remains the same, the hotter molecules would be expected to hit the walls of the container more frequently than the cooler ones, resulting in a rise in pressure.
An advanced look at the kinetic theory: The assumptions describing an ideal gas make up the postulates of the kinetic theory:
1. An ideal gas is made up of a large number of gas molecules N each with mass m moving in random directions with a variety of speeds.
2. The gas molecules are separated from each other by an average distance that is much greater than the molecule's diameter.
3. The molecules obey laws of mechanics, interacting only when they collide.
4. Collisions between the walls of the container or with other gas molecules are assumed to be perfectly elastic.
Entropy disorder; the higher the temperature, the more disorder (or entropy) a substance has
Temperature a measure of the average kinetic energy of an object's molecules; temperature measures how hot or how cold an object is with respect to a standard
The most common scale is the Celsius (or Centigrade) scale, though the Fahrenheit scale is still widely used in the United States. The lowest possible temperature is absolute zero
(-273.15 °C), or 0 K.
Triple Point The triple point of water serves as a point of reference. It is only at this point (273.16 K) that the three phases of water (gas, liquid, and solid) exist together at a unique value of temperature and pressure.
Temperature is a property of a system that determines whether the system will be in thermal equilibrium with other systems.
Molecular Interpretation of Temperature The concept that matter is made up of atoms in continual random motion is called the kinetic theory. We assume that we are dealing with an ideal gas. In an ideal gas, there are a large number of molecules moving in random directions at different speeds, the gas molecules are far apart, the molecules interact with one another only when they collide, and collisions between gas molecules and the wall of the container are assumed to be perfectly elastic. The average translational kinetic energy of molecules in a gas is directly proportional to the absolute temperature. If the average translational kinetic energy is doubled, the absolute temperature is doubled.
KEav = (1/2) m vav² = (3/2) kT
where T is the temperature in Kelvin and k is Boltzmann's constant
k = 1.38 x 10-23 J/K
The relationship between Boltzmann's constant (k), Avogadro's number (N), and the gas constant (R) is given by:
k = R/N
An advanced look at the relationship between pressure and the kinetic theory: The pressure exerted by an ideal gas on its container is due to the force exerted on the walls of the container by the collisions of the molecules with the walls of area A. The collisions cause a change in momentum of the gas molecules. These assumptions can be used to derive an expression between pressure and the average kinetic energy of the gas molecules. The pressure is directly proportional to the square of the average velocity. Since the average kinetic energy is directly proportional to the temperature, pressure is also directly proportional to the temperature (for a fixed volume).
PV = 2/3 N (1/2 mvav2)
The higher the temperature, according to kinetic theory, the faster the molecules are moving, on average.
rms speed The square root of the average of the squared speed in the kinetic energy expression is called the rms speed.
vrms = (3RT/M)^(1/2)
where R is the ideal gas constant, T is temperature in Kelvin, and M is the molecular mass in units of kg/mol
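A small sketch of the rms-speed formula; the molar mass is an assumed example value for nitrogen, and R matches the ideal gas constant quoted earlier.

```python
# Sketch of v_rms = sqrt(3RT/M).
import math

R = 8.314            # J/(mol·K)
M_N2 = 0.028         # kg/mol (nitrogen, assumed example)

def v_rms(temperature_k: float, molar_mass_kg: float = M_N2) -> float:
    return math.sqrt(3 * R * temperature_k / molar_mass_kg)

print(f"{v_rms(300.0):.0f} m/s")   # roughly 500 m/s for nitrogen at room temperature
```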
Heat (symbol is Q; SI unit is the joule)
amount of thermal energy transferred from one object to another due to temperature differences (we will learn in thermodynamics why heat flows from a hot to a cold body).
Temp and Internal Energy
T = Temperature – related to the average KE of the molecules in a substance
U = Internal Energy (Thermal Energy) – the total KE of the molecules.
= # molecules * avg KE
= N * KEavg
which simplifies to U = 3/2 n R T
Mechanical Equivalent of Heat
James Joule described the reversible conversion of heat energy and work. The calorie is defined as the amount of energy needed to raise the temperature of one gram of water at 14.5 °C by one degree Celsius. The SI unit for work and energy is the joule.
1 calorie = 4.186 J
1000 calories is equal to 1 food Calorie
There are three basic ways in which heat is transferred. In fluids, heat is often transferred by convection, in which the motion of the fluid itself carries heat from one place to another. Another way to transfer heat is by conduction, which does not involve any motion of a substance, but rather is a transfer of energy within a substance (or between substances in contact). The third way to transfer energy is by radiation, which involves absorbing or giving off electromagnetic waves.
Heat transfer in fluids generally takes place via convection. Convection currents are set up in the fluid because the hotter part of the fluid is not as dense as the cooler part, so there is an upward buoyant force on the hotter fluid, making it rise while the cooler, denser, fluid sinks. Birds and gliders make use of upward convection currents to rise, and we also rely on convection to remove ground-level pollution.
Forced convection, where the fluid does not flow of its own accord but is pushed, is often used for heating (e.g., forced-air furnaces) or cooling (e.g., fans, automobile cooling systems).
When heat is transferred via conduction, the substance itself does not flow; rather, heat is transferred internally, by vibrations of atoms and molecules. Electrons can also carry heat, which is the reason metals are generally very good conductors of heat. Metals have many free electrons, which move around randomly; these can transfer heat from one part of the metal to another.
The equation governing heat conduction along something of length (or thickness) L and cross-sectional area A, in a time t is:
Q/t = k A ∆T / L
(Q/t) = H = rate of heat loss or gain
k is the thermal conductivity, a constant depending only on the material, and having units of J / (s m °C).
Copper, a good thermal conductor, which is why some pots and pans have copper bases, has a thermal conductivity of 390 J / (s m °C). Styrofoam, on the other hand, a good insulator, has a thermal conductivity of 0.01 J / (s m °C).
Consider what happens when a layer of ice builds up in a freezer. When this happens, the freezer is much less efficient at keeping food frozen. Under normal operation, a freezer keeps food frozen by transferring heat through the aluminum walls of the freezer. The inside of the freezer is kept at -10 °C; this temperature is maintained by having the other side of the aluminum at a temperature of -25 °C.
The aluminum is 1.5 mm thick. Let's take the thermal conductivity of aluminum to be 240 J / (s m °C). With a temperature difference of 15°, the amount of heat conducted through the aluminum per second per square meter can be calculated from the conductivity equation:
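Working that out with the values just given (a minimal sketch):

```python
# Heat conducted per second through one square metre of the freezer's aluminium wall,
# using the values given above: k = 240 J/(s·m·°C), delta_T = 15 °C, thickness 1.5 mm.

k = 240.0            # thermal conductivity of aluminium, J/(s·m·°C)
area = 1.0           # m^2
delta_t = 15.0       # °C (-10 °C inside versus -25 °C on the other side)
thickness = 1.5e-3   # m

heat_rate = k * area * delta_t / thickness   # Q/t, in J/s
print(f"Q/t ~ {heat_rate:.2e} J/s per square metre")   # about 2.4e6 J/s
```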
The third way to transfer heat, in addition to convection and conduction, is by radiation, in which energy is transferred in the form of electromagnetic waves. We'll talk about electromagnetic waves in a lot more detail later in the year; an electromagnetic wave is basically an oscillating electric and magnetic field traveling through space at the speed of light. Don't worry if that definition goes over your head, because you're already familiar with many kinds of electromagnetic waves, such as radio waves, microwaves, the light we see, X-rays, and ultraviolet rays. The only difference between the different kinds is the frequency and wavelength of the wave.
Note that the radiation we're talking about here, in regard to heat transfer, is not the same thing as the dangerous radiation associated with nuclear bombs, etc. That radiation comes in the form of very high energy electromagnetic waves, as well as nuclear particles. The radiation associated with heat transfer is entirely electromagnetic waves, with a relatively low (and therefore relatively safe) energy.
Everything around us takes in energy from radiation, and gives it off in the form of radiation. When everything is at the same temperature, the amount of energy received is equal to the amount given off. Because there is no net change in energy, no temperature changes occur. When things are at different temperatures, however, the hotter objects give off more energy in the form of radiation than they take in; the reverse is true for the colder objects.
We've looked at the three types of heat transfer. Conduction and convection rely on temperature differences; radiation does, too, but with radiation the absolute temperature is important. In some cases one method of heat transfer may dominate over the other two, but often heat transfer occurs via two, or even all three, processes simultaneously.
A stove and oven are perfect examples of the different kinds of heat transfer. If you boil water in a pot on the stove, heat is conducted from the hot burner through the base of the pot to the water. Heat can also be conducted along the handle of the pot, which is why you need to be careful picking the pot up, and why most pots don't have metal handles. In the water in the pot, convection currents are set up, helping to heat the water uniformly. If you cook something in the oven, on the other hand, heat is transferred from the glowing elements in the oven to the food via radiation
Thermodynamics the study of the properties of thermal energy
Each of the laws of thermodynamics is associated with a variable. The zeroth law is associated with temperature, T; the first law is associated with internal energy, U; and the second law is associated with entropy, S.
System any object or set of objects we are considering. A closed system is one in which mass is constant. An open system does not have constant mass. A closed system into or out of which no energy flows is said to be isolated.
Environment everything else
If two objects at different temperatures are placed in thermal contact (so that heat energy can transfer from one to the other), the two objects will eventually reach the same temperature, that is, they come to thermal equilibrium.
Zeroth Law of Thermodynamics If two systems are in thermal equilibirum with a third system, they are in thermal equilibrium with each other.
Internal or Thermal Energy (symbol is U; unit is J)
sum of all the energy an object possesses; it cannot be measured; only changes in internal energy can be determined
The kinetic theory can be used to clearly distinguish between temperature and thermal energy. Temperature is a measure of the average kinetic energy of individual molecules. Thermal energy refers to the total energy of all the molecules in an object.
Internal Energy of an Ideal Gas The internal energy of an ideal gas only depends upon temperature and the number of moles of the gas (n).
U = 3/2 nRT
where R is the ideal gas constant, R = 8.314 J/mol K
Characteristics of an Ideal Gas:
1. An ideal gas consists of a large number of gas molecules occupying a negligible volume.
2. Ideal gas molecules have random motion.
3. Ideal gas molecules undergo elastic collisions with the walls of the container and with other gas molecules.
4. The temperature of an ideal gas is proportional to the kinetic energy of the gas molecules.
1st law of thermodynamics The total increase in the internal energy of a system is equal to the sum of the work done on the system or by the system and the heat added to or removed from the system. It is a restatement of the law of conservation of energy. Changes in the internal energy of a system are caused by heat and work.
ΔU = Q + W
where Q is the heat added to the system and W is the net work done on the system. In other words, heat added is positive; heat lost is negative. Work done on the system (an example would be compression of a gas) is positive; work done by the system (an example would be expansion of a gas) is negative.
The best way to remember the sign convention for work: if a gas is compressed (volume decreases), work is positive; if a gas expands (volume increases), work is negative. It is just like mechanics, if you (the environment) do work on the system, you would compress it. The work you do is considered to be positive.
A graph of pressure vs volume for a particular temperature for an ideal gas. Each curve, representing a specific constant temperature, is called an isotherm. The area under the isotherm represents the work done by the system during a volume change.
When a system undergoes a change of state from an initial state to a final state, the system passes through a series of intermediate states. This series of states is called a path. Points 1 and 2 represent an initial state (1) with pressure P1 and volume V1 and a final state (2) with pressure P2 and volume V2. If the pressure is kept constant at P1, the system expands to volume V2 (point 3 on the diagram). The pressure is then reduced to P2 (probably by decreasing the temperature) and the volume is kept constant at V2 to reach point 2 on the diagram. The work done by the system during this process is the area under the line from state 1 to state 3. There is no work done during the constant volume process from state 3 to state 2. Or, the system might traverse the path state 1 to state 4 to state 2, in which case the work done is the area under the line from state 4 to state 2. Or, the system might traverse the path represented by the curved line from state 1 to state 2, in which case the work is represented by the area underneath the curve from state 1 to state 2. The work is different for each path.
The work done by the system depends not only upon the initial and final states, but also upon the path taken.
1. Isothermal Process temperature (T) is constant. If there is no temperature change, there is no internal energy change.
ΔU = 0
Q = -W
The curve shown represents an isotherm.
Since the temperature is constant, no change in internal energy occurs. Internal energy changes only occur when there are temperature changes. At constant temperature, the pressure and volume of the system decrease as along the path state 1 to state 2.
Example of an isothermal process: An ideal gas (the system) is contained in a cylinder with a moveable piston. Since the system is an ideal gas, the ideal gas law is valid. For constant temperature, PV=nRT becomes PV=constant. At point 1, the gas is at pressure P1, volume V1, and temperature T. A very slow expansion occurs, so that the gas stays at the same constant temperature. If heat Q is added, the gas must expand. As the gas expands, it pushes on the moveable piston, thus doing work on the environment (or negative work). At point 2, the gas now has volume V2 which is greater than V1, pressure P2 which is less than P1, and temperature T. The amount of work done by the system on the environment during its expansion has the same magnitude as the amount of heat added to the system. The amount of work done is equal to the area under the curve.
How to know if heat was added or removed in an isothermal process: if heat is added, the volume increases and the pressure decreases. Remember, pressure is determined by the number of collisions the gas molecules make with the walls of the container. If the volume increases at constant temperature, the gas molecules make fewer collisions with the walls of the container, and pressure decreases.
2. Isobaric Process pressure (P) is constant. If pressure is kept constant, the work done during the process is given by
W = - P ΔV
ΔU = Q + W
P is held constant, so the amount of work done is represented by the area underneath the path from 1 to 2. Typically, lab experiments are isobaric processes.
Example of isobaric process: An ideal gas is contained in a cylinder with a moveable piston. The pressure experienced by the gas is always the same, and is equal to the external atmospheric pressure plus the weight of the piston. The cylinder is heated, allowing the gas to expand. Heat was added to the system at constant pressure, thus increasing the volume. The change in internal energy U is equal to the sum of the work done by the system on the environment during the volume expansion (negative work) and the amount of heat added to the system. The amount of work done is equal to the area under the curve.
How to determine if heat was added or removed: in an isobaric process, heat is added if the gas expands and removed if the gas is compressed.
How to tell if the temperature is increasing or decreasing: in an isobaric process, adding heat results in an increase in internal energy. If the internal energy increases, the temperature increases. Typically, volume expansions are small and all the heat added serves to increase the internal energy. In our graph, point 2 was at a higher temperature than point 1.
3. Isochoric Process Volume (V) is constant. Since there is no change in volume, no work is done.
W = 0
ΔU = Q
Since V is constant, no work is done. If heat is added to the system, the internal energy U increases; if heat is removed from the system, the internal energy U decreases. In the pV diagram shown, heat is removed along the path 1 to 2, thus decreasing the pressure at constant volume.
Example of an isochoric process: An ideal gas is contained in a rigid cylinder (one whose volume cannot change). If the cylinder is heated, no work can be done even though enormous forces are generated within the cylinder. No work is done because there is no displacement (the system does not move). The heat added only increases the internal energy of the system.
How to tell if heat is added or removed: in an isochoric process, heat is added when the pressure increases.
How to tell if the temperature increases or decreases: since U=3/2 nRT, if the internal energy is increasing, then the temperature is increasing. In our diagram, point 1 is at a higher temperature than point 2.
4. Adiabatic Process No heat (Q) is allowed to flow into or out of the system. This can occur if the system is well-insulated or the process happens quickly. (in other words, Q=0)
ΔU = W
The internal energy and the temperature decrease if the gas expands.
In this well-insulated process, heat cannot transfer to the environment. The amount of work done is represented by the area under the path from state 1 to state 2. In this example, the volume increases along the path from state 1 to state 2, so work is done on the environment by the system (negative work). There is a decrease in internal energy, and therefore in temperature.
Example of an adiabatic process: An ideal gas is contained in a cylinder with a moveable piston. Insulating material surrounds the cylinder, preventing heat flow. The ideal gas is compressed adiabatically by pushing against the moveable piston. Work is done on the gas (positive work). Remember, Q=0. The amount of work done in the adiabatic compression results in an increase in the internal energy of the system.
How to tell if the temperature increases or decreases: since U=3/2 nRt, if the internal energy increases, the temperature increases. In our example, the final temperature would be greater. than the initial temperature. In our pV diagram, the temperature at point 1 is greater than the temperature at point 2.
2nd law of thermodynamics This law is a statement about which processes can occur in nature and which cannot.
The second law of thermodynamics explains things that don't happen:
It is not possible to reach absolute zero (0 K). Since heat can only flow from a hot to a cold substance, in order to decrease the temperature of a substance, heat must be removed and transferred to a "heat sink" (something that is colder). Since there is no temperature less than absolute zero, there is no heat sink to use to remove heat to reach that temperature.
ΔS = Q / T
where T is the Kelvin temperature
Determining how entropy changes: When dealing with entropy, it is the change in entropy which is important.
· In a reversible process (one in which there is no friction), if heat is added to a system, the entropy of the system increases, and vice versa. If entropy increases for the system, it must decrease for the environment by the same amount, and vice versa. For reversible processes, the total entropy (the entropy of the system plus the environment) is constant.
· In an irreversible process (those in the real world), the total entropy either is unchanged or increases.
1. automobile engines-thermal energy from a high heat source is converted into mechanical energy (work) and exhaust is expelled
2. refrigerator-thermal energy is removed from a cold body (work is required) and transferred to a hot body (the room). Another example is a heat pump.
Drawing of a real engine showing transfer of heat from a high to a low temperature reservoir, performing work. The figure below shows the overall operation of a heat engine. During every cycle, heat QH is extracted from a reservoir at temperature TH; useful work is done and the rest is discharged as heat QL to a reservoir at a cooler temperature TL. Since an engine is a cycle, there is no change in internal energy and the net work done per cycle equals the net heat transferred per cycle.
The purpose of an engine is to transform as much QH into work as possible. So...coffee can't spontaneously start swirling around because heat would be withdrawn from the coffee and totally transformed into work. A heat engine converts thermal energy into mechanical energy.
Drawing of a refrigerator showing transfer of heat from a low to a high temperature reservoir, requiring work. The purpose of a refrigerator is to transfer heat from the low-temperature to the high-temperature reservoir, doing as little work on the system as possible.
There is no perfect refrigerator because it is not possible for heat to flow from one body to another body at a higher temperature with no other change taking place. The purpose of a heat pump or a refrigerator is to convert mechanical energy into thermal energy.
Efficiency of a heat engine The efficiency e of any heat engine is defined as the ratio of the work the engine does (W) to the heat input at the high temperature (QH).
e = W / QH
or, e = (QH - QL) / QH
Carnot (ideal) efficiency This is the theoretical limit to efficiency. It is defined in terms of the operating temperatures.
eideal = (TH - TL) / TH
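A small sketch of both efficiency relations; the heat flows and temperatures in the example are illustrative.

```python
# Sketch of heat-engine efficiency: actual efficiency from Q_H and Q_L, and the
# Carnot (ideal) limit from the reservoir temperatures (in kelvin).

def efficiency(q_hot: float, q_cold: float) -> float:
    return (q_hot - q_cold) / q_hot

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    return (t_hot_k - t_cold_k) / t_hot_k

print(efficiency(1000.0, 700.0))          # 0.30
print(carnot_efficiency(500.0, 300.0))    # 0.40, the upper bound for these temperatures
```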
While research continues to shed light on the environmental effects of shale gas development, much more remains unknown about the risks that the process known as “fracking” could pose for the Chesapeake Bay watershed.
According to a report released this week by a panel of scientific experts, additional research and monitoring—on sediment loads, on forest cover, on the best management practices that might lessen fracking’s environmental impact and more—must be done to determine how hydraulic fracturing might affect land and water resources in the region.
Hydraulic fracturing is a process that works to extract natural gas and oil from beneath the earth’s surface. During the process, a mixture of water, sand and additives is pumped at high pressure into underground rock formations—in the watershed, this formation is known as the Marcellus Shale—breaking them apart to allow the gas and oil to flow into wells for collection.
The process can impact the environment in a number of ways. According to the report, installing shale gas wells requires clearing forests and building roads, which can impact bird and fish habitat and increase the erosion of sediment into local rivers and streams. Withdrawing water from area sources—an essential part of gas extraction, unless water is brought in from off-site—can alter aquatic habitat and river flow. And the drilling process may result in the accumulation of trace metals in stream sediment.
Read more about the environmental effects of shale gas development in the watershed.
Clean air, clean water and healthy communities: the benefits of forests are vast. But as populations rise and development pressure expands, forests across the Chesapeake Bay watershed are fragmented and cut down.
In an effort to slow the loss of Chesapeake forests, the U.S. Forest Service has released a restoration strategy that outlines how officials and individuals alike can improve the environment and their communities by planting and caring for native trees.
According to the strategy, which has been endorsed by each of the watershed's seven State Foresters, expanding forest cover is critical to improving our air and water, restoring wildlife habitat, sequestering carbon and curbing home energy use.
To ensure we get the most “bang” for our tree-planting buck, the strategy targets restoration efforts toward those places in which forests would provide the greatest benefits, from wildlife corridors along streams and rivers to towns, cities and farms.
Trees along the edges of streams and rivers—called a riparian forest buffer—can keep nutrients and sediment out of our waters and nurture critters with vital habitat and food to eat. Trees in towns and cities—called an urban tree canopy—can clean and cool the air, protect drinking water and boost property values, improving the well-being of an entire neighborhood at a low cost. And trees on farms—in the form of wind breaks, forest buffers or large stands of trees—can protect crops, livestock and local wildlife while providing a farmer with a new form of sustainable income.
Other areas targeted for forest restoration include abandoned mine lands in headwater states and contaminated sites where certain tree species could remove toxic metals from the soil.
Learn more about the Chesapeake Forest Restoration Strategy.
From shopping bags and gift wrap to the train, plane and car trips that we take to visit family and friends, our carbon footprints get a little larger during the holidays. So when it comes to choosing a Christmas tree, why not do so with the environment in mind? While the "real" versus "fake" debate rages on, we have sifted through the arguments to find four tips that will make your Christmas tree "green."
1. Avoid artificial. As deforestation becomes a global concern, an artificial tree might seem like a green choice. But some researchers disagree. Most of the artificial Christmas trees sold in the United States are made in China using polyvinyl chloride or PVC, a kind of plastic whose petroleum-dependent manufacturing, processing and shipping are serious emitters of greenhouse gases. And while one study did find that reusing an artificial tree can be greener than purchasing a fresh-cut fir each December, that artificial tree would have to be used for more than two decades—and most end up in a landfill after just six to nine years.
Image courtesy Dave Mathis/Flickr
2. Don’t be a lumberjack. While going artificial might not be the greenest choice, neither is hiking up a local mountain with an axe in hand. When a tree is removed and not replaced, its ecosystem is robbed of the multiple benefits that even a single tree can provide. Trees clean our water and air, provide habitat for wildlife and prevent soil erosion. Instead of chopping down your own Christmas tree, visit a farm where trees are grown, cut and replanted just like any other crop.
Image courtesy macattck/Flickr
3. Choose a tree farm wisely. Millions of Christmas trees are grown on farms across the United States, releasing oxygen, absorbing carbon dioxide and providing some of the same benefits as a natural forest. And some of these tree farms are sustainable, offering locally-grown, pesticide-free trees and wreaths. Find a tree farm near you.
Image courtesy Klara Kim/Flickr
4. Go “balled and burlapped.” Real Christmas trees are often turned into mulch once the season is over. But some farmers are making Christmas trees even more sustainable! Instead of cutting down a tree at its trunk, a tree’s roots are grown into a ball and wrapped in a burlap sack. Once the tree is used, it can be replanted! If your yard doesn’t have room for another evergreen, look for a company that will return for its tree after the holidays.
Sometimes, even a single tree can make a difference. And it helps when that tree is a big one.
For six seasons, Baltimore County has held a Big Trees sale in an effort to put big, native trees in Maryland backyards. Since its inception in 2009, the program has sold more than 750 trees to Maryland residents, augmenting the state’s existing forests and moving Baltimore County closer to its pollution reduction goals.
Big trees are integral to the health of the Chesapeake Bay. Forests clean polluted air and water and offer food, shelter and rest stops to a range of wildlife.
But big trees can be hard to find. To provide homeowners with the native trees that have high habitat value and the heft that is needed to trap polluted runoff, species like pin oak, sugar maple and pitch pine are grown in a Middle River, Md., reforestation nursery. The one-acre nursery, managed by Baltimore County’s Department of Environmental Protection and Sustainability (EPS), began as a staging ground for large-scale plantings but soon expanded to meet a noticeable residential need.
“We used to give incentives to homeowners to buy large trees at retail nurseries,” said Katie Beechem, Environmental Projects Worker with the EPS Forest Sustainability Program. “But we found that homeowners were buying smaller species—flowering dogwood, crape myrtle—that didn’t achieve the same benefits…that large native trees like oaks and maples and river birch can provide. We were able to fill this big tree niche.”
Emails, signs and word-of-mouth spread news of the sale to homeowners. Some travel from the next town over, while others come from as far as Gettysburg, Pa., to walk among rows of seedlings in black plastic pots.
Staff like Jon-Michael Moore, who supervises the Baltimore County Community Reforestation Program, help residents choose a tree based on growth rate and root pattern, soil drainage and sunlight, and even “urban tolerance”—a tree’s resistance to air pollution, drought, heat, soil compaction and road salt.
One Maryland resident picked up 15 trees to line a fence and replace a few that had fallen. Another purchased two trees to soak up stormwater in his one-acre space. And another chose a chestnut oak simply because she had one when she was a kid.
Out of the 12 tree species that are up for sale, oaks remain the favorite.
Whether red, black, white or pin, oaks are often celebrated as the best big tree. Oaks thrive in a range of soils, drop acorns that feed squirrels, woodpeckers and raccoons and create a home for thousands of insects.
Discussing the oak, Moore mentions University of Delaware professor Doug Tallamy. The entomologist once wrote that a single oak tree can support more than 500 species of caterpillars, which will in turn feed countless insect-loving animals.
But can one big tree make a difference for the Bay? Moore nodded: “Every little bit helps.”
The restoration of forested areas along creeks and streams in the Chesapeake Bay watershed continues to decline.
Called riparian forest buffers, these streamside shrubs and trees are critical to environmental restoration. Forest buffers stabilize shorelines, remove pollutants from contaminated runoff and shade streams for the brook trout and other fish species that thrive in cooler temperatures and the cleanest waters.
While more than 7,000 miles of forest buffers have been planted across the watershed since 1996, this planting rate has experienced a sharp decline. Between 2003 and 2006, Maryland, Virginia and Pennsylvania planted an average of 756 miles of forest buffer each year. But in 2011, the entire watershed planted just 240 miles—less than half its former average.
Farmers and agricultural landowners have been the watershed’s driving force behind forest buffer plantings, using the conservation practice to catch and filter nutrients and sediment washing off their land. But a rise in commodity prices has made it more profitable for some farmers to keep their stream buffers planted not with trees, but with crops. This, combined with an increase in funding available for other conservation practices, has meant fewer forest buffers planted each year.
But financial incentives and farmer outreach can keep agricultural landowners planting.
The Chesapeake Bay Foundation (CBF), for instance, has partnered with the U.S. Department of Agriculture and others to implement conservation practices on Pennsylvania farms. Working to put the state’s Conservation Reserve Enhancement Program (CREP) funds to use, CBF provides farmers across the Commonwealth with technical assistance and financial incentives to plant forest buffers, often on the marginal pastureland that is no longer grazed or the less-than-ideal hayland that is rarely cut for hay.
The CBF Buffer-Bonus Program has encouraged Amish and Mennonite farmers to couple CREP-funded forest buffers with other conservation practices, said Dave Wise, Pennsylvania Watershed Restoration Manager with CBF. The reason, according to Wise? “Financial incentives … make it attractive for farmers to enroll.”
Image courtesy Chesapeake Bay Foundation
For each acre of forest buffer planted, CBF will provide Buffer-Bonus Program participants with up to $4,000 in the form of a “best management practice voucher” to fund conservation work. This comes in addition to CREP cost-share incentives, which fund forest buffer planting, post-planting care and annual rental fees that run from $40 to $350 per acre.
While Wise has witnessed what he called a “natural decline” in a program that has been available for more than a decade, he believes cost-share incentives can keep planting rates up, acting as “the spoonful of sugar" that encourages farmers to conserve in a state with the highest forest buffer planting rates in the watershed.
“There are few counties [in the Commonwealth] where buffer enrollments continue to be strong, and almost without exception, those are counties that have the Buffer-Bonus Program,” Wise said.
In 2007, the six watershed states committed to restoring forest buffers at a rate of 900 miles per year. This rate was incorporated into the Chesapeake Bay Executive Order, which calls for 14,400 miles of forest buffer to be restored by 2025. The Chesapeake Forest Restoration Strategy, now out in draft form, outlines the importance of forests and forest buffers and the actions needed to restore them.
Farmers, foresters and an active coalition of landowners and citizens have been honored for their efforts to conserve, restore and celebrate Chesapeake forests.
From planting native trees and shrubs to engaging students in forest conservation, the actions of the winners from across the watershed crowned them Chesapeake Forest Champions in an annual contest sponsored by the U.S. Forest Service and the Alliance for the Chesapeake Bay.
Image courtesy Piestrack Forestlands LLC
Three farmers were named Exemplary Forest Stewards: Ed Piestrack of Nanticoke, Pa., and Nelson Hoy and Elizabeth Biggs of Williamsville, Va. Ed Piestrack and his wife, Wanda, manage 885 acres of forestland and certified Tree Farm in Steuben County, N.Y. The Piestracks have controlled invasive plants and rebuilt vital habitat on their property, installing nest boxes, restoring vernal pools and planting hundreds of trees on land that will remain intact and managed when it is transferred to their children.
Image courtesy Berriedale Farms
Close to 400 miles south in the Cowpasture River Valley sits Berriedale Farms, where Nelson Hoy and Elizabeth Biggs manage land that forms a critical corridor between a wildlife refuge and a national forest. Hoy and Biggs have integrated their 50-acre Appalachian hardwood forest into their farm operation, protecting the landscape while finding a sustainable source of income in their low-impact horse-powered forest products business.
Image courtesy Zack Roeder
Forest Resource Planner Zack Roeder was named Most Effective at Engaging the Public for his work as a forester in Pennsylvania’s largely agricultural Franklin and Cumberland counties. There, Roeder helped farmers manage and implement conservation practices on their land and helped watershed groups plant streamside forest buffers. Roeder also guided a high school in starting a “grow out” tree nursery and coordinated Growing Native events in local communities, using volunteers to collect native hardwood and shrub seeds for propagation.
Image courtesy Savage River Watershed Association
The Savage River Watershed Association in Frostburg, Md., was commended for the Greatest On-the-Ground Impact. In a watershed whose streamside trees have shaded waterways and provided critical habitat to Maryland’s rare reproducing brook trout fisheries, the organization has worked to conserve area forests, removing invasive plants and putting more than 4,000 red spruce seedlings into the ground.
It’s easy to see why the Iroquois once called Pine Creek Tiadaghton, or “the river of pines.” A mix of hardwoods, including the eastern white pine and the eastern hemlock, now line its banks more than a century after the region was clear cut by Pennsylvania’s once-booming lumber industry.
Image courtesy fishhawk/Flickr
At close to 90 miles long, Pine Creek is the longest tributary to the West Branch of the Susquehanna River. But Pine Creek once flowed in the opposite direction—until a surge of glacial meltwater reversed the creek to its current southerly flow, creating the driving force behind Pine Creek Gorge. Named by the National Park Service a National Natural Landmark in 1968, the gorge is better known as the Grand Canyon of Pennsylvania.
At its deepest point, Pine Creek Gorge is 1,450 feet deep and almost one mile wide. Visitors can view the gorge (along with dramatic rock outcrops and waterfalls) from the east rim of the canyon in Leonard Harrison State Park. On the west rim of the canyon is Colton Point State Park, which features five stone and timber pavilions built in the 1930s by the Civilian Conservation Corps. And in the Tioga State Forest, approximately 165,000 acres of trees, streams and awe-inspiring views await hikers, bikers, hunters and more. Pine Creek is paralleled by the 65-mile Pine Creek Rail Trail, which a 2001 article in USA Today named one of the top ten places in the world to take a bike tour.
Image courtesy Travis Prebble/Flickr
More from Pine Creek:
Fall brings with it cooler weather and a rainbow of red, orange and yellow foliage, making it the perfect time to get outside for a hike.
From the coastal marshes of the Chesapeake Bay to the rocky hills of the Appalachian Mountains, scenic vistas and mountaintops await.
Tip: To plan your outing, find out when "peak fall foliage" occurs in your region with this map from the Weather Channel.
Here are some of our favorite sites to take in the changing colors of fall:
1. Old Rag Mountain Hike, Shenandoah National Park, Va. (7 miles)
Image courtesy David Fulmer/Flickr
Be prepared for a challenging rock scramble and a crowd of tourists, but know that it will all be worth it in the end. Some consider this hike to have the best panoramic vistas in Northern Virginia, and it remains one of the most popular hikes in the mid-Atlantic.
2. Loudoun Heights Trails, Harpers Ferry National Historic Park, W.Va. (7.5 miles)
Harpers Ferry National Historic Park is located along the C&O Canal—a hot spot for those looking to find fall foliage. But if you're tired of the canal's flat views as it runs along the Potomac River, check out the trails in Loudoun Heights. It may be an uphill battle, but you'll find yourself overlooking the Shenandoah and Potomac rivers from what seems to be the highest point around. This is certainly a good hike for a cool fall day (this blogger took to the trails in the heat of summer and was drained!). Be sure to grab ice cream in town afterwards!
3. Flat Top Hike, Peaks of Otter Trails, Bedford, Va. (3.5 miles)
Image courtesy Jim Liestman/Flickr
The Peaks of Otter are three mountain peaks that overlook the foothills of Virginia's Blue Ridge Mountains. While a hike to Sharp Top is an intriguing one with stunning views, a hike to Flat Top promises to be less crowded. Keep in mind, there are many other trails and lakes near the Peaks of Otter worth exploring!
4. Wolf Rock and Chimney Rock Loop, Catoctin Mountain Park, Thurmont, Md. (5 miles)
Image courtesy TrailVoice/Flickr
Give yourself plenty of time to take in the unique rock formations and two outstanding viewpoints found along this hardwood forest trail. If you're not up for a long hike, visit the park's more accessible viewpoints and make a stop at the nearby Cunningham Falls State Park to see a scenic waterfall just below the mountains.
5. Chesapeake & Ohio Canal Trail, Washington, D.C., to Cumberland, Md. (184 miles)
Image courtesy sandcastlematt/Flickr
This trail follows the Potomac River from Washington, D.C., to Cumberland, Md. While bikers and hikers often tackle the entire trail, the canal path can also be enjoyed as a leisurely day hike.
From Great Falls to Harpers Ferry to Green Ridge State Forest—the second largest in Maryland—a walk along this rustic trail traces our nation's transportation history with sightings of brick tunnels, lock houses and the beautiful scenery that surrounds it all.
If you plan on making a multi-day journey, watch the color of the leaves change as you move north along with peak foliage.
6. Pocomoke River State Forest (Snow Hill, Md.) (1 mile)
Image courtesy D.C. Glovier/Flickr
Whether you explore the 15,500 acres of this forest from land or from water, you are sure to find breath-taking scenes of fall—in stands of loblolly pine, in bald-cypress forests and swamps and even in a five-acre remnant of old growth forest. Take a one-mile self guided trail or opt for an afternoon fall colors paddle in the nearby Pocomoke River State Park, sponsored by the Maryland Department of Natural Resources.
7. Waggoner's Gap Hawk Watch Hike, Cumberland County, Pa.
Image courtesy Audubon Pennsylvania
This rocky site is located along an autumn raptor migration flyway, making it popular among bird-watchers. During the fall, however, it is a must-visit for birders and non-birders alike. From the top of Kittatinny Ridge, also known as Blue Mountain, you can see South Mountain and Cumberland, Perry, York and Franklin counties. The land is cared for by Audubon Pennsylvania.
8. Pole Steeple Trail, Pine Grove Furnace State Park, Cumberland County, Pa. (.75 mile)
Image courtesy Shawnee17241/Flickr
This trail offers a great view for a short climb. While the trail is less than one mile long, it is steep! From the top, you can see Laurel Lake in Pine Grove Furnace State Park and all 2,000 feet of South Mountain. Plan this hike around sunset to see fall colors in a different light.
Do you know an individual or group that is working hard to help our forests stay healthy? Nominate them to be a Chesapeake Forest Champion!
The Forest Champion contest was launched by the Alliance for the Chesapeake Bay and the U.S. Forest Service in 2011. Now in its second year, the contest hopes to recognize additional exemplary forest stewards in the Chesapeake Bay watershed. With 100 acres of the region's forest lost to development each day, the need for local champions of trees and forests has never been greater!
The contest is open to schools and youth organizations, community groups and nonprofits, businesses and forestry professionals. If you know a professional or volunteer who is doing outstanding work for forests, you can nominate them, too!
Awards will be given for:
Nomination forms can be found at the Forestry for the Bay website and are due August 6, 2012.
Winners will be recognized at the 2012 Chesapeake Watershed Forum in Shepherdstown, West Virginia in late September.
For more information about Forest Champions:
When most people talk about forests, they mention hunting, or the timber market, or environmental conservation. But when Susan Benedict discusses her forest – a 200,000 acre property in Centre County, Pennsylvania – she talks about family.
“We all work together. This is a family operation,” she says as we drive to her property along a Pennsylvania State Game Lands road that winds through the Allegheny Mountains from Black Moshannon to Pennsylvania-504.
(Image courtesy Susan Benedict)
A desire to keep the mountaintop property in the hands of her children and grandchildren motivated Benedict to implement sustainable forestry practices, participate in Pennsylvania’s Forest Stewardship Program and certify the property under the American Tree Farm System. By managing her forest in an environmentally conscious way, Benedict ensures that stands of ash, red oak and beech will be around in a hundred years for her great-grandchildren to enjoy.
But Benedict’s involvement in forest conservation doesn’t mean that she’s rejecting the land’s economic and recreation potential. The property’s plethora of hardwoods allows the family to participate in the timber market. As a large and secluded mountaintop property, it has attracted wind farms seeking to turn wind into energy. Its location along the Marcellus Shale makes it a desirable location for natural gas developers. This multitude of interested parties, each with its own vision, can be overwhelming for any property owner.
Since different stakeholders preach different benefits and drawbacks of extracting these natural resources, Benedict took charge and carefully investigated the issues herself, knowing her family’s land was at stake. Her decisions balance the property’s economic potential with her desire to keep her family forest as pristine as it was when she explored it as a child.
We talk so much about the environmental benefits of trees that it’s easy to forget that they’re also a business.
(Image courtesy Susan Benedict)
“My forester assures me that your woods are like your stock portfolio,” Benedict explains. “You don’t want to cut out more annual growth than what you’re generating, and in fact, you want to shoot for (cutting) less than what you’re generating. Right now, we are good; what we are taking out, we are generating.”
Before any logging is done, a county forester walks the property and designates which trees can be removed. Then it’s time to cut. Benedict has one logger, an ex-Vietnam veteran whose wife occasionally accompanies him. “He cuts whatever the mills are wanting,” says Benedict.
The challenge occurs when mills want something that shouldn’t be cut. “It’s a little more problematic because we have to market what we want to get rid of, instead of the lumber mills telling us what they want,” Benedict explains.
But Benedict won’t let natural resource markets sway her forest management decisions. She’s taking charge by telling lumber mills that she’ll give them what she wants to give them – no more, no less. Of course, the economic incentives of sustainable forest management make saying “no” easier.
One of these economic rewards is the Department of Agriculture’s Environmental Quality Incentives Program (EQIP), which provides financial and technical assistance to landowners seeking to “promote agricultural production and environmental quality as compatible national goals.”
Benedict’s EQIP project will enhance growth on mast-producing trees such as hickory, oak, cherry, hazelnut, beech nut and others that produce animal feed. “Basically, we want to get the trees to grow quicker, and re-generate better.”
Family health problems put Benedict’s EQIP project on hold. Since it needed to be completed by the end of summer, Benedict’s brothers and her three sons (ages 15, 24 and 27) held mandatory family work days each weekend from the Fourth of July to the end of September.
“It’s a 200,000-acre property, which translates to a lot of work. But I think that’s good,” Benedict assures me, even though she also sweated through the work during the height of summer’s humidity. “When you have concentrated time like that, you actually talk to each other. If you meet for an hour meeting, no one ever gets around to saying what they want. You get down to what’s real.”
Using the forest as a mechanism to unite her family has been Benedict’s goal since she and her brothers inherited the property after her father’s death.
Benedict tells me that her three boys “have to help out, whether they want to or not.” Their involvement – even if it is forced sometimes – allows the family to connect to the property. Benedict hopes the hard work will inspire them to adopt sustainable forestry management practices when they inherit the land.
We’ve all experienced times when nature takes over and there’s nothing we can do about it – whether we’re a farmer that’s experienced a devastating drought or a commuter who’s had to pull over in a heavy rainstorm because we couldn’t see the road in front of us.
This happened to Benedict and her team six years ago, when a three-year gypsy moth infestation destroyed 80 percent of a red oak stand. The damage cost her more than one million dollars in timber profits on a 2,000-acre lot.
“Al (Benedict's logger) had worked so hard on the stand. And it’s not a fun place to work – rocky and snake-infested. We were all so proud of how it came out. And then three years worth of caterpillars, and it was destroyed.”
Biological sprays of fungi can sometimes prevent gypsy moth infestations. The caterpillars die after ingesting the fungi for a few days.
Benedict could have sprayed the fungi, but it may not have worked. It’s a big risk to take when you’re paying $25 per acre (that’s $50,000 in total). Not only do you need the money, but you must have three consecutive rain-free days in May, the only time of year you can spray.
So when the emerald ash borer – the invasive green insect that has destroyed between 50 and 100 million ash trees in the United States – made its first appearance in Pennsylvania, Benedict began cutting down her ash trees. “We got them to market before they got killed.”
By paying attention to both environmental and market pressures, Benedict’s forest is both sustainable and profitable.
Benedict’s property is isolated. For wind-power developers, that means fewer people will complain about the loud noise and shadows that make living near wind turbines burdensome. The land is also atop a mountain, which, of course, means it experiences high winds.
“It’s very hard to decide to have that much development on your property, but honestly, it will provide a nice retirement for my brothers and me,” Benedict says. “Everyone I talk to assures me that once the construction phase is over, it doesn’t hurt the trees, it doesn’t hurt the wildlife. The wildlife could care less, which has been my observation on most things that we do. After it gets back to normal, they don’t care and they adjust.”
Environmental surveys, which are required by law before construction, affirm Benedict’s insights. A group hired to do a migratory bird study constructed a high tower atop the mountain. “They stayed up there every evening and morning in March,” Benedict says with a shiver.
Another contractor is delineating wetlands on the property: identifying and marking wetland habitat and making sure construction does not affect these areas.
Benedict and her family even had the opportunity to learn what kinds of endangered and threatened animals live on their property. “They found seven timber rattlesnake dens, and had to relocate one of the turbines because it was too close to the den,” Benedict explains. The teams also surveyed Allegheny woodrats and northern bulrushes, a critical upland wetland plant.
“I decided to [lease property to the wind farm] because the only way we are ever going to know if wind is a viable technology is if we get some turbines up, see what works, see what doesn’t work, and allow that process of invention to move. And we have to have someone to host it.”
And according to the surveys, Benedict’s property is the perfect host.
As Benedict drives her pickup around the property, she points out the site of her father's former saw mill, where she once worked, and shows me to the cabin that the family built after her grandfather died in 1976. Nearby, there's a section of forest that the family is converting to grouse habitat, which will support her brother's love of grouse hunting.
(Image courtesy Susan Benedict)
The uses of the property fluctuate as family members' interests change. Benedict affirms that managing the property sustainably will give her grandchildren the freedom to pursue their interests in the years to come.
"A lot of people go the route of having a conservation easement, but who knows what the best use of that property is going to be in 100 years. If my dad did that, we would have very little use of the property now, and certainly very little flexibility with these things, especially the wind and natural gas."
Benedict is a member of the Centre County Natural Gas Task Force. "You hear all sorts of things about natural gas development and water resources, and in order to make sure it wasn’t going to be horrible, I joined the task force," she explains.
Benedict also allows 15 or so individuals to hunt and fish on her property for a small annual fee. Control of the deer population in particular is essential for her timber operations.
But no matter what happens, Benedict insists, the forest will stay in the family.
"We made a pact that everyone will have to sell all of their belongings before we sold this," she says. "There's some things, you know, you got to make work out."
Benedict’s forest management practices and involvement in the sustainable forestry community has earned her recognition as a 2011 Forest Steward Champion by the Alliance for the Chesapeake Bay.
Four projects and individuals in Maryland, Pennsylvania and Virginia have been recognized as Chesapeake Forest Champions for their contribution to Chesapeake Bay restoration through the promotion of trees and forests.
The inaugural Chesapeake Forest Champion contest honored recipients in four categories: most innovative, most effective at engaging the public, greatest on-the-ground impact and exceptional forest steward/land owner.
The "most innovative" award went to Adam Downing and Michael LaChance of Virginia Cooperative Extension and Michael Santucci of the Virginia Department of Forestry for their Virginia Family Forestland Short Course program. The team tackled a critical land conservation challenge: intergenerational transfers of family farms and forests, and the need to educate land owners on how to protect their land. Through the land transfer plans developed in this program, more than 21,000 acres of Virginia forests are expected to remain intact, family-owned and sustainably managed.
The "most effective at engaging the public" champion was ecologist Carole Bergmann from Montgomery County, Maryland. Bergmann created the Weed Warrior program in response to a significant invasive plant problem in the county's forests. To date, approximately 600 Weed Warriors have logged more than 25,000 hours of work removing and monitoring invasive weeds.
The "greatest on-the-ground impact" award went to David Wise of the Chesapeake Bay Foundation for his leadership in restoring riparian forest buffers through the Pennsylvania Conservation Reserve Enhancement Program (CREP) partnership. Since 2000, Pennsylvania CREP has restored more than 22,000 acres of forest buffers -- more than all the other Chesapeake Bay states combined.
The "exceptional forest steward/land owner" champion was Susan Benedict of Centre County, Pennsylvania, for her work running a sustainable tree farm. Benedict has implemented many conservation projects on her family's land, such as planting habitat to encourage pollination in a forested ecosystem.
The Chesapeake Forest Champion contest was sponsored by the U.S. Forest Service and the Alliance for the Chesapeake Bay as part of the International Year of Forests. The four Chesapeake Forest Champions were honored earlier this month at the 2011 Chesapeake Watershed Forum in Shepherdstown, W.Va.
Visit the Alliance for the Chesapeake Bay's website to learn more about the Chesapeake Forest Champions.
Image: (from left to right) Sally Claggett, U.S. Forest Service; David Wise, Chesapeake Bay Foundation; Michael LaChance, Virginia Cooperative Extension; Susan Benedict, land owner, Centre County, Pa.; Carole Bergmann, Montgomery County, Md.; and Al Todd, Alliance for the Chesapeake Bay. Image courtesy Alliance for the Chesapeake Bay.
The Potomac Conservancy is looking for individuals, educators and community groups to help collect native tree seeds during the annual Growing Native season, which begins Sept. 17.
Volunteers participate in Growing Native by collecting native tree seeds across the Potomac River region. The seeds are donated to state nurseries in Maryland, Pennsylvania, Virginia and West Virginia, where they are planted and used to restore streamside forests throughout the 15,000-square-mile Potomac River watershed.
Since Growing Native’s inception in 2001, nearly 56,000 volunteers have collected more than 164,000 pounds of acorns, walnuts and other hardwood tree and shrub seeds. In addition to providing native tree stock, Growing Native builds public awareness of the important connection between healthy, forested lands and clean waters, and what individuals can do to protect them.
Visit growingnative.org to learn more about how you can get involved with Growing Native.
Image courtesy Jennifer Bradford/Flickr.
Do you know an exemplary person or group who is a champion for forests in the Chesapeake Bay region? Nominate them to be a Chesapeake Forest Champion!
To help celebrate International Year of Forests, the U.S. Forest Service and its partners are launching a new annual contest to recognize forest champions throughout the Chesapeake Bay watershed. With around 100 acres of the region's forests lost to development each day, the need for local forest champions has never been greater!
The Chesapeake Forest Champion awards recognize the outstanding efforts of groups and individuals to conserve, restore and celebrate Chesapeake forests in 2011. The contest is open to schools and youth organizations, community groups and nonprofits, businesses and forestry professionals. If you know a professional or volunteer who is doing outstanding work for forests, you can nominate them too!
The award has three categories:
Nominations are due by Friday, September 2. Winners will be recognized at the Chesapeake Watershed Forum in Shepherdstown, W.Va., in September.
Visit the Forestry for the Bay website to learn more about the awards and submit a nomination.
There are several different kinds of habitats found in the Bay’s watershed. Each one is important to the survival of the watershed’s diverse wildlife. Habitats also play important roles in Bay restoration.
Chesapeake Bay habitats include:
Forests covered approximately 95 percent of the Bay’s 64,000-square-mile watershed when Europeans arrived in the 17th century. Now, forests only cover about 58 percent of the watershed.
Forests are important because they provide vital habitat for wildlife. Forests also filter pollution, keeping nearby waterways cleaner. Forests act as huge natural sponges that absorb and slowly release excess stormwater runoff, which often contains harmful pollutants. Forests also absorb airborne nitrogen that might otherwise pollute our land and water.
Wetlands are transitional areas between land and water. There are two general categories of wetlands in the Chesapeake Bay watershed: tidal and non-tidal. Tidal wetlands, found along the Bay's shores, are filled with salt or brackish water when the tide rises. Non-tidal wetlands contain fresh water.
Just like forests, wetlands act as important buffers, absorbing and slowing the flow of polluted runoff to the Bay and its tributaries.
Streams and rivers not only provide the Chesapeake Bay with its fresh water, they also provide many aquatic species with critical habitat. Fish, invertebrates, amphibians and other wildlife species all depend on the Bay’s tributaries for survival.
When the Bay’s streams and rivers are in poor health, so is the Bay, and the great array of wildlife it harbors is put in danger.
Shallow waters are the areas of water from the shoreline to about 10 feet deep. Shallow waters are constantly changing with the tides and weather throughout the year. The shallows support plant life, fish, birds and shellfish.
Tidal marshes in the Bay's shallows connect shorelines to forests and wetlands. Marshes provide food and shelter for the wildlife that lives in the Bay's shallow waters. Freshwater marshes are found in the upper Bay, brackish marshes in the middle Bay and salt marshes in the lower Bay.
Aquatic reefs are solid three-dimensional habitats made up of densely packed oysters. The reefs form when oyster larvae attach to larger oysters at the bottom of the Bay.
Reefs provide habitat and communities for many aquatic species in the Bay, including fish and crabs. The high concentration of oysters in aquatic reefs improves water quality by filtering algae and pollutants from the water.
Open waters are beyond the shoreline and the shallows. Aquatic reefs replace underwater bay grasses, which cannot grow in deep waters that sunlight cannot penetrate. Open water provides vital habitat for pelagic fish, birds and invertebrates.
Each of these habitats is vital to the survival of the Chesapeake Bay’s many different species of wildlife. It's important to protect and restore habitats to help promote the overall health of the Bay. So do your part to save the Bay by protecting habitats near you – find out how.
Do you have a question about the Chesapeake Bay? Ask us and we might choose your question for the next Question of the Week! You can also ask us a question via Twitter by sending a reply to @chesbayprogram! Be sure to follow us there for all the latest in Bay news and events.
The rain was falling heavy all through Tuesday night and things had not changed much when the alarm went off the next morning, signaling the new day. The Chesapeake Bay Forestry Workgroup had a meeting scheduled at Banshee Reeks Nature Preserve in Loudoun County, Virginia.
Hearing and seeing the rain and knowing the schedule of the day brought back memories from my past life. For years, the month of April had a pretty profound impact on my life. One of the duties as an employee working for the Virginia Department of Forestry was to plant tree seedlings with volunteer groups. The best planting months are March, April, November and December, but April was extremely busy with plantings because of Earth Day and Arbor Day. You can plant trees during other months, but for “bare root” seedlings with no soil on their roots, months with high precipitation and cooler temperatures are the best.
The Banshee Reeks Manor House sits on the top of a hill and Goose Creek winds through the rolling farmland and forest. The “Banshee” was with us that Wednesday because of the pouring rain; the misty spirit hung over the reeks (rolling hills and valley). But hardy as the Forestry Workgroup members are, they hopped on a wagon and rode down the hills -- in the pouring rain -- to Goose Creek to see the task before them.
The heavily grassed floodplain had bare areas that were prepared for a riparian buffer planting. Our hosts from the Virginia Department of Forestry had planting bars, tree seedlings, gloves, tree shelters and all of the equipment needed to get the trees in the ground; the Workgroup members were the muscle. The group planted approximately 125 sycamore, black walnut, river birch, hackberry and dogwood shrub seedlings -- again, in the pouring rain -- in a little over an hour.
As we rode the wagon back up the hill -- still in the pouring rain -- and looked back at the newly planted floodplain, the enthusiasm was hard to contain. There was a special warm feeling that drifted over me, reminiscent of my days of planting with volunteers: the feeling of knowing you just did something special that will last far into the future. For the Forestry Workgroup members who promote riparian forest buffer plantings in the Bay watershed, this was a “lead by example” exercise.
As everyone got into their cars to return to their home states of Maryland, Pennsylvania, West Virginia and other parts of Virginia, yes, they were cold, they were wet, but they were proud of their work.
In early October the search was on for a site in the Bay watershed for the November 18 Bay Program Forestry Workgroup meeting. Educational workgroup meetings are good because members can get out of their offices and visit the fields and forests of the Chesapeake Bay watershed. After a few calls, the Virginia Tech Mare Equine Center in Middleburg, Virginia, separated itself from other choices. It was a perfect location for the forestry workgroup meeting because it has a 23-acre riparian forest buffer, and forest buffers would be the focus of the meeting.
Riparian forest buffers are a topic near and dear to my everyday life. People often tell me I live in “buffer land” because my job is very specific to that area of forestry. I really am very interested in watersheds as holistic ecosystems and think of forest buffers as the integral link between what happens on the land and how those actions are reflected in the water quality of streams and rivers.
Along with other Bay goals, the riparian forest buffer goal will fall short of the 10,000-mile commitment made for the 2010 deadline. The number of riparian buffer miles achieved annually has dropped off from 1,122 miles in 2002 to 385 miles in 2007. Since Forestry Workgroup members represent state forestry agencies, NGOs, and other groups interested in Bay forests, they are the logical group to come up with ways to address barriers that stand in the way of achieving state riparian forest buffer commitments. We spent the afternoon of the Forestry Workgroup meeting discussing the barriers to riparian forest buffer plantings and ways to eliminate those barriers.
The Forestry Workgroup meeting also featured two presentations on new riparian forest buffer tools intended for use by local governments, watershed groups, and local foresters. The first presentation, given by Fred Irani from the U.S. Geological Survey team at the Bay Program office, was about the RB Mapper, a new tool developed for assessing riparian forest buffers along shorelines and streambanks. The other presentation, given by Rob Feldt from Maryland DNR, was about a tool for targeting the placement of riparian forest buffers for more effective nutrient removal. (You can read all of the briefing papers and materials from the Forestry Workgroup meeting at the Bay Program’s website.)
After all the business, it was time to experience the Mare Center, their streamside forest buffer and the rolling hills of Virginia. A tractor and wagon provided transportation to the pasture to see the buffer, which was planted in 2000 with 2,500 tree seedlings. It was a cold and windy day, and there were actually snowflakes in the air. We had planned to ride the wagon out and walk back, however, with a little bit of a bribe, the wagon driver waited while we checked out the forest buffer for survival, growth, and general effectiveness for stream protection.
The Forestry Workgroup meeting was productive, educational, and enjoyable. How often can we say that about group meetings? Sometimes it is worth the extra effort to provide a meeting place with an outdoor component that conveys the endeavors that the Bay Program workgroups are all about.
I get a thrill whenever I see forests on equal billing with farm lands in the Chesapeake region. Especially when it comes to something BIG like carbon sequestration. Of course, one acre of forest land can sequester much more carbon than one acre of agricultural land -- 1-2 tons of carbon per acre per year for forest, compared to roughly 0.3-0.5 ton per acre per year for farmland. But when it comes to best management practices for water quality, and well, eating, agriculture is king.
Kudos to Delaware, which is now only 30% forested (the smallest percentage of forest for any of the six Bay states), for taking on carbon as its champion role in the Chesapeake clean-up. When it comes to carbon, it’s all about taking advantage of existing voluntary markets, such as the Regional Greenhouse Gas Initiative (RGGI) and the Chicago Climate Exchange, and potential regulatory markets in the United States’ future.
From a global perspective, the U.S. is playing catch-up with carbon. Our nation did not ratify Kyoto in 1997 when 84 other countries signed on. These countries are legally bound to reduce carbon emissions, with the average target being to reduce emissions by 5% below 1990 levels. Here in the U.S., the states have largely taken the leadership on reducing greenhouse gases, with some big regional programs such as RGGI, the Western Climate Initiative and the Midwestern Greenhouse Gas Reduction Accord taking off. Last year, Congress got serious with the Lieberman-Warner Climate Security Act, but it didn’t pass. Both of the prospective new administrations have promised to enact climate legislation. Most likely only after the economy settles down -- I mean up. It’s an exciting time for many who have talked for nearly two decades about the need.
Back to the symposium …
How will the markets actually reduce greenhouse gases? It’s not shuffling money around. It has to do with being cost-effective, promoting innovation and, indirectly, better land use decisions. Big questions abound, however: will it work? The top six issues are certainty, baseline, leakage, permanence, additionality and double counting.
Once some of the issues start being resolved, there’s great potential for forestry, since 80% of the forest land in this region is privately owned. The Bay Bank has moved from concept to design and will be up and running in fall 2009. The Bay Bank will facilitate both farm and forest landowner access to multiple ecosystem markets (not just carbon) and conservation programs through an easy-to-use online marketplace. Supporting aspects of the Bay Bank, such as the Spatial Lands Registry, will be up sooner. The Spatial Lands Registry is one of those tools that will help reduce issues such as certainty, baseline and permanence. When a tool does this, it also reduces the make-it or break-it transaction costs.
The all-important new regulations will determine the direction of these burgeoning markets. There need to be more drivers to direct more businesses and people to invest in carbon sequestering practices. The target reductions and rules need to be reasonable so a variety of private landowners can take part in the market and get a worthwhile return on their investment. The Delaware symposium is helping with the outreach and understanding that will be needed for any market to succeed.
What’s good for carbon is good for water quality. Fewer cars, more forests and farms, better-managed farms and forests, and hopefully, hopefully, a postponement of sea level rise. That would be very good for the Chesapeake. For that matter, good for the world.
Frederick, Maryland's urban tree canopy covers just 12 percent of the city, but an additional 72 percent could possibly be covered by trees in the future, according to a recent study by the Maryland Department of Natural Resources, the University of Vermont and the U.S. Forest Service.
Urban tree canopy—the layer of trees covering the ground when viewed from above—is a good indicator of the amount and quality of forests in cities, suburbs and towns. Healthy trees in these urban and suburban areas help improve water quality in local waterways—and eventually the Bay—by reducing polluted runoff. Urban forests also provide wildlife habitat, absorb carbon dioxide from the air and enhance quality of life for residents.
With 12 percent tree canopy, Frederick has less urban forest cover than several other cities in the region, including Annapolis (41 percent urban tree canopy), Washington, D.C. (35 percent) and Baltimore (20 percent). The report finds that 9,500 acres in Frederick, or 72 percent of the city's land area, could possibly support tree canopy because it is not covered by a road or structure (such as a building).
Thirty-eight urban and suburban Maryland communities, including Annapolis, Baltimore, Bowie, Cumberland, Greenbelt, Hyattsville, Rockville and 29 communities in Baltimore County, are involved in setting tree canopy cover goals. Washington, D.C., and communities in Virginia and Pennsylvania have also set urban tree canopy goals.
Under the 2007 Forest Conservation Initiative, the Bay Program committed to accelerating reforestation and conservation in urban and suburban areas by increasing the number of communities with tree canopy expansion goals to 120 by 2020.
At its annual meeting in early December, the Chesapeake Executive Council (EC) signed the Forestry Conservation Initiative, committing the Bay states to permanently conserve an additional 695,000 acres of forested land throughout the watershed by 2020.
Chesapeake forests are crucial to maintaining water quality in the Bay and its tributaries. They also safeguard wildlife habitat, contribute billions of dollars to the economy, protect public health, provide recreation opportunities and enhance quality of life for the watershed's 17 million residents.
Despite these benefits, forests in the Bay watershed are at risk. In the Bay region alone, some 750,000 acres - equivalent to 20 Washington, D.C.s - have been felled since the early 1980s, a rate of 100 acres per day. By 2030, 9.5 million more acres of forest will see increased development pressure.
There are four overarching goals to the Forestry Conservation Initiative:
By 2020, permanently protect an additional 695,000 acres of forest from conversion to other land uses such as development, targeting forests in areas of highest water quality value. As part of this goal, 266,400 acres of forest land under threat of conversion will be protected by 2012.
By 2020, accelerate reforestation and conservation in:
In addition, each state and the federal agencies will implement strategies and actions to:
Common Lisp/Basic topics/Functions
A function is a concept that is encountered in almost every programming language, but in Lisp functions are especially important. Historically, Lisp was inspired by lambda calculus, where every object is a function. On the other end of the spectrum there are many programming languages where functions are hardly objects at all. This is not the case with Lisp: functions here have the same privileges as the other objects, and we will discuss it in this chapter.
Functions are most often created using the defun (DEfine FUNction) macro. This macro takes a list of arguments and a sequence of Lisp forms, called a body of a function. A typical use of defun is like this:
(defun print-arguments-and-return-sum (x1 x2)
  (print x1)
  (print x2)
  (+ x1 x2))
Here, the list of arguments is (x1 x2) and the body is (print x1) (print x2) (+ x1 x2). When the function is called, each form in the body is evaluated sequentially (from first to last). What if something happened and we want to return from the function without reaching the last form? The return-from macro allows us to do that. For example:
(defun print-arguments-and-return-sum (x1 x2)
  (print x1)
  (print x2)
  (unless (and (numberp x1) (numberp x2))
    (return-from print-arguments-and-return-sum "Error!"))
  (+ x1 x2))
The second argument to return-from is optional, which means that it can be called with only one argument - in this case the function would return nil. But wait: how did they do this? Our function accepts exactly two arguments: nothing more, nothing less. The answer is that what we called "argument list" is not as simple as it seems. It is in fact called lambda list and it is not the only place where we will encounter it. Later, I will explain how to allow optional and keyword arguments in functions.
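Here is a small sketch of the optional return value (the function name safe-sum and the example calls below are made up for illustration; they are not part of the original text):

(defun safe-sum (x1 x2)
  (unless (and (numberp x1) (numberp x2))
    (return-from safe-sum))        ; no second argument, so SAFE-SUM returns NIL
  (+ x1 x2))

(safe-sum 1 2)       ; => 3
(safe-sum 1 "oops")  ; => NIL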
Functions as data
As was mentioned in the beginning of this chapter, Lisp functions can be used like any other object: they can be stored in variables, passed as parameters to other functions, and returned as values from functions. In the previous section we defined a function. The function is now stored in the function cell of the symbol print-arguments-and-return-sum - defun put it there. However, it is not bound to this location forever - we can extract it and put it in some other symbol, for example. To access the function cell of a symbol we can use the accessor symbol-function. Let's store our function in some other symbol:
(setf paars (symbol-function 'print-arguments-and-return-sum))
The first reaction would be to do something like that:
>(paars 1 1)
EVAL: undefined function PAARS
[Condition of type SYSTEM::SIMPLE-UNDEFINED-FUNCTION]
This is because we put the function into the value cell of the symbol instead of its function cell. It's easy to fix:
(setf (symbol-function 'paars) (symbol-function 'print-arguments-and-return-sum))
Since symbol-function is an accessor we can use setf with it. Now (paars 1 1) produces what it should.
While symbol-function is there for a reason, it is almost never used in real code. This is because it is superseded by several other Lisp features. One of them is the function special operator. It works like symbol-function, except that it doesn't evaluate its argument, and it returns the function that is currently bound to the symbol, which may not actually be the one in its function cell.
(function foo) may also be abbreviated as #'foo, which tremendously improves its usefulness. On the flip side, it's impossible to write
(setf (function paars) (function print-arguments-and-return-sum))
Fortunately it's possible to call functions stored in places other than the function cell of a symbol. A function designator is either a symbol (in which case its function cell is used) or the function object itself. funcall and apply are used to call functions by their function designators. Remember that the symbol paars now contains the same function in both its function cell and its value cell. Let's change its value cell so that the difference is apparent:
(setf paars #'+)
Now let's funcall it in different ways:
(funcall paars 1 2)   ;equivalent to (+ 1 2)
(funcall 'paars 1 2)  ;equivalent to (funcall (symbol-function 'paars) 1 2)
(funcall #'paars 1 2) ;equivalent to (paars 1 2)
The difference between the second and third examples is that if paars was temporarily bound (with flet or labels) to some other function, the third funcall would use this temporary function, while the second funcall would still use its function cell.
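Two small sketches may help here (the local definition, the use of apply, and the numbers are illustrative assumptions, building on the paars setup above). The first shows the flet distinction just described; the second shows apply, which was mentioned but not demonstrated: it behaves like funcall except that its final argument must be a list of the remaining arguments.

(flet ((paars (x y) (* x y)))
  (funcall #'paars 1 2)   ; => 2, uses the local FLET function
  (funcall 'paars 1 2))   ; prints 1 and 2, then returns 3: the global function cell is used

(apply #'+ 1 2 '(3 4))    ; => 10
(apply paars '(1 2))      ; => 3, same as (funcall paars 1 2), since the value cell holds #'+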
Copyright © 1996-2005 jsd
Because the earth is spinning and the air is moving, there are significant Coriolis effects. You’ll never understand how weather systems work unless you pay attention to this.
Based on their everyday indoor experience, people think they understand how air behaves:
However, when we consider the outdoor airflow patterns that Mother Nature creates, the story changes completely. In a chunk of air that is many miles across, a mile thick, and a mile away from the surface, there can be airflow patterns that last for hours or days, because there is so much more inertia and so much less friction. During these hours or days, the earth will rotate quite a bit, so Coriolis effects will be very important.
We are accustomed to seeing the rotation of storm systems depicted on the evening news, but you should remember that even a chunk of air that appears absolutely still on the weather map is rotating, because of the rotation of the earth as a whole. Any chunk of air that appears to rotate on the map must be rotating faster or slower than the underlying surface. (In particular, the air in a storm generally rotates faster, not slower.)
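For reference, the Coriolis effect discussed here can be written compactly; the following is standard physics notation added for clarity, not notation introduced by this chapter:

\vec{a}_{\mathrm{Cor}} = -2\,\vec{\Omega} \times \vec{v}

where \vec{\Omega} is the earth's rotation vector and \vec{v} is the velocity of the air parcel measured in the rotating frame. The resulting deflection is to the right of the motion in the northern hemisphere and to the left in the southern hemisphere.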
Note: In this chapter, I will use the § symbol to indicate words that are correct in the northern hemisphere but which need to be reversed in the southern hemisphere. Readers in the northern hemisphere can ignore the § symbol.
Suppose we start out in a situation where there is no wind, and where everything is in equilibrium. We choose the rotating Earth as our reference frame, which is a traditional and sensible choice. In this rotating frame we observe a centrifugal field, as well as the usual gravitational field, but the air has long ago distributed itself so that its pressure is in equilibrium with those fields.
Then suppose the pressure is suddenly changed, so there is a region where the pressure is lower than the aforementioned equilibrium pressure.
In some cases the low pressure region is roughly the same size in every direction, in which case it is called a low pressure center (or simply a low) and is marked with a big “L” on weather maps. In other cases, the low pressure region is quite long and skinny, in which case it is called a trough and is marked “trof” on the maps. See figure 20.1.
In either case, we have a pressure gradient. Each air parcel is subjected to an unbalanced force due to the pressure gradient.
Initially, each air parcel moves directly inward, in the direction of the pressure gradient, but whenever it moves it is subject to large sideways Coriolis forces, as shown in figure 20.2. Before long, the motion is almost pure counterclockwise§ circulation around the low, as shown in figure 20.3, and this pattern persists throughout most of the life of the low-pressure region. If you face downwind at locations such as the one marked A, the pressure gradient toward the left§ is just balanced by the Coriolis force to the right§, and the wind blows in a straight line parallel to the trough. At locations such as the one marked B, the pressure gradient is stronger than the Coriolis force. The net force deflects the air.
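The straight-line balance at a location such as A is what meteorologists call geostrophic balance. As a sketch (the symbols below are standard conventions, not definitions taken from this chapter), the magnitudes satisfy:

f\,v \;=\; \frac{1}{\rho}\,\lvert \nabla p \rvert, \qquad f = 2\,\Omega \sin\varphi

where v is the wind speed, ρ is the air density, |∇p| is the horizontal pressure gradient, Ω is the earth's rotation rate, and φ is the latitude. At a location such as B the pressure-gradient term exceeds f v, so there is a net inward force and the flow curves.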
When explaining the counterclockwise§ circulation pattern, it would be diametrically wrong to think it is “because” the Coriolis force is causing a “leftward§” deflection of the motion. In fact the Coriolis force is always rightward§. In the steady motion, as shown in figure 20.3, the Coriolis force is outward from the low pressure center, partially opposing the pressure gradient. The Coriolis force favors counterclockwise§ motion mainly during the initial infall as shown in figure 20.2.
Not all circulation is counterclockwise§; it is perfectly possible for the air to contain a vortex that spins the other way. It depends on scale: A system the size of a hurricane will always be cyclonic, whereas anything the size of tornado (or smaller) can go either way, depending on how it got started.
Terminology: In the northern hemisphere, counterclockwise circulation is called cyclonic, while in the southern hemisphere clockwise circulation is called cyclonic. So in both hemispheres cyclonic circulation is common, and anticyclonic circulation is less common.
Now we must must account for friction (in addition to the other forces just mentioned). The direction of the frictional force will be opposite to the direction of motion. This will reduce the circulatory velocity. This allows the air to gradually spiral inward.
The unsophisticated idea that air should flow from a high pressure region toward a low pressure region is only correct in the very lowest layers of the atmosphere, where friction is dominant. If it weren’t for friction, the low would never get filled in. At any reasonable altitude, friction is negligible — so the air aloft just spins around and around the low pressure region.
The astute reader may have noticed a similarity between the air in figure 20.2 and the bean-bag in figure 19.14. In one case, something gets pulled inwards and increases its circulatory motion “because” of Coriolis force, and in the other case something gets pulled inwards and increases its circulatory motion “because” of conservation of angular momentum. For a bean-bag, you can analyze it either way, and get the same answer. Also for a simple low-pressure center, you can analyze it either way, and get the same answer. For a trough, however, there is no convenient way to apply the conservation argument.
In any case, please do not get the idea that the air spins around a low partly because of conservation of angular momentum and partly because of the Coriolis force. Those are just two ways of looking at the same thing; they are not cumulative.
As mentioned above, whenever the wind is blowing in a more-or-less straight line, there must be low pressure on the left§ to balance the Coriolis force to the right§ (assuming you are facing downwind). In particular, the classic cold front wind pattern (shown in figure 20.4) is associated with a trough (as shown in figure 20.5). The force generated by the low pressure is the only thing that could set up the characteristic frontal flow pattern.
The wind shift is what defines the existence of the front. Air flows one way on one side of the front, and the other way on the other side (as shown in figure 20.4).
Usually the front is oriented approximately north/south, and the whole system is being carried west-to-east by the prevailing westerlies. In this case, we have the classic cold front scenario, as shown in figure 20.4, figure 20.5, and figure 20.6. Ahead of the front, warm moist air flows in from the south§. Behind the front, the cold dry air flows in from the north§. Therefore the temperature drops when the front passes. In between cold fronts, there is typically a non-frontal gradual warming trend, with light winds.
You can use wind patterns to your advantage when you fly cross-country. If there is a front or a pressure center near your route, explore the winds aloft forecasts. Start by choosing a route that keeps the low pressure to your left§. By adjusting your altitude and/or route you can often find a substantial tailwind (or at least a substantially decreased headwind).
By ancient tradition, any wind that is named for a cardinal direction is named for the direction from whence it comes. For example, a south wind (or southerly wind) blows from south to north. To avoid confusion, it is better to say “wind from the south” rather than “south wind”.
Almost everything else is named the other way. For example, an onshore breeze is blowing toward onshore points, while an offshore breeze is blowing toward offshore points. An aircraft on a southerly heading is flying toward the south. Physicists and mathematicians name all vectors by the direction toward which they point.
The arrow on a real-life weather vane points upwind, i.e. into the wind. By contrast, the arrows on a NOAA “850mb analysis” chart and similar charts point downwind, the way a velocity vector should point.
A warm front is in many ways the same as a cold front. It is certainly not the opposite of a cold front. In particular, it is also a trough, and has the same cyclonic flow pattern.
A warm front typically results when a piece of normal cold front gets caught and spun backwards by the east-to-west flow just north§ of a strong low pressure center, as shown in figure 20.7. That is, near the low pressure center, the wind circulating around the center is stronger than the overall west-to-east drift of the whole system.
If a warm front passes a given point, a cold front must have passed through a day or so earlier. The converse does not hold — cold front passage does not mean you should expect a warm front a day or so later. More commonly, the pressure is more-or-less equally low along most of the trough. There will be no warm front, and the cold front will be followed by fair weather until the next cold front.
Low pressure — including cold fronts and warm fronts — is associated with bad weather for a simple reason. The low pressure was created by an updraft that removed some of the air, carrying it up to the stratosphere. The air cools adiabatically as it rises. When it cools to its dew point, clouds and precipitation result. The latent heat of condensation makes the air warmer than its surroundings, strengthening the updraft.
The return flow down from the stratosphere (high pressure, very dry descending air, and no clouds) generally occurs over a wide area, not concentrated into any sort of front. There is no sudden wind shift, and no sudden change in temperature. This is not considered “significant weather” and is not marked on the charts at all.
Air shrinks when it gets cold. This simple idea has some important consequences. It affects your altimeter, as will be discussed in section 20.2.4. It also explains some basic facts about the winds aloft, which we will discuss now.
Most non-pilots are not very aware of the winds aloft. Any pilot who has ever flown westbound in the winter is keenly aware of some basic facts:
A typical situation is shown in figure 20.8. In January, the average temperature in Vero Beach, Florida, is about 15 Centigrade (59 Fahrenheit), while the average temperature in Oshkosh, Wisconsin is about minus 10 Centigrade (14 Fahrenheit). Imagine a day where surface winds are very weak, and the sea-level barometric pressure is the same everywhere, namely 1013 millibars (29.92 inches of mercury).
The pressure above Vero Beach will decrease with altitude. According to the International Standard Atmosphere (ISA), we expect the pressure to be 697 millibars at 10,000 feet.
Of course the pressure above Oshkosh will decrease with altitude, too, but it will not exactly follow the ISA, because the air is 25 centigrade colder than standard. Air shrinks when it gets cold. In the figure, I have drawn a stack of ten boxes at each site. Each box at VRB contains the same number of air molecules as the corresponding box at OSH.3 The pile of boxes is shorter at OSH than it is at VRB.
The fact that the OSH air column has shrunk (while the VRB air column has not) produces a big effect on the winds aloft. As we mentioned above, the pressure at VRB is 697 millibars at 10,000 feet. In contrast, the pressure at OSH is 672 millibars at the same altitude — a difference of 25 millibars.
This puts a huge force on the air. This force produces a motion, namely a wind of 28 knots out of the west. (Once again, the Coriolis effect is at work: during most of the life of this pressure pattern, the wind flows from west to east, producing a Coriolis force toward the south, which just balances the pressure-gradient force toward the north.) This is the average wind at 10,000 feet, everywhere between VRB and OSH.
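For readers who want to see roughly where such a number comes from, here is a back-of-the-envelope geostrophic-balance estimate. The density, latitude and station spacing below are assumed round values, not figures from the text:

V ≈ Δp / (ρ × f × L) ≈ 2500 Pa / (0.9 kg/m³ × 8.6×10⁻⁵ s⁻¹ × 1.9×10⁶ m) ≈ 17 m/s ≈ 33 knots

That is the same order as the 28 knots quoted; the exact value depends on the actual air density and temperature profile along the route.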
More generally, suppose surface pressures are reasonably uniform (which is usually the case) and temperatures are not uniform (which is also usually the case, especially in winter). If you have low temperature on your left§ and high temperature on your right§, you will have a tailwind aloft. The higher you go, the stronger the wind. This is called thermal gradient wind.
The wind speed will be proportional to the temperature gradient. Above a large airmass with uniform temperature, there will be no thermal gradient wind. However, if there is a front between a warm airmass and a cold airmass, there will be a large temperature change over a short distance, and this can lead to truly enormous winds aloft.
In July, OSH warms up considerably, to about 20 centigrade, while VRB only warms up slightly, to about 25 centigrade. This is why the thermal gradient winds are typically much weaker in summer than in winter — only about 5 knots on the average at 10,000 feet.
In reality, the temperature change from Florida to Wisconsin does not occur perfectly smoothly; there may be large regions of relatively uniform temperature separated by rather abrupt temperature gradients — cold fronts or warm fronts. Above the uniform regions the thermal gradient winds will be weak, while above the fronts they will be much stronger.
For simplicity, the foregoing discussion assumed the sea-level pressure was the same everywhere. It also assumed that the temperature profile above any given point was determined by the surface temperature and the “standard atmosphere” lapse rate. You don’t need to worry about such details; as a pilot you don’t need to calculate your own winds-aloft forecasts. The purpose here is to make the official forecasts less surprising, less confusing, and easier to remember.
Several different notions of “altitude” are used in aviation.
We start with true altitude, which is the simplest. This is what non-pilots think of as “the” altitude or elevation, namely height above sea level, as measured with an accurate ruler. True altitude is labelled MSL (referring to Mean Sea Level). For instance, when they say that the elevation of Aspen is 7820 feet MSL, that is a true altitude.
Before proceeding, we need to introduce the notion of international standard atmosphere (ISA). The ISA is a set of formulas that define a certain temperature and pressure as a function of altitude. For example, at zero altitude, the ISA temperature is 15 degrees centigrade, and the ISA pressure is 1013.25 millibars, or equivalently 29.92126 inches of mercury. As the altitude increases, the ISA temperature decreases at a rate of 6.5 degrees centigrade per kilometer, or very nearly 2 degrees C per thousand feet. The pressure at 18,000 feet is very nearly half of the sea-level pressure, and the pressure at 36,000 feet is somewhat less than one quarter of the sea-level pressure – so you can see the pressure is falling off slightly faster than exponentially. If you want additional details on this, a good place to look is the Aviation Formulary web site.
Remember, the ISA is an imaginary, mathematical construction. However, the formulas were chosen so that the ISA is fairly close to the average properties of the real atmosphere.
Now we can define the notion of pressure altitude. This is not really an altitude; it is just a way of describing pressure. Specifically, you measure the pressure, and then figure out how high you would have to go in the international standard atmosphere to find that pressure. That height is called the pressure altitude. One tricky thing is that low pressure corresponds to high pressure altitude and vice versa.
Pressure altitude (i.e. pressure) is worth knowing for several reasons. For one thing, if the pressure altitude is too high, you will have trouble breathing. The regulations on oxygen usage are expressed in terms of pressure altitude. Also, engine performance is sensitive to pressure altitude (among other factors). Thirdly, at high altitudes, pressure altitude is used for vertical separation of air traffic. This works fine, even though the pressure altitude may be significantly different from the true altitude (because on any given day, the actual atmosphere may be different from the ISA). The point is that two aircraft at the same pressure level will be at the same altitude, and two aircraft with “enough” difference in pressure altitude will have “enough” difference in true altitude.
To determine your pressure altitude, set the Kollsman window on your altimeter to the standard value: 29.92 inches, or equivalently 1013 millibars. Then the reading on the instrument will be the pressure altitude (plus or minus nonidealities, as discussed in section 20.2.3).
This brings us to the subject of calibrated altitude and indicated altitude . At low altitudes – when we need to worry about obstacle clearance, not just traffic separation – pressure altitude is not good enough, because the pressure at any given true altitude varies with the weather. The solution is to use indicated altitude, which is based on pressure (which is convenient to measure), but with most of the weather-dependence factored out. To determine your indicated altitude, obtain a so-called altimeter setting from an appropriate nearby weather-reporting station, and dial it into the Kollsman window on your altimeter. Then the reading on the instrument will be the indicated altitude. (Calibrated altitude is the same thing, but does not include nonidealities, whereas indicated altitude is disturbed by nonidealities of the sort discussed in section 20.2.3.)
The altimeter setting is arranged so that right at the reporting station, calibrated altitude agrees exactly with the station elevation. By extension, if you are reasonably close to the station, your calibrated altitude should be a reasonable estimate of your true altitude ... although not necessarily good enough, as discussed in section 20.2.3 and section 20.2.4.
Next we turn to the notion of absolute altitude. This is defined to be the height above the surface of the earth. Here is a useful mnemonic for keeping the names straight: the Absolute Altitude is what you see on the rAdAr altimeter. Absolute altitude is labelled “AGL” (above ground level). It is much less useful than you might have guessed. One major problem is that there may be trees and structures that stick up above the surface of the earth, and absolute altitude does not account for them. Another problem is that the surface of the earth is uneven, and if you tried to maintain a constant absolute altitude, it might require wild changes in your true altitude, which would play havoc with your energy budget. Therefore the usual practice in general aviation is to figure out a suitable indicated altitude and stick to it.
Another type of altitude is altitude above field elevation, where field means airfield, i.e. airport. This is similar to absolute altitude, but much more widely used. For instance, the traffic-pattern altitude might be specified as 1000 feet above field elevation. Also, weather reports give the ceiling in terms of height above field elevation. This is definitely not the same as absolute altitude, because if there are hills near the field, 1000 feet above the field might be zero feet above the terrain. Altitude above field elevation should be labelled “AFE” but much more commonly it is labelled “AGL”. If the terrain is hilly “AGL” is a serious misnomer.
Finally we come to the notion of density altitude. This is not really an altitude; it is just a way of describing density. The official definition works like this: you measure the density, and then figure out how high you would have to go in the ISA to find that density. That height is called the density altitude. Beware that low density corresponds to high density altitude and vice versa.
Operationally, you can get a decent estimate of the density altitude by measuring the pressure altitude and temperature, and then calculating the density altitude using the graphs or tables in your POH. This is only an estimate, because it doesn’t account for humidity, but it is close enough for most purposes.
Density altitude is worth knowing for several reasons. For one thing, the TAS/CAS relationship is determined by density. Secondly, engine performance depends strongly on density (as well as pressure and other factors). Obviously TAS and engine performance are relevant to every phase of flight – sometimes critically important.
As discussed in the previous section, an aircraft altimeter does not measure true altitude. It really measures pressure, which is related to altitude, but it’s not quite the same thing.
In order to estimate the true altitude, the altimeter depends on two factors: the pressure, and the altimeter setting in the Kollsman window. The altimeter setting is needed to correct for local variations in barometric pressure. You should set this on the runway before takeoff, and for extended flights you should get updated settings via radio. If you neglect this, you could find yourself at a too-low altitude if you fly into a region where the barometric pressure is lower. The mnemonic is: “High to low, look out below”.
Altimeters are not perfect. Even if the altimeter and airplane were inspected yesterday, and found to be within tolerances, the instrument is still subject to several sources of error.
The first item could be off in either direction, but the other items will almost certainly be off in the bad direction when you are descending. Also, if the airplane has been in service for a few months since the last inspection, the calibration could have drifted a bit. All in all, it would be perfectly plausible to find that your altimeter was off by 50 feet when parked on the ground, and off by 200 feet in descending flight over hilly terrain.
The altimeter measures a pressure and converts it to a so-called altitude. The conversion is based on the assumption that the actual atmospheric pressure varies with altitude the same way the standard atmosphere would. The pressure decreases by roughly 3.5% per thousand feet, more or less, depending on temperature.
The problem is that the instrument does not account for nonstandard temperature. Therefore if you set the altimeter to indicate correctly on the runway at a cold place, it will be inaccurate in flight. It will indicate that you are higher than you really are. This could get you into trouble if you are relying on the altimeter for terrain clearance. The mnemonic is HALT — High Altimeter because of Low Temperature.
As an example: Suppose you are flying an instrument approach into Saranac Lake, NY, according to the FAA-approved “Localizer Runway 23” procedure. The airport elevation is 1663 feet. You obtain an altimeter setting from the airport by radio, since you want your altimeter to be as accurate as possible when you reach the runway.
You also learn that the surface temperature is −32 Centigrade, which is rather cold but not unheard-of at this location. That means the atmosphere is about 45 C colder than the standard atmosphere. That in turn means the air has shrunk by about 16%. Throughout the approach, you will be too low by an amount that is 16% of your height above the airport.
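As a rough cross-check on that figure, using only the numbers already given: the thickness of an air column scales with its average absolute temperature, so

45 / 288 ≈ 0.16

i.e. being about 45 C colder than standard shrinks the column by about 16%.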
The procedure calls for crossing the outer marker at 3600 MSL and then descending to 2820 MSL, which is the Minimum Descent Altitude. That means that on final approach, you are supposed to be 1157 feet above the airport. If you blindly trust your altimeter, you will be 1157 “shrunken feet” above the airport, which is only about 980 real feet. You will be 180 real feet (210 shrunken feet) lower than you think. To put that number in perspective, remember that localizer approaches are designed to provide only 250 feet of obstacle clearance.4
You must combine this HALT error with the ordinary altimetry errors discussed in section 20.2.3. The combination means you could be 400 feet lower than what the altimeter indicates — well below the protected airspace. You could hit the trees on Blue Hill, 3.9 nm northeast of the airport.
Indeed, you may be wondering why there haven’t been lots of crashes already – especially since the Minimum Descent Altitude used to be lower (1117 feet, until mid-year 2001). Possible explanations include:
Even if people don’t “usually” crash, we still need to do something to increase the margin for error.
There is an obvious way to improve the situation: In cold weather, you need to apply temperature compensation to all critical obstacle-clearance altitudes.
You can do an approximate calculation in your head: If it’s cold, add 10%. If it’s really, really cold, add 20%. Approximate compensation is a whole lot better than no compensation.
The percentages here are applied to the height above the field, or, more precisely, to the height above the facility that is giving you your altimeter setting. In the present example, 20% of 1157 is about 230. Add that to 2820 to get 3050, which is the number you want to see on your altimeter during final approach. Note that this number, 3050, represents a peculiar mixture: 1663 real feet plus 1387 shrunken feet.
For better accuracy, you can use the following equation. The indicated altitude you want to see is:
Ai = F + (Ar − F) × (288.15 − λF) / (273.15 + Tf)
In this formula, F is the facility elevation, Ar is the true altitude you want to fly (so Ar−F is the height above the facility, in real feet), λ is the standard lapse rate (2 ∘C per thousand feet), Tf is the temperature at the facility, 273.15 is the conversion from Centigrade to absolute temperature (Kelvin), and 15 C = 288.15 K is the sea-level temperature of the standard atmosphere. The denominator (273.15 + Tf) is the absolute temperature observed at the facility, while the numerator (288.15−λ F) is what the absolute temperature would be in standard conditions.
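As a worked check, plugging in the Saranac Lake numbers from above (F = 1663 ft, Ar = 2820 ft, Tf = −32 C, λ = 2 C per thousand feet):

Ai = 1663 + 1157 × (288.15 − 3.3) / (273.15 − 32) = 1663 + 1157 × 1.18 ≈ 3030 ft

which agrees closely with the approximate 20% rule (3050 ft) and with the −30 and −40 columns of table 20.1.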
You might want to pre-compute this for a range of temperatures, and tabulate the results. An example is shown in table 20.1. Make a row for each of the critical altitudes, not just the Minimum Descent Altitude. Then, for each flight, find the column that applies to the current conditions and pencil-in each number where it belongs on the approach plate.
Facility Temp, ∘C     |  12  |   0  |  −10 |  −20 |  −30 |  −40
Crossing Outer Marker | 3600 | 3680 | 3760 | 3840 | 3940 | 4020
Minimum Descent Alt   | 2820 | 2860 | 2920 | 2960 | 3020 | 3080
It is dangerously easy to get complacent about the temperature compensation. You could live in New Jersey for years without needing to think about it – but then you could fly to Saranac Lake in a couple of hours, and get a nasty surprise.
The HALT correction is important whenever temperatures are below standard and your height above the terrain is a small fraction of your height above the facility that gave you your altimeter setting. This can happen enroute or on approach:
A parcel of air will have less density if it is warmer and/or more humid than the surrounding air.
If a parcel of air is less dense than the surrounding air, it will be subject to an upward force.5
As everyone knows, the tropics are hotter and more humid than the polar regions. Therefore there tends to be permanently rising air at the equator, and permanently sinking air at each pole.6 This explains why equatorial regions are known for having a great deal of cloudy, rainy weather, and why the polar regions have remarkably clear skies.
You might think that the air would rise at the equator, travel to the poles at high altitude, descend at the poles, and travel back to the equator at low altitude. The actual situation is a bit more complicated, more like what is shown in figure 20.9. In each hemisphere, there are actually three giant cells of circulation. Roughly speaking, there is rising air at the equator, descending air at 25 degrees latitude, rising air at 55 degrees latitude, and descending air at the poles. This helps explain why there are great deserts near latitude 25 degrees in several parts of the world.
The three cells are named as follows: the Hadley cell (after the person who first surmised that such things existed, way back in 1735), the Ferrel cell, and the polar cell. The whole picture is called the tricellular theory or tricellular model. It correctly describes some interesting features of the real-world situation, but there are other features that it does not describe correctly, so it shouldn’t be taken overly-seriously.
You may be wondering why there are three cells in each hemisphere, as opposed to one, or five, or some other number. The answer has to do with the size of the earth (24,000 miles in circumference), its speed of rotation, the thickness of the atmosphere (a few miles), the viscosity of the air, the brightness of the sun, and so forth. I don’t know how to prove that three is the right answer — so let’s just take it as an observed fact.
Low pressure near 55 degrees coupled with high pressure near 25 degrees creates a force pushing the air towards the north§ in the temperate regions. This force is mostly balanced by the Coriolis force associated with motion in the perpendicular direction, namely from west to east. As shown in figure 20.10, these are the prevailing westerlies that are familiar to people who live in these areas.
According to the same logic, low pressure near the equator coupled with high pressure near 25 degrees creates a force toward the equator. This force is mostly balanced by the Coriolis force associated with motion from east to west. These are the famous trade winds, which are typically found at low latitudes in each hemisphere, as shown in figure 20.10.
In days of old, sailing-ship captains would use the trade winds to travel in one direction and use the prevailing westerlies to travel in the other direction. The regions in between, where there was sunny weather but no prevailing wind, were named the horse latitudes. The region near the equator where there was cloudy weather and no prevailing wind was called the doldrums.
The boundaries of these great circulatory cells move with the sun. That is, they are found in more northerly positions in July and in more southerly positions in January. In certain locales, this can produce a tremendous seasonal shift in the prevailing wind, which is called a monsoon.7
Now let us add a couple more facts: the ocean has a much greater capacity to store heat than the land, so land temperatures respond far more quickly and strongly to seasonal changes in solar heating than ocean temperatures do.
As a consequence, in temperate latitudes, we find that in summer, the land is hotter than the ocean (other things, such as latitude, being constant), whereas in winter the land is colder than the ocean.
This dissimilar heating of land and water creates huge areas of low pressure, rising air, and cyclonic flow over the oceans in winter, along with a huge area of high pressure and descending air over Siberia. Conversely there are huge areas of high pressure, descending air, and anticyclonic flow over the oceans in summer.
These continental / oceanic patterns are superimposed on the primary circulation patterns. In some parts of the world, one or the other is dominant. In other parts of the world, there is a day-by-day struggle between them.
Very near the surface (where friction dominates), air flows from high pressure to low pressure, just as water flows downhill. Meanwhile, in the other 99% of the atmosphere (where Coriolis effects dominate) the motion tends to be perpendicular to the applied force. The air flows clockwise§ around a high pressure center and counterclockwise§ around a low pressure center, cold front, or warm front.
Although trying to figure out all the details of the atmosphere from first principles is definitely not worth the trouble, it is comforting to know that the main features of the wind patterns make sense. They do not arise by magic; they arise as consequences of ordinary physical processes like thermal expansion and the Coriolis effect.
If you really want to know what the winds are doing at 10,000 feet, get the latest 700 millibar constant pressure analysis chart and have a look. These charts used to be nearly impossible for general-aviation pilots to obtain, but the situation is improving. Now you can get them by computer network or fax. On a trip of any length, this is well worth the trouble when you think of the time and fuel you can save by finding a good tailwind.
A few rules of thumb: eastbound in the winter, fly high. Westbound in the winter, fly lower. In the summer, it doesn’t matter nearly as much. In general, try to keep low pressure to your left§ and high pressure to your right§.
Ada Programming/Type System
Ada's type system allows the programmer to construct powerful abstractions that represent the real world, and to provide valuable information to the compiler, so that the compiler can find many logic or design errors before they become bugs. It is at the heart of the language, and good Ada programmers learn to use it to great advantage. Four principles govern the type system:
- Strong typing: types are incompatible with one another, so it is not possible to mix apples and oranges. There are, however, ways to convert between types.
- Static typing: type checked while compiling, this allows type errors to be found earlier.
- Abstraction: types represent the real world or the problem at hand; not how the computer represents the data internally. There are ways to specify exactly how a type must be represented at the bit level, but we will defer that discussion to another chapter.
- Name equivalence, as opposed to structural equivalence used in most other languages. Two types are compatible if and only if they have the same name; not if they just happen to have the same size or bit representation. You can thus declare two integer types with the same ranges that are totally incompatible, or two record types with exactly the same components, but which are incompatible.
Types are incompatible with one another. However, each type can have any number of subtypes, which are compatible with one another, and with their base type.
Predefined types
There are several predefined types, but most programmers prefer to define their own, application-specific types. Nevertheless, these predefined types are very useful as interfaces between libraries developed independently. The predefined library, obviously, uses these types too.
Most of these types are predefined in the Standard package; the storage-related types come from System and System.Storage_Elements:
- Integer: This type covers at least the range −2**15+1 .. +2**15−1 (RM 3.5.4 (21) (Annotated)). The Standard also defines Natural and Positive subtypes of this type.
- Float: There is only a very weak implementation requirement on this type (RM 3.5.7 (14) (Annotated)); most of the time you would define your own floating-point types, and specify your precision and range requirements.
- Duration: A fixed point type used for timing. It represents a period of time in seconds (RM A.1 (43) (Annotated)).
- Character, Wide_Character, Wide_Wide_Character: A special form of Enumerations. There are three predefined kinds of character types: 8-bit characters (called Character), 16-bit characters (called Wide_Character), and 32-bit characters (called Wide_Wide_Character). Character has been present since the first version of the language (Ada 83), Wide_Character was added in Ada 95, while the type Wide_Wide_Character is available with Ada 2005.
- String, Wide_String, Wide_Wide_String: Three indefinite array types, of Character, Wide_Character, and Wide_Wide_Character respectively. The standard library contains packages for handling strings in three variants: fixed length (Ada.Strings.Fixed), with varying length below a certain upper bound (Ada.Strings.Bounded), and unbounded length (Ada.Strings.Unbounded). Each of these packages has a Wide_ and a Wide_Wide_ variant.
- Boolean: A Boolean in Ada is an Enumeration of False and True with special semantics.
- Address: An address in memory.
- Storage_Offset: An offset, which can be added to an address to obtain a new address. You can also subtract one address from another to get the offset between them. Together, Address, Storage_Offset and their associated subprograms provide for address arithmetic.
- Storage_Count: A subtype of Storage_Offset which cannot be negative, and represents the memory size of a data structure (similar to C's size_t).
- Storage_Element: In most computers, this is a byte. Formally, it is the smallest unit of memory that has an address.
- Storage_Array: An array of Storage_Elements without any meaning, useful when doing raw memory access.
The Type Hierarchy
Types are organized hierarchically. A type inherits properties from types above it in the hierarchy. For example, all scalar types (integer, enumeration, modular, fixed-point and floating-point types) have operators "<", ">" and arithmetic operators defined for them, and all discrete types can serve as array indexes.
Here is a broad overview of each category of types; please follow the links for detailed explanations. In parentheses are equivalences in C and Pascal, for readers familiar with those languages.
- Signed Integers (int, INTEGER)
- Signed Integers are defined via the range of values needed.
- Unsigned Integers (unsigned, CARDINAL)
- Unsigned Integers are called Modular Types. Apart from being unsigned they also have wrap-around functionality.
- Enumerations (enum, char, bool, BOOLEAN)
- Ada Enumeration types are a separate type family.
- Floating point (float, double, REAL)
- Floating point types are defined by the digits needed, the relative error bound.
- Ordinary and Decimal Fixed Point (DECIMAL)
- Fixed point types are defined by their delta, the absolute error bound.
- Arrays ( [ ], ARRAY [ ] OF, STRING )
- Arrays with both compile-time and run-time determined size are supported.
- Record (struct, class, RECORD OF)
- A record is a composite type that groups one or more fields.
- Access (*, ^, POINTER TO)
- Ada's Access types may be more than just a simple memory address.
- Task & Protected (no equivalence in C or Pascal)
- Task and Protected types allow the control of concurrency
- Interfaces (no equivalence in C or Pascal)
- New in Ada 2005, these types are similar to the Java interfaces.
Classification of Types
The types of this hierarchy can be classified as follows.
Specific vs. Class-wide
Operations of specific types are non-dispatching, those on class-wide types are dispatching.
New types can be declared by deriving from specific types; primitive operations are inherited by derivation. You cannot derive from class-wide types.
Constrained vs. Unconstrained
type AU is array (I range <>) of ...            -- unconstrained
type R (X: Discriminant [:= Default]) is ...    -- unconstrained
By giving a constraint to an unconstrained subtype, a subtype or object becomes constrained:
subtype RC is R (Value);   -- constrained subtype of R
OC: R (Value);             -- constrained object of anonymous constrained subtype of R
OU: R;                     -- unconstrained object
Declaring an unconstrained object is only possible if a default value is given in the type declaration above. The language does not specify how such objects are allocated. GNAT allocates the maximum size, so that size changes that might occur with discriminant changes present no problem. Another possibility is implicit dynamic allocation on the heap and deallocation followed by a re-allocation when the size changes.
Definite vs. Indefinite
type T (<>) is ...                     -- indefinite
type AU is array (I range <>) of ...   -- indefinite
type RI (X: Discriminant) is ...       -- indefinite
Definite subtypes allow the declaration of objects without initial value, since objects of definite subtypes have constraints that are known at creation-time. Object declarations of indefinite subtypes need an initial value to supply a constraint; they are then constrained by the constraint delivered by the initial value.
OT: T := Expr;                          -- some initial expression (object, function call, etc.)
OA: AU := (3 => 10, 5 => 2, 4 => 4);    -- index range is now 3 .. 5
ORI: RI := Expr;                        -- again some initial expression as above ("OR" would clash with the reserved word, hence the name ORI)
Unconstrained vs. Indefinite
Note that unconstrained subtypes are not necessarily indefinite, as can be seen above with R when a default is supplied for the discriminant: it is then a definite unconstrained subtype.
Concurrency Types
The Ada language uses types for one more purpose in addition to classifying data + operations. The type system integrates concurrency (threading, parallelism). Programmers will use types for expressing the concurrent threads of control of their programs.
The core pieces of this part of the type system, the task types and the protected types are explained in greater depth in a section on tasking.
Limited Types
Limiting a type means disallowing assignment. The “concurrency types” described above are always limited. Programmers can define their own types to be limited, too, like this:
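As a minimal sketch (the package name Bank and the Balance component are illustrative, not taken from the text), a user-defined limited type might look like this:

package Bank is
   type Account is limited private;   -- clients cannot assign or compare Accounts
private
   type Account is limited record
      Balance : Integer := 0;
   end record;
end Bank;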
You can learn more in the limited types chapter.
Defining new types and subtypes
You can define a new type with the following syntax:
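In outline it looks like this, where T stands for whatever name you choose:

type T is ...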
followed by the description of the type, as explained in detail in each category of type.
Formally, the above declaration creates a type and its first subtype named T. The type itself, correctly called the "type of T", is anonymous; the RM refers to it as T (in italics), but often speaks sloppily about the type T. But this is an academic consideration; for most purposes, it is sufficient to think of T as a type. For scalar types, there is also a base type called T'Base, which encompasses all values of T.
For signed integer types, the type of T comprises the (complete) set of mathematical integers. The base type is a certain hardware type, symmetric around zero (except for possibly one extra negative value), encompassing all values of T.
As explained above, all types are incompatible; thus:
type Integer_1 is range 1 .. 10;
type Integer_2 is range 1 .. 10;
A : Integer_1 := 8;
B : Integer_2 := A;   -- illegal!
is illegal, because Integer_1 and Integer_2 are different and incompatible types. It is this feature which allows the compiler to detect logic errors at compile time, such as adding a file descriptor to a number of bytes, or a length to a weight. The fact that the two types have the same range does not make them compatible: this is name equivalence in action, as opposed to structural equivalence. (Below, we will see how you can convert between incompatible types; there are strict rules for this.)
Creating subtypes
You can also create new subtypes of a given type, which will be compatible with each other, like this:
type Integer_1 is range 1 .. 10;
subtype Integer_2 is Integer_1 range 7 .. 11;        -- bad
subtype Integer_3 is Integer_1'Base range 7 .. 11;   -- OK
A : Integer_1 := 8;
B : Integer_3 := A;   -- OK
The declaration of Integer_2 is bad because the constraint 7 .. 11 is not compatible with Integer_1; it raises Constraint_Error at subtype elaboration time. Integer_1 and Integer_3 are compatible because they are both subtypes of the same type, namely Integer_1'Base.
It is not necessary that the subtype ranges overlap, or be included in one another. The compiler inserts a run-time range check when you assign A to B; if the value of A, at that point, happens to be outside the range of Integer_3, the program raises Constraint_Error.
There are a few predefined subtypes which are very useful:
subtype Natural  is Integer range 0 .. Integer'Last;
subtype Positive is Integer range 1 .. Integer'Last;
Derived types
A derived type is a new, full-blown type created from an existing one. Like any other type, it is incompatible with its parent; however, it inherits the primitive operations defined for the parent type.
type Integer_1 is range 1 .. 10;
type Integer_2 is new Integer_1 range 2 .. 8;
A : Integer_1 := 8;
B : Integer_2 := A;   -- illegal!
Here both types are discrete; it is mandatory that the range of the derived type be included in the range of its parent. Contrast this with subtypes. The reason is that the derived type inherits the primitive operations defined for its parent, and these operations assume the range of the parent type. Here is an illustration of this feature:
procedure Derived_Types is package Pak is type Integer_1 is range 1 .. 10; procedure P (I: in Integer_1); -- primitive operation, assumes 1 .. 10 type Integer_2 is new Integer_1 range 8 .. 10; -- must not break P's assumption -- procedure P (I: in Integer_2); inherited P implicitly defined here end Pak; package body Pak is -- omitted end Pak; use Pak; A: Integer_1 := 4; B: Integer_2 := 9; begin P (B); -- OK, call the inherited operation end Derived_Types;
When we call P (B), the parameter B is converted to Integer_1; this conversion of course passes since the set of acceptable values for the derived type (here, 8 .. 10) must be included in that of the parent type (1 .. 10). Then P is called with the converted parameter.
Consider however a variant of the example above:
procedure Derived_Types is package Pak is type Integer_1 is range 1 .. 10; procedure P (I: in Integer_1; J: out Integer_1); type Integer_2 is new Integer_1 range 8 .. 10; end Pak; package body Pak is procedure P (I: in Integer_1; J: out Integer_1) is begin J := I - 1; end P; end Pak; use Pak; A: Integer_1 := 4; X: Integer_1; B: Integer_2 := 8; Y: Integer_2; begin P (A, X); P (B, Y); end Derived_Types;
When P (B, Y) is called, both parameters are converted to Integer_1. Thus the range check on J (here, 7) in the body of P will pass. However, on return, parameter Y is converted back to Integer_2 and the range check on Y will of course fail.
With the above in mind, you will see why in the following program Constraint_Error will be raised at run time.
procedure Derived_Types is package Pak is type Integer_1 is range 1 .. 10; procedure P (I: in Integer_1; J: out Integer_1); type Integer_2 is new Integer_1'Base range 8 .. 12; end Pak; package body Pak is procedure P (I: in Integer_1; J: out Integer_1) is begin J := I - 1; end P; end Pak; use Pak; B: Integer_2 := 11; Y: Integer_2; begin P (B, Y); end Derived_Types;
Subtype categories
Ada supports various categories of subtypes which have different abilities. Here is an overview in alphabetical order.
Anonymous subtype
A subtype which does not have a name assigned to it. Such a subtype is created with a variable declaration:
X : String (1 .. 10) := (others => ' ');
Here, (1 .. 10) is the constraint. This variable declaration is equivalent to:
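Presumably something along these lines, where the subtype name is of course arbitrary:

subtype Anonymous_String_Type is String (1 .. 10);
X : Anonymous_String_Type := (others => ' ');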
Base type
In Ada, all types are anonymous and only subtypes may be named. For scalar types, there is a special subtype of the anonymous type, called the base type, which is nameable with the 'Base attribute. The base type comprises all values of the first subtype. Some examples:
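For instance, with declarations along these lines (the ranges and literals are only illustrative):

type Int is range 0 .. 100;
type Enum is (A, B, C, D, E);
subtype Short is Enum range A .. C;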
The base type Int'Base is a hardware type selected by the compiler that comprises the values of Int. Thus it may have the range −2**7 .. 2**7−1 or −2**15 .. 2**15−1 or any other such type. Enum'Base is the same as Enum, since the first subtype of an enumeration type already covers all of its literals. Short'Base also holds the literals of Enum that lie outside the range of Short.
Constrained subtype
A subtype of an indefinite subtype that adds constraints. The following example defines a 10 character string sub-type.
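For example (using the same String_10 name that appears later in this chapter):

subtype String_10 is String (1 .. 10);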
You cannot partially constrain an unconstrained subtype:
type My_Array is array (Integer range <>, Integer range <>) of Some_Type;
-- subtype Constr is My_Array (1 .. 10, Integer range <>);   -- illegal
subtype Constr is My_Array (1 .. 10, -100 .. 200);
Constraints for all indices must be given, the result is necessarily a definite subtype.
Definite subtype
Objects of definite subtypes may be declared without additional constraints.
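For instance, both of the following are legal without an initial value, because the subtypes are definite (the bound 80 is just an arbitrary example):

N : Natural;            -- scalar subtypes are definite
S : String (1 .. 80);   -- the constraint makes this definite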
Indefinite subtype
An indefinite subtype is a subtype whose size is not known at compile-time but is dynamically calculated at run-time. An indefinite subtype does not by itself provide enough information to create an object; an additional constraint or explicit initialization expression is necessary in order to calculate the actual size and therefore create the object.
X : String := "This is a string";
X is an object of the indefinite (sub)type String. Its constraint is derived implicitly from its initial value. X may change its value, but not its bounds.
It should be noted that it is not necessary to initialize the object from a literal. You can also use a function. For example:
X : String := Ada.Command_Line.Argument (1);
This statement reads the first command-line argument and assigns it to X.
Named subtype
A subtype which has a name assigned to it. “First subtypes” are created with the keyword type (remember that types are always anonymous, the name in a type declaration is the name of the first subtype), others with the keyword subtype. For example:
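For instance, the first subtype discussed below could be declared as:

type Count_to_Ten is range 1 .. 10;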
Count_to_Ten is the first subtype of a suitable integer base type. However, if you would like to use this as an index constraint on String, the following declaration is illegal:
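That is, a declaration like the following is rejected:

subtype Ten_Characters is String (Count_to_Ten);   -- illegal: Count_to_Ten is not a subtype of Positive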
This is because String has Positive as index, which is a subtype of Integer (these declarations are taken from package Standard):
subtype Positive is Integer range 1 .. Integer'Last;
type String is array (Positive range <>) of Character;
So you have to use the following declarations:
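Presumably:

subtype Count_to_Ten is Integer range 1 .. 10;
subtype Ten_Characters is String (Count_to_Ten);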
Now Ten_Characters is the name of that subtype of String which is constrained to Count_To_Ten. You see that posing constraints on types versus subtypes has very different effects.
Unconstrained subtype
A subtype of an indefinite subtype that does not add a constraint only introduces a new name for the original subtype.
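For example:

subtype My_String is String;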
My_String and String are interchangeable.
Qualified expressions
In most cases, the compiler is able to infer the type of an expression; for example:
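A small sketch of such a case (the enumeration itself is illustrative):

type Enum is (A, B, C);
E : Enum := A;   -- the compiler infers that A here is the Enum literal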
Here the compiler knows that A is a value of the type Enum. But consider:
procedure Bad is
   type Enum_1 is (A, B, C);
   procedure P (E : in Enum_1) is ...   -- omitted
   type Enum_2 is (A, X, Y, Z);
   procedure P (E : in Enum_2) is ...   -- omitted
begin
   P (A);   -- illegal: ambiguous
end Bad;
The compiler cannot choose between the two versions of P; both would be equally valid. To remove the ambiguity, you use a qualified expression:
P (Enum_1'(A)); -- OK
As seen in the following example, this syntax is often used when creating new objects. If you try to compile the example, it will fail with a compilation error since the compiler will determine that 256 is not in range of Byte.
with Ada.Text_IO; procedure Convert_Evaluate_As is type Byte is mod 2**8; type Byte_Ptr is access Byte; package T_IO renames Ada.Text_IO; package M_IO is new Ada.Text_IO.Modular_IO (Byte); A : constant Byte_Ptr := new Byte'(256); begin T_IO.Put ("A = "); M_IO.Put (Item => A.all, Width => 5, Base => 10); end Convert_Evaluate_As;
Type conversions
Data do not always come in the format you need them. You must, then, face the task of converting them. As a true multi-purpose language with a special emphasis on "mission critical", "system programming" and "safety", Ada has several conversion techniques. The most difficult part is choosing the right one, so the following list is sorted in order of utility. You should try the first one first; the last technique is a last resort, to be used if all others fail. There are also a few related techniques that you might choose instead of actually converting the data.
Since the most important aspect is not the result of a successful conversion, but how the system will react to an invalid conversion, all examples also demonstrate faulty conversions.
Explicit type conversion
An explicit type conversion looks much like a function call; it does not use the tick (apostrophe, ') like the qualified expression does.
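Schematically, it is just the target type's name with the expression in parentheses; a minimal sketch:

F : Float   := 2.5;
I : Integer := Integer (F);   -- checked conversion, written like a function call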
The compiler first checks that the conversion is legal, and if it is, it inserts a run-time check at the point of the conversion; hence the name checked conversion. If the conversion fails, the program raises Constraint_Error. Most compilers are very smart and optimise away the constraint checks; so, you need not worry about any performance penalty. Some compilers can also warn that a constraint check will always fail (and optimise the check with an unconditional raise).
Explicit type conversions are legal:
- between any two numeric types
- between any two subtypes of the same type
- between any two types derived from the same type (note special rules for tagged types)
- between array types under certain conditions (see RM 4.6(24.2/2..24.7/2))
- and nowhere else
(The rules become more complex with class-wide and anonymous access types.)
I: Integer := Integer (10);     -- Unnecessary explicit type conversion
J: Integer := 10;               -- Implicit conversion from universal integer
K: Integer := Integer'(10);     -- Use the value 10 of type Integer: qualified expression (qualification not necessary here)
This example illustrates explicit type conversions:
with Ada.Text_IO; procedure Convert_Checked is type Short is range -128 .. +127; type Byte is mod 256; package T_IO renames Ada.Text_IO; package I_IO is new Ada.Text_IO.Integer_IO (Short); package M_IO is new Ada.Text_IO.Modular_IO (Byte); A : Short := -1; B : Byte; begin B := Byte (A); -- range check will lead to Constraint_Error T_IO.Put ("A = "); I_IO.Put (Item => A, Width => 5, Base => 10); T_IO.Put (", B = "); M_IO.Put (Item => B, Width => 5, Base => 10); end Convert_Checked;
Explicit conversions are possible between any two numeric types: integers, fixed-point and floating-point types. If one of the types involved is a fixed-point or floating-point type, the compiler not only checks for the range constraints (thus the code above will raise Constraint_Error), but also performs any loss of precision necessary.
Example 1: the loss of precision causes the procedure to only ever print "0" or "1", since P / 100 is an integer and is always zero or one.
with Ada.Text_IO; procedure Naive_Explicit_Conversion is type Proportion is digits 4 range 0.0 .. 1.0; type Percentage is range 0 .. 100; function To_Proportion (P : in Percentage) return Proportion is begin return Proportion (P / 100); end To_Proportion; begin Ada.Text_IO.Put_Line (Proportion'Image (To_Proportion (27))); end Naive_Explicit_Conversion;
Example 2: we use an intermediate floating-point type to guarantee the precision.
with Ada.Text_IO; procedure Explicit_Conversion is type Proportion is digits 4 range 0.0 .. 1.0; type Percentage is range 0 .. 100; function To_Proportion (P : in Percentage) return Proportion is type Prop is digits 4 range 0.0 .. 100.0; begin return Proportion (Prop (P) / 100.0); end To_Proportion; begin Ada.Text_IO.Put_Line (Proportion'Image (To_Proportion (27))); end Explicit_Conversion;
You might ask why you should convert between two subtypes of the same type. An example will illustrate this.
subtype String_10 is String (1 .. 10);
X: String := "A line long enough to make the example valid";
Slice: constant String := String_10 (X (11 .. 20));
Slice has bounds 1 and 10, whereas X (11 .. 20) has bounds 11 and 20.
Change of Representation
Type conversions can be used for packing and unpacking of records or arrays.
type Unpacked is record
   -- any components
end record;

type Packed is new Unpacked;

for Packed use record
   -- component clauses for some or for all components
end record;
P: Packed;
U: Unpacked;
P := Packed (U);     -- packs U
U := Unpacked (P);   -- unpacks P
Checked conversion for non-numeric types
The examples above all revolved around conversions between numeric types; it is possible to convert between any two numeric types in this way. But what happens between non-numeric types, e.g. between array types or record types? The answer is two-fold:
- you can convert explicitly between a type and types derived from it, or between types derived from the same type,
- and that's all. No other conversions are possible.
Why would you want to derive a record type from another record type? Because of representation clauses. Here we enter the realm of low-level systems programming, which is not for the faint of heart, nor is it useful for desktop applications. So hold on tight, and let's dive in.
Suppose you have a record type which uses the default, efficient representation. Now you want to write this record to a device, which uses a special record format. This special representation is more compact (uses fewer bits), but is grossly inefficient. You want to have a layered programming interface: the upper layer, intended for applications, uses the efficient representation. The lower layer is a device driver that accesses the hardware directly and uses the inefficient representation.
package Device_Driver is type Size_Type is range 0 .. 64; type Register is record A, B : Boolean; Size : Size_Type; end record; procedure Read (R : out Register); procedure Write (R : in Register); end Device_Driver;
The compiler chooses a default, efficient representation for Register. For example, on a 32-bit machine, it would probably use three 32-bit words, one for A, one for B and one for Size. This efficient representation is good for applications, but at one point we want to convert the entire record to just 8 bits, because that's what our hardware requires.
package body Device_Driver is type Hardware_Register is new Register; -- Derived type. for Hardware_Register use record A at 0 range 0 .. 0; B at 0 range 1 .. 1; Size at 0 range 2 .. 7; end record; function Get return Hardware_Register; -- Body omitted procedure Put (H : in Hardware_Register); -- Body omitted procedure Read (R : out Register) is H : Hardware_Register := Get; begin R := Register (H); -- Explicit conversion. end Read; procedure Write (R : in Register) is begin Put (Hardware_Register (R)); -- Explicit conversion. end Write; end Device_Driver;
In the above example, the package body declares a derived type with the inefficient, but compact representation, and converts to and from it.
This illustrates that type conversions can result in a change of representation.
View conversion, in object-oriented programming
Within object-oriented programming you have to distinguish between specific types and class-wide types.
With specific types, only conversions to ancestors are possible and, of course, are checked. During the conversion, you do not "drop" any components that are present in the derived type and not in the parent type; these components are still present, you just don't see them anymore. This is called a view conversion.
There are no conversions to derived types (where would you get the further components from?); extension aggregates have to be used instead.
type Parent_Type is tagged null record;
type Child_Type is new Parent_Type with null record;
Child_Instance : Child_Type;
-- View conversion from the child type to the parent type:
Parent_View : Parent_Type := Parent_Type (Child_Instance);
Since, in object-oriented programming, an object of child type is an object of the parent type, no run-time check is necessary.
With class-wide types, conversions to ancestor and child types are possible and are checked as well. These conversions are also view conversions, no data is created or lost.
procedure P (Parent_View : Parent_Type'Class) is
   -- View conversion to the child type:
   One : Child_Type := Child_Type (Parent_View);
   -- View conversion to the class-wide child type:
   Two : Child_Type'Class := Child_Type'Class (Parent_View);
This view conversion involves a run-time check to see if Parent_View is indeed a view of an object of type Child_Type. In the second case, the run-time check accepts objects of type Child_Type but also of any type derived from Child_Type.
View renaming
A renaming declaration does not create any new object and performs no conversion; it only gives a new name to something that already exists. Performance is optimal since the renaming is completely done at compile time. We mention it here because it is a common idiom in object oriented programming to rename the result of a view conversion.
type Parent_Type is tagged record
   <components>;
end record;
type Child_Type is new Parent_Type with record
   <further components>;
end record;
Child_Instance : Child_Type;
Parent_View : Parent_Type'Class renames Parent_Type'Class (Child_Instance);
Parent_View is not a new object, but another name for Child_Instance viewed as the parent, i.e. only the parent components are visible, while the further child components are hidden.
Address conversion
Ada's access type is not just a memory location (a thin pointer). Depending on the implementation and the access type used, the access might keep additional information (a fat pointer). For example GNAT keeps two memory addresses for each access to an indefinite object — one for the data and one for the constraint information (Size, First, Last).
If you want to convert an access to a simple memory location you can use the package System.Address_To_Access_Conversions. Note however that an address and a fat pointer cannot be converted reversibly into one another.
The address of an array object is the address of its first component. Thus the bounds get lost in such a conversion.
type My_Array is array (Positive range <>) of Something;
A: My_Array (50 .. 100);
-- A'Address = A(A'First)'Address
Unchecked conversion
One of the great criticisms of Pascal was "there is no escape". The reason was that sometimes you have to convert the incompatible. For this purpose, Ada has the generic function Unchecked_Conversion:
generic
   type Source (<>) is limited private;
   type Target (<>) is limited private;
function Ada.Unchecked_Conversion (S : Source) return Target;
Unchecked_Conversion will bit-copy the source data and reinterpret it under the target type without any checks. It is your chore to make sure that the requirements on unchecked conversion as stated in RM 13.9 (Annotated) are fulfilled; if not, the result is implementation dependent and may even lead to abnormal data. Use the 'Valid attribute after the conversion to check the validity of the data in problematic cases.
A function call to (an instance of) Unchecked_Conversion will copy the source to the destination. The compiler may also do a conversion in place (every instance has the convention Intrinsic).
To use Unchecked_Conversion you need to instantiate the generic.
In the example below, you can see how this is done. When run, the example will output "A = -1, B = 255". No error will be reported, but is this the result you expect?
with Ada.Text_IO; with Ada.Unchecked_Conversion; procedure Convert_Unchecked is type Short is range -128 .. +127; type Byte is mod 256; package T_IO renames Ada.Text_IO; package I_IO is new Ada.Text_IO.Integer_IO (Short); package M_IO is new Ada.Text_IO.Modular_IO (Byte); function Convert is new Ada.Unchecked_Conversion (Source => Short, Target => Byte); A : constant Short := -1; B : Byte; begin B := Convert (A); T_IO.Put ("A = "); I_IO.Put (Item => A, Width => 5, Base => 10); T_IO.Put (", B = "); M_IO.Put (Item => B, Width => 5, Base => 10); end Convert_Unchecked;
There is of course a range check in the assignment B := Convert (A);. Thus if B were defined as B : Byte range 0 .. 10;, Constraint_Error would be raised.
If the copying of the result of Unchecked_Conversion is too much waste in terms of performance, then you can try overlays, i.e. address mappings. By using overlays, both objects share the same memory location. If you assign a value to one, the other changes as well. The syntax is:

   for Target'Address use expression;

where expression defines the address of the source object.
While overlays might look more elegant than Unchecked_Conversion, you should be aware that they are even more dangerous and have even greater potential for doing something very wrong. For example, if Source'Size < Target'Size and you assign a value to Target, you might inadvertently write into memory allocated to a different object.
You have to take care also of implicit initializations of objects of the target type, since they would overwrite the actual value of the source object. The Import pragma with convention Ada can be used to prevent this, since it avoids the implicit initialization, RM B.1 (Annotated).
The example below does the same as the example from "Unchecked Conversion".
with Ada.Text_IO;

procedure Convert_Address_Mapping is

   type Short is range -128 .. +127;
   type Byte  is mod 256;

   package T_IO renames Ada.Text_IO;
   package I_IO is new Ada.Text_IO.Integer_IO (Short);
   package M_IO is new Ada.Text_IO.Modular_IO (Byte);

   A : aliased Short;
   B : aliased Byte;
   for B'Address use A'Address;
   pragma Import (Ada, B);

begin
   A := -1;

   T_IO.Put ("A = ");
   I_IO.Put (Item => A, Width => 5, Base => 10);
   T_IO.Put (", B = ");
   M_IO.Put (Item => B, Width => 5, Base => 10);
end Convert_Address_Mapping;
Export / Import
Just for the record: There is still another method using the Export and Import pragmas. However, since this method completely undermines Ada's visibility and type concepts even more than overlays, it has no place here in this language introduction and is left to experts.
Elaborated Discussion of Types for Signed Integer Types
As explained before, a type declaration such as

   type T is range 1 .. 10;

declares an anonymous type T and its first subtype T (the anonymous type is conventionally written in italics to distinguish it from its first subtype). The anonymous type T encompasses the complete set of mathematical integers. Static expressions and named numbers make use of this fact.
All numeric integer literals are of type Universal_Integer. They are converted to the appropriate specific type where needed. Universal_Integer itself has no operators.
Some examples with static named numbers:
S1 : constant := Integer'Last + Integer'Last;        -- "+" of Integer
S2 : constant := Long_Integer'Last + 1;              -- "+" of Long_Integer
S3 : constant := S1 + S2;                            -- "+" of root_integer
S4 : constant := Integer'Last + Long_Integer'Last;   -- illegal
Static expressions are evaluated at compile time on the appropriate types with no overflow checks, i.e. mathematically exact (limited only by computer memory). The result is then implicitly converted to Universal_Integer.
The literal 1 in S2 is of type Universal_Integer and is implicitly converted to Long_Integer. S3 implicitly converts the summands to root_integer, performs the calculation and converts the result back to Universal_Integer. S4 is illegal because it mixes two different types. You can however write this as
S5: constant := Integer'Pos (Integer'Last) + Long_Integer'Pos (Long_Integer'Last); -- "+" of root_integer
where the Pos attributes convert the values to Universal_Integer, which are then further implicitly converted to root_integer, added, and the result converted back to Universal_Integer.
root_integer is the anonymous greatest integer type representable by the hardware. It has the range System.Min_Int .. System.Max_Int. All integer types are rooted at root_integer, i.e. derived from it.
Universal_Integer can be viewed as root_integer'Class.
During run time, computations of course are performed with range checks and overflow checks on the appropriate subtype. Intermediate results may however exceed the range limits. Thus with I, J, K of the subtype T above, the following code will return the correct result:
I := 10;
J := 8;
K := (I + J) - 12;

-- I := I + J;  --  range check would fail, leading to Constraint_Error
Real literals are of type Universal_Real, and similar rules as the ones above apply accordingly.
Relations between types
Types can be made from other types. Array types, for example, are made from two types, one for the array's index and one for the array's components. An array then expresses an association, namely between each value of the index type and a value of the component type.
type Color is (Red, Green, Blue);
type Intensity is range 0 .. 255;
type Colored_Point is array (Color) of Intensity;
The type Color is the index type and the type Intensity is the component type of the array type Colored_Point. See array.
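As a small illustration of that association (the object name and the particular values below are made up for the example), a value of this array type pairs every Color with an Intensity:

CP : Colored_Point := (Red => 255, Green => 10, Blue => 0);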
See also
Ada Reference Manual
- 3.2.1 Type Declarations (Annotated)
- 3.3 Objects and Named Numbers (Annotated)
- 3.7 Discriminants (Annotated)
- 3.10 Access Types (Annotated)
- 4.9 Static Expressions and Static Subtypes (Annotated)
- 13.9 Unchecked Type Conversions (Annotated)
- 13.3 Operational and Representation Attributes (Annotated)
- Annex K (informative) Language-Defined Attributes (Annotated)
Topics covered: Multielectron atoms and electron configurations
Instructor: Catherine Drennan, Elizabeth Vogel Taylor
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.
PROFESSOR: OK. As you're settling into your seats, why don't we take 10 more seconds on the clicker question here. All right, so this is a question that you saw on your problem-set, so this is how many electrons would we expect to see in a single atom in the 2 p state. So, let's see what you said here. Six. And the correct answer is, in fact, six. And most of you got that, about 75% of you got that right. So, let's consider some people got it wrong, however, and let's see where that wrong answer might have come from, or actually, more importantly, let's see how we can all get to the correct answer. So if we say that we have a 2 p orbital here, that means that we can have how many different complete orbitals have a 2 for an n, and a p as its l value? three.
So, we can have the 2 p x, 2 p y, and 2 p z orbitals. Each of these orbitals can have two electrons in them, so we get two electrons here, here, and here. So, we end up with a total of six electrons that are possible that have that 2 p orbital value.
So this is a question that, hopefully, if we see another one like this we'll get a 100% on, because you've already seen this in your problem-set as much as you're going to see it, and you're seen it in class as much as you're going to see it. So if you're still having trouble with this, this is something you want to bring up in your recitation. And the idea behind this, of course, is that we know that every electron has to have its own distinct set of four quantum numbers. So that means that if we have three orbitals, we can only have six electrons in those complete three orbitals.
All right, so today we're going to fully have our discussion focused on multi-electron atoms. We started talking about these on Wednesday, and what we're going to start with is considering specifically the wave functions for multi-electron atoms. So, the wave functions for multi-electron atoms. Then we'll move on to talking about the binding energies, and we'll specifically talk about how that differs from the binding energies we saw of hydrogen atoms. We talked about that quite in depth, but there are some differences now that we have more than one electron in the atom.
Then something that you probably have a lot of experience with is talking about electron configuration and writing out the electron configuration. But we'll go over that, particularly some exceptions, when we're filling in electron configurations, and how we would go about doing that for positive ions, which follow a little bit of a different procedure.
And if we have time today, we'll start in on the photo-electron spectroscopy, if not, that's where we'll start when we come back on Wednesday.
So, what we saw just on Wednesday, in particular, but also as we have been discussing the Schrodinger equation for the hydrogen atom, is that this equation can be used to correctly predict the atomic structure of hydrogen, and also all of the energy levels of the different orbitals in hydrogen, which matched up with what we observed, for example, when we looked at the hydrogen atom emission spectra. And what we can do is we can also use the Schrodinger equation to make these accurate predictions for any other atom that we want to talk about in the periodic table.
The one problem that we run into is as we go to more and more atoms on the table, as we add on electrons, the Schrodinger equation is going to get more complicated. So here I've written for the hydrogen atom that deceptively simple form of the Schrodinger equation, where we don't actually write out the Hamiltonian operator, but you remember that's a series of second derivatives, so we have a differential equation that were actually dealing with.
If you think about what happens when we go from hydrogen to helium, now instead of one electron, so three position variables, we have to describe two electrons, so now we have six position variables that we need to plug into our Schrodinger equation. So similarly, as we now move up only one more atom in the table, so to an atomic number of three or lithium, now we're going from six variables all the way to nine variables.
So you can see that we're starting to have a very complicated equation, and it turns out that it's mathematically impossible to even solve the exact Schrodinger equation as we move up to higher numbers of electrons.
So, what we say here is we need to take a step back here and come up with an approximation that's going to allow us to think about using the Schrodinger equation when we're not just talking about hydrogen or one electron, but when we have these multi-electron atoms.
The most straightforward way to do this is to make what's called a one electron orbital approximation, and when you do you get out what are called Hartree orbitals, and what this means is that instead of considering the wave function as a function, for example, for helium as six different variables, what we do is we break it up and treat each electron has a separate wave function and say that our assumption is that the total wave function is equal to the product of the two individual wave functions.
So, for example, for helium, we can break it up into wave function for it the r, theta, and phi value for electron one, and multiply that by the wave function for the r, theta, and phi value for electron number two. So essentially what we're saying is we have a wave function for electron one, and a wave function for electron two.
We know how to write that in terms of the state numbers, so it would be 1, 0, 0, because we're talking about the ground state. We're always talking about the ground state unless we specify that we're talking about an excited state. And we have the spin quantum number as plus 1/2 for electron one, and minus 1/2 for the electron two. It's arbitrary which one I assigned to which, but we know that we have to have each of those two magnetic spin quantum numbers in order to have the distinct four letter description of an electron. We know that it's not enough just to describe the orbital by three quantum numbers, we need that fourth number to fully describe an electron.
And when we describe this in terms of talking about chemistry terminology, we would call the first one the 1 s, and 1 is in parentheses because we're talking about the first electron there, and we would multiply it by the wave function for the second one, which is also 1 s, but now we are talking about that second electron.
We can do the exact same thing when we talk about lithium, but now instead of breaking it up into two wave functions, we're breaking it up into three wave functions because we have three electrons.
So, the first again is the 1 s 1 electron. We then have the 1 s 2 electron, and what is our third electron going to be? Yeah. So it's going to be the 2 s 1 electron. So we can do this essentially for any atom we want, we just have more and more wave functions that we're breaking it up to as we get to more and more electrons.
And we can also write this in an even simpler form, which is what's called electron configuration, and this is just a shorthand notation for these electron wave functions. So, for example, again we see hydrogen is 1 s 1, helium we say is 1 s 2, or 1 s squared, so instead of writing out the 1 s 1 and the 1 s 2, we just combine it as 1 s squared, lithium is 1 s 2, 2 s 1.
So writing out electron configurations I realize is something that a lot of you had experience with in high school, you're probably -- many of you are very comfortable doing it, especially for the more straightforward atoms. But what's neat to kind of think about is if you think about what a question might have been in high school, which is please write the electron configuration for lithium, now we can also answer what sounds like a much more impressive, and a much more complicated question, which would be write the shorthand notation for the one electron orbital approximation to solve the Schrodinger equation for lithium.
So essentially, that is the exact same thing. The electronic configuration, all it is is the shorthand notation for that one electron approximation for the Schrodinger equation for lithium. So, if you're at hanging your exam from high school on the fridge and you want to make it look more impressive, you could just rewrite the question as that, and essentially you're answering the same thing. But now, hopefully, we understand where that comes from, why it is that we use the shorthand notation.
So, let's write this one electron orbital approximation for beryllium, that sounds like a pretty complicated question, but hopefully we know that it's not at all, it's just 1 s 2, and then 2 s 2. And we can go on and on down the table. So, for example, for boron, now we're dealing with 1 s 2, then 2 s 2, and now we have to move into the p orbital so we go to 2 p 1.
So that's a little bit of an introduction into electron configuration. We'll get into some spots where it gets a little trickier, a little bit more complicated later in class. But that's an idea of what it actually means to talk about electron configuration.
So now that we can do this, we can compare and think about, we know how to consider wave functions for individual electrons in multi-electron atoms using those Hartree orbitals or the one electron wave approximations. So let's compare what some of the similarities and differences are between hydrogen atom orbitals, which we spent a lot of time studying, and now these one electron orbital approximations for these multi-electron atoms.
So, as an example, let's take argon, I've written up the electron configuration here, and let's think about what some of the similarities might be between wave functions in argon and wave functions for hydrogen. So the first is that the orbitals are similar in shape. So for example, if you know how to draw an s orbital for a hydrogen atom, then you already know how to draw the shape of an s orbital or a p orbital for argon.
Similarly, if we were to look at the radial probability distributions, what we would find is that there's an identical nodal structure. So, for example, if we look at the 2 s orbital of argon, it's going to have the same amount of nodes and the same type of nodes that the 2 s orbital for hydrogen has. So how many nodes does the 2 s orbital for hydrogen have? It has one node, right, because if we're talking about nodes it's just n minus 1 is total nodes, so you would just say 2 minus 1 equals 1 node for the 2 s orbital. And how many of those nodes are angular nodes? zero. L equals 0, so we have zero angular nodes, that means that they're all radial nodes. So what we end up with is one radial node for the 2 s orbital of hydrogen, and we can apply that for argon or any other multi-electron atom here, we also have one radial node for the 2 s orbital of argon.
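A compact way to write the node-counting rule used in this paragraph, with n the principal quantum number and l the angular momentum quantum number:

\[
\text{total nodes} = n - 1, \qquad \text{angular nodes} = l, \qquad \text{radial nodes} = n - 1 - l .
\]

For a 2 s orbital, n = 2 and l = 0, so there is exactly one radial node and no angular node, in hydrogen and in argon alike.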
But there are also some differences that we need to keep in mind, and that will be the focus of a lot of the lecture today. One of the main differences is that when you're talking about multi-electron orbitals, they're actually smaller than the corresponding orbital for the hydrogen atom.
We can think about why that would be. Let's consider again an s orbital for argon, so let's say we're looking at the 1 s orbital for argon. What is the pull from the nucleus from argon going to be equal to? What is the charge of the nucleus? Does anyone know, it's a quick addition problem here. Yeah, so it's 18. So z equals 18, so the nucleus is going to be pulling at the electron with a Coulombic attraction that has a charge of plus 18, if we're talking about the 1 s electron or the 1 s orbital in argon. It turns out, and we're going to get the idea of shielding, so it's not going to actually feel that full plus 18, but it'll feel a whole lot more than it will just feel in terms of a hydrogen atom where we only have a nuclear charge of one.
So because we're feeling a stronger attractive force from the nucleus, we're actually pulling that electron in closer, which means that the probability squared of where the electron is going to be is actually a smaller radius. So when we talk about the size of multi-electron orbitals, they're actually going to be smaller because they're being pulled in closer to the nucleus because of that stronger attraction because of the higher charge of the nucleus in a multi-electron atom compared to a hydrogen atom.
The other main difference that we're really going to get to today is that in multi-electron atoms, orbital energies depend not just on the shell, which is what we saw before, not just on the value of n, but also on the angular momentum quantum number. So they also depend on the sub-shell or l. And we'll really get to see a picture of that, and I'll be repeating that again and again today, because this is something I really want everyone to get firmly into their heads.
So, let's now take a look at the energies. We looked at the wave functions, we know the other part of solving the Schrodinger equation is to solve for the binding energy of electrons to the nucleus, so let's take a look at those. And there again is another difference between multi-electron atom and the hydrogen atoms. So when we talk about orbitals in multi-electron atoms, they're actually lower in energy than the corresponding h atom orbitals. And when we say lower in energy, of course, what we mean is more negative. Right, because when we think of an energy diagram, that lowest spot there is going to have the lowest value of the binding energy or the most negative value of binding.
So, let's take a look here at an example of an energy diagram for the hydrogen atom, and we can also look at a energy diagram for a multi-electron atom, and this is just a generic one here, so I haven't actually listed energy numbers, but I want you to see the trend. So for example, if you look at the 1 s orbital here, you can see that actually it is lower in the case of the multi-electron atom than it is for the hydrogen atom.
You see the same thing regardless of which orbital you're looking at. For example, for the 2 s, again what you see is that the multi-electron atom, its 2 s orbital is lower in energy than it is for the hydrogen. The same thing we see for the 2 p. Again the 2 p orbitals for the multi-electron atom, lower in energy than for the hydrogen atom.
But there's something you'll note here also when I point out the case of the 2 s versus the 2 p, which is what I mentioned that I would be saying again and again, which is when we look at the hydrogen atom, the energy of all of the n equals 2 orbitals is exactly the same. That's what we call degenerate orbitals, they're the same energy. But when we get to the multi-electron atoms, we see that actually the p orbitals are higher in energy than the s orbitals. So we'll see specifically why it is that the s orbitals are lower in energy. We'll get to discussing that, but what I want to point out here again is the fact that instead of just being dependent on n, the energy level is dependent on both n and l. n is no longer the sole determining factor for energy; energy depends on both n and on l.
And we can look at precisely why that is by looking at the equations for the energy levels for a hydrogen atom versus the multi-electron atom. So, for a hydrogen atom, and actually for any one electron atom at all, this is our energy or our binding energy. This is what came out of solving the Schrodinger equation, we've seen this several times before that the energy is equal to negative z squared times the Rydberg constant over n squared. Remember the z squared, that's just the atomic number or the charge on the nucleus, and we can figure that out for any one electron atom at all. And an important thing to note is in terms of what that physically means, so physically the binding energy is just the negative of the ionization energy. So if we can figure out the binding energy, we can also figure out how much energy we have to put into our atom in order to eject or ionize an electron.
We can also look at the energy equation now for a multi-electron atom. And the big difference is right here in this term. So instead of being equal to negative z squared, now we're equal to negative z effective squared times r h all over n squared.
So when we say z effective, what we're talking about is instead of z, the charge on the nucleus, we're talking about the effective charge on a nucleus. So for an example, even if a nucleus has a charge of 7, but the electron we're interested in only feels the charge as if it were a 5, then what we would say is that the z effective for the nucleus is 5 for that electron. And we'll talk about this more, so if this is not completely intuitive, we'll see why in a second.
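In symbols, the two binding-energy expressions being compared here are

\[
E_n = -\,\frac{Z^{2}\,R_{\mathrm{H}}}{n^{2}} \quad \text{(hydrogen or any one-electron ion)},
\qquad
E = -\,\frac{Z_{\mathrm{eff}}^{2}\,R_{\mathrm{H}}}{n^{2}} \quad \text{(multi-electron atom)},
\]

with the Rydberg constant R_H = 2.18 × 10⁻¹⁸ J, and with the binding energy equal to the negative of the ionization energy.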
So the main idea here is z effective is not z, so don't try to plug one in for the other, they're absolutely different quantities in any case when we're not talking about a 1 electron atom. And the point that I also want to make is the way that they differ, z effective actually differs from the total charge in the nucleus due to an idea called shielding. So, shielding happens when you have more than one electron in an atom, and the reason that it's happening is because you're actually canceling out some of that positive charge from the nucleus or that attractive force with a repulsive force between two electrons. So if you have some charge in the nucleus, but you also have repulsion with another electron, the net attractive charge that a given electron going to feel is actually less than that total charge in the nucleus.
And shielding is a little bit of a misnomer because it's not actually that one electron is blocking the charge from another electron, it's more like you're canceling out a positive attractive force with a negative repulsive force. But shielding is a good way to think about it, and actually, that's what we'll use in this class to sort of visualize what's happening when we have many electrons in an atom and they're shielding each other. Shielding is the term that's used, it brings up a certain image in our mind, and even though that's not precisely what's going on, it's a very good way to visualize what we're trying to think about here.
So let's take two cases of shielding if we're talking about, for example, the helium, a helium nucleus or a helium atom. So what is the charge on a helium nucleus? What is z? Yup, so it's plus 2. So the charge is actually just equal to z, we can write plus 2, or you can write plus 2 e, e just means the absolute value of the charge on an electron. When we plug it into equations we just use the number, the e is assumed there.
So, let's think of what we could have if we have two electrons in a helium atom that are shielded in two extreme ways. So, in the first extreme way, let's consider that our first electron is at some distance very far away from the nucleus, we'll call this electron one, and our second electron is, in fact, much, much closer to the nucleus, and let's think of the idea of shielding in more of the classical sense where we're actually blocking some of that positive charge. So if we have total and complete shielding where that can actually negate a full positive charge, because remember our nucleus is plus 2, one of the electrons is minus 1, so if it totally blocks it, all we would have left from the nucleus is an effective charge of plus 1.
So in our first case, our first extreme case, would be that the z effective that is felt by electron number 1, is going to be plus 1.
So, what we can do is figure out what we would expect the binding energy of that electron to be in the case of this total shielding. And remember again, the binding energy physically is the negative of the ionization energy, and that's actually how you can experimentally check to see if this is actually correct. And that's going to be equal to negative z effective squared times r h over n squared.
So, let's plug in these values and see what we would expect to see for the energy. So it would be negative 1 squared times r h all over 1 squared, since our z effective we're saying is 1, and n is also equal to 1, because we're in the ground state here so we're talking about a 1 s orbital.
So if we have a look at what the answer would be, this looks very familiar. We would expect our binding energy to be a negative 2 . 1 8 times 10 to the negative 18 joules. This is actually what the binding energy is for hydrogen atom, and in fact, that makes sense because in our extreme case where we have total shielding by the second electron of the electron of interest, it's essentially seeing the same nuclear force that an electron in a hydrogen atom would see.
All right. Let's consider now the second extreme case, or extreme case b, for our helium atom. Again we have the charge of the nucleus on plus 2, but let's say this time the electron now is going to be very, very close to the nucleus. And let's say our second electron now is really far away, such that it's actually not going to shield any of the nuclear charge at all from that first electron. So what we end up saying is that the z effective or the effective charge that that first electron feels is now going to be plus 2.
Again, we can just plug this into our equation, so if we write in our numbers now saying that z effective is equal to 2, we find that we get negative 2 squared r h, all divided again by 1 squared -- we're still talking about a 1 s orbital here. And if we do that calculation, what we find out is that the binding energy, in this case where we have no shielding, is negative 8 . 7 2 times 10 to the negative 18 joules.
So, let's compare what we've just seen as our two extremes. So in extreme case a, we saw that z effective was 1. This is what we call total shielding. The electron completely canceled out its equivalent of charge from the nucleus, such that we only saw a z effective of 1. In extreme case b, we had a z effective of 2, so essentially what we had was no shielding at all. We said that that second electron was so far out of the picture, that it had absolutely no effect on the charge felt by that first electron.
So, we can actually think about now, we know the extreme cases, but what is the reality, and the reality is if we think about the ionization energy, and we measure it experimentally, we find that it's 3 . 9 4 times 10 to the negative 18 joules, and what you can see is that falls right in the middle between the two ionization energies that we would expect for the extreme cases. And this is absolutely confirming that what is happening is what we would expect to happen, because we would expect the case of reality is that, in fact, some shielding is going on, but it's not going to be total shielding, but at the same time it's not going to be no shielding at all.
And if we experimentally know what the ionization energy is, we actually have a way to find out what the z effective will be equal to. And we can use this equation here, this is just the equation for the ionization energy, which is the same thing as saying the negative of the binding energy that's equal to z effective squared r h over n squared.
So, what we can do instead of talking about the ionization energy, because that's one of our known quantities, is we can instead solve so that we can find z effective. So, if we just rearrange this equation, what we find is that z effective is equal to n squared times the ionization energy, all over the Rydberg constant, and the square root of this. So the square root of n squared I E over r h.
So what's our value for n here? one. Yup, that's right. And then what's our value for ionization energy? Yup. So it's just that ionization energy that we have experimentally measured, 3 . 9 4 times 10 to the negative 18 joules. We put all of this over the Rydberg constant, which is 2 . 1 8 times 10 to the negative 18 joules, and we want to raise this all to the 1/2. So what we end up seeing is that the z effective is equal to positive 1 . 3 4.
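Written out as a single calculation, with the numbers quoted in the lecture:

\[
Z_{\mathrm{eff}}
= \sqrt{\frac{n^{2}\,IE}{R_{\mathrm{H}}}}
= \sqrt{\frac{1^{2} \times 3.94 \times 10^{-18}\ \mathrm{J}}{2.18 \times 10^{-18}\ \mathrm{J}}}
\approx 1.34 .
\]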
So, this is what we find the actual z effective is for an electron in the helium atom. Does this seem like a reasonable number? Yeah? Who says yes, raise your hand if this seems reasonable. Does anyone think this seems not reasonable? OK. How can we check, for example, if it does or if it doesn't seem reasonable. Well, the reason, the way that we can check it is just to see if it's in between our two extreme cases. We know that it has to be more than 1, because even if we had total shielding, we would at least feel is the effective of 1. We know that it has to be equal to less than 2, because even if we had absolutely no shielding at all, the highest z effective we could have is 2, so it makes perfect sense that we have a z effective that falls somewhere in the middle of those two.
So, let's look at another example of thinking about whether we get an answer out that's reasonable. So we should be able to calculate a z effective for any atom that we want to talk about, as long as we know what that ionization energy is. And I'm not expecting you to do that calculation here, because it involves a calculator, and maybe a piece of paper as well. But what you should be able to do is take a look at a list of answers for what we're saying z effective might be, and determine which ones are possible versus which ones are not possible.
So, why don't you take a look at this and tell me which are possible for a 2 s electron in a lithium atom where z is going to be equal to three? Let's do 10 more seconds on that.
OK, great. So, the majority of you got it right. There are some people that are a little bit confused still on why this makes sense, so, let's just think about this a little bit more. So now we're saying that z is equal to 3, so if, for example, we had total shielding by the other two electrons, if they totally canceled out one unit of positive charge each in the nucleus, what we would end up with is we started with 3 and then we would subtract a charge of 2, so we would end up with a plus 1 z effective from the nucleus. So our minimum that we're going to see is that the smallest we can have for a z effective is going to be equal to 1. So any of the answers that said a z effective of . 3 9 or . 8 7 actually aren't possible, because even if we saw total shielding, the minimum z effective we would see is 1. And then I think it looks like most people understood that four was not a possibility. Of course, if we saw no shielding at all what we would end up with is a z effective of 3.
So again, when we check these, what we want to see is that our z effective falls in between the two extreme cases that we could envision for shielding. And again, just go back and look at this and think about this, this should make sense if you kind of look at those two extreme examples, so even if it doesn't make entire sense in the 10 seconds you have to answer a clicker question right now, make sure this weekend you can go over it and be able to predict if you saw a list of answers or if you calculate your own answer on the p-set, whether or not it's right or it's wrong, you should be able to qualitatively confirm whether you have a reasonable or a not reasonable answer after you do the calculation part.
All right. So now that we have a general idea of what we're talking about with shielding, we can now go back and think about why it is that the orbitals are ordered in the order that they are. We know that the orbitals for multi-electron atoms depend both on n and on l. But we haven't yet addressed why, for example, a 2 s orbital is lower in energy than the 2 p orbital, or why, for example, a 3 s orbital is lower in energy than a 3 p, which in turn is lower than a 3 d orbital.
So let's think about shielding in trying to answer why, in fact, it's those s orbitals that are the lowest in energy. And when we make these comparisons, one thing I want to point out is that we need to keep the principal quantum number constant, so we're talking about a certain state, so we could talk about the n equals 2 state, or the n equals 3 state. And when we're talking about orbitals in the same state, what we find is that orbitals that have lower values of l can actually penetrate closer to the nucleus.
This is an idea we introduced on Wednesday when we were looking at the radial probability distributions of p orbitals versus s orbitals versus d orbitals. But now it's going to make more sense because in that case we were just talking about single electron atoms, and now we're talking about a case where we actually can see shielding. So what is actually going to matter is how closely that electron can penetrate to the nucleus, and what I mean by penetrate to the nucleus is whether there is a decent amount of probability density very close to the nucleus.
So, if we superimpose, for example, the 2 s radial probability distribution over the 2 p, what we see is there's this little bit of probability density in the 2 s, but it is significant, and that's closer to the nucleus than it is for the 2 p. And remember, this is in complete opposition to what we call the size of the orbitals, because we know that the 2 p is actually a smaller orbital. For example, when we're talking about radial probability distributions, the most probable radius is closer into the nucleus than it is for the s orbital.
But what's important is not where that most probable radius is when we're talking about the z effective it feels, what's more important is how close the electron actually can get the nucleus. And for the s electron, since it can get closer, what we're going to see is that s electrons are actually less shielded than the corresponding p electrons. They're less shielded because they're closer to the nucleus, they feel a greater z effective.
We can see the same thing when we compare p electrons to d electrons, or p and d orbitals. I've drawn the 3 p and the 3 d orbital here, and again, what you can see is that the p electrons are going to be able to penetrate closer to the nucleus because of the fact that there's this bit of probability density that's significantly closer to the nucleus than it is for the 3 d orbital.
And if we go ahead and superimpose the 3 s on top of the 3 p, you can see that the 3 s actually has some bit of probability density that gets even closer to the nucleus than the 3 p did. So that's where that trend comes from where the s orbital is lower in energy than the p orbital, which is lower than the d orbital.
So now that we have this idea of shielding and we can talk about the differences in the radial probability distributions, we can consider more completely why, for example, if we're talking about lithium, we write the electron configuration as 1 s 2, 2 s 1, and we don't instead jump from the 1 s 2 all the way to a p orbital. So the most basic answer that doesn't explain why is just to say well, the s orbital is lower in energy than the p orbital, but we now have a more complete answer, so we can actually describe why that is. And what we're actually talking about again is the z effective. So that z effective felt by the 2 p is going to be less than the z effective felt by the 2 s.
And another way to say this, I think it's easiest to look at just the fact that there's some probability density very close to the nucleus, but what we can actually do is average the z effective over this entire radial probability distribution, and when we find that, we find that it does turn out that the average of the z effective over the 2 p is going to be less than that of the 2 s.
So we know that we can relate the z effective to the actual energy level of each of those orbitals, and we can do that using this equation here where it's negative z effective squared r h over n squared, we're going to see that again and again. And it turns out that if we have, for example for the 2 s, a larger z effective than for the 2 p, and we plug in a large value here in the numerator, that means we're going to end up with a very large negative number. So in other words a very low energy is what we're going to have when we talk about the orbitals -- the energy of the 2 s orbital is going to be less than the energy of the 2 p orbital.
Another way to say that it's going to be less, so you don't get confused by the fact that this term is in the numerator here: there is that negative sign, so a larger z effective gives a bigger negative number, and that bigger negative number is what gives us the lower energy there.
All right, so let's go back to electrons configurations now that we have an idea of why the orbitals are listed in the energy that they are listed under, why, for example, the 2 s is lower than the 2 p. So now we can go back and think about filling in these electron configurations for any atom.
I think most of you are familiar with the Aufbau or the building up principle, you probably have seen it quite a bit in high school, and this is the idea that we're filling up our energy states, again, which depend on both n and l, one electron at a time starting with that lowest energy and then working our way up into higher and higher orbitals.
And when we follow the Aufbau principle, we have to follow two other rules. One is the Pauli exclusion principle, we discussed this on Wednesday. So this is just the idea that the most electrons that you can have in a single orbital is two electrons. That makes sense because we know that every single electron has to have its own distinct set of four quantum numbers, the only way that we can do that is to have a maximum of two spins in any single orbital or two electrons per orbital.
We also need to follow Hund's rule, this is that a single electron enters each state before it enters a second state. And by state we just mean orbital, so if we're looking at the p orbitals here, that means that a single electron goes in x, and then it will go in the z orbital before a second one goes in the x orbital. This intuitively should make a lot of sense, because we know we're trying to minimize electron repulsions to keep things in as low an energy state as possible, so it makes sense that we would put one electron in each orbital first before we double up in any orbital.
And the third fact that we need to keep in mind is that spins remain parallel prior to adding a second electron in any of the orbitals. So by parallel we mean they're either both spin up or they're both spin down -- remember that's our spin quantum number, that fourth quantum number. And the reason for this comes out of solving the relativistic version of the Schrodinger equation, so unfortunately it's not as intuitive as knowing that we want to fill separate before we double up a degenerate orbital, but you just need to keep this in mind and you need to just memorize the fact that you need to be parallel before you double up in the orbital.
So, we'll see how this works in a second. So let's do this considering, for example, what it would look like if we were to write out the electron configuration for oxygen where z is going to be equal to 8. So what we're doing is filling in those eight electrons following the Aufbau principle, so our first electron is going to go in the 1 s, and then we have no other options for other orbitals that are at that same energy, so we put the second electron in the 1 s as well. Then we go up to the 2 s, and we have two electrons that we can fill in the 2 s. And now we get the p orbitals, remember we want to fill up 1 orbital at a time before we double up, so we'll put one in the 2 p x, then one in the 2 p z, and then one in the 2 p y.
At this point, we have no other choice but to double up before going to the next energy level, so we'll put a second one in the 2 p x. And I arbitrarily chose to put it in the 2 p x, we also could have put it in the 2 p y or the 2 p z, it doesn't matter where you double up, they're all the same energy.
So if we think about what we would do to actually write out this configuration, we just write the energy levels that we see here or the orbital approximations. So if we're talking about oxygen, we would say that it's 1 s 2, then we have 2 s 2, and then we have 2 p, and our total number of electrons in the p orbitals are four.
So it's OK to not specify. I want to point out, whether you're in the p x, the p y, or the p z, unless a question specifically asks you to specify the m sub l, which occasionally will happen, but if it doesn't happen you just write it like this. But if, in fact, you are asked to specify the m sub l's, then we would have to write it out more completely, which would be the 1 s 2, the 2 s 2, and then we would say 2 p x 2, 2 p z 1, and 2 p y 1.
So again, in general, just go ahead and write it out like this, but if we do ask you to specify you should be able to know that the p orbital separates into these three -- the p sub-shell separates into these three orbitals.
So let's do a clicker question on assigning electron configurations using the Aufbau principle. So why don't you go ahead and identify the correct electron configuration for carbon, and I'll tell you that z is equal to 6 here. And in terms of doing this for your homework, I actually want to mention that in the back page of your notes I attached a periodic table that does not have electron configurations on them. It's better to practice doing electron configurations when you cannot actually see the electron configurations. And this is the same periodic table that you're going to get in your exams, so it's good to practice doing your problem-sets with that periodic table so you're not relying on having the double check right there of seeing what the electron configuration is. So, let's do 10 seconds on this problem here.
OK, great. So this might be our best clicker question yet. Most people were able to identify the correct electron configuration here. Some people, the next most popular answer with 5%, which is a nice low number, wanted to put two in the 2 p x before they moved on. Remember we have to put one in each degenerate orbital before we double up on any orbital, so just keep that rule in mind that we would fill one in each p orbital before we add the second one. But it looks like you guys are all experts here on doing these electron configurations.
So, let's move on to some more complicated electron configurations. So, for example, we can move to the next periods in the periodic table. When we talk about a period, we're just talking about that principle quantum number, so period 2 means that we're talking about starting with the 2 s orbitals, period 3 starts with, what we're now filling into the 3 s orbitals here. So if we're talking about the third period, that starts with sodium and it goes all the way up to argon.
So if we write the electron configuration for sodium, which you can try later -- hopefully you would all get it correctly -- you see that this is the electron configuration here, 1 s 2, 2 s 2, 2 p 6, and now we're going into that third shell, 3 s 1.
And I want to point out the difference between core electrons and valence electrons here. If we look at this configuration, what we say is all of the electrons in these inner shells are what we call core electrons. The core electrons tend not to be involved in much chemistry in bonding or in reactions. They're very deep and held very tightly to the nucleus, so we can often lump them together, and instead of writing them all out separately, we can just write the equivalent noble gas that has that configuration. So, for example, for sodium, we can instead write neon and then 3 s 1.
So the 3 s 1, or any of the other electrons that are in the outer-most shell, those are what we call our valence electrons, and those are where all the excitement happens. That's what we see are involved in bonding. It makes sense, right, because they're the furthest away from the nucleus, they're the ones that are most willing to be involved in some chemistry or in some bonding, or those are the orbitals that are most likely to accept an electron from another atom, for example. So the valence electrons, those are the exciting ones. We want to make sure we have a full picture of what's going on there.
So, no matter whether or not you write out the full form here, or the noble gas configuration where you write ne first or whatever the corresponding noble gas is to the core electrons, we always write out the valence electrons here. So for sodium, again, we can write n e and then 3 s 1. We can go all the way down, magnesium, aluminum, all the way to this noble gas, argon, which would be n e and then 3 s 2, 3 p 6.
Now we can think about the fourth period, and the fourth period is where we start to run into some exceptions, so this is where things get a teeny bit more complicated, but you just need to remember the exceptions and then you should be OK, no matter what you're asked to write. So for the fourth period, now we're into the 4 s 1 for potassium here. And what we notice when we get to the third element in the fourth period is that we go 4 s 2 and then we're back to the 3 d's.
So if you look at the energy diagram, what we see is that the 4 s orbitals are actually just a teeny bit lower in energy -- they're just ever so slightly lower in energy than the 3 d orbitals. You can see that as you fill up your periodic table, it's very clear. But also we'll tell you a mnemonic device to keep that in mind, so you always remember and get the orbital energy straight. But it just turns out that the 4 s is so low in energy that it actually surpasses the 3 d, because we know the 3 d is going to be pretty high in terms of the n equals 3 shell, and the 4 s is going to be the lowest in terms of the n equals 4 shell, and it turns out that we need to fill up the 4 s before we fill in the 3 d.
And we can do that just going along, 3 d 1, 2, 3, and the problem comes when we get to chromium here: instead of what we might expect to see, 4 s 2, 3 d 4, what we see is that instead it's 4 s 1 and 3 d 5. So this is the first exception to the Aufbau principle that you need to remember. The reason this is an exception is because it turns out that half filled d orbitals are more stable than we could even predict. You wouldn't be expected to be able to guess that this would happen, because using any kind of simple theory, we would, in fact, predict that this would not be the case, but what we find experimentally is that it's more stable to have a half filled d orbital than to have a 4 s 2, and a 3 d 4.
So you're going to need to remember this; it is an exception you have to memorize. Another exception in the fourth period is in copper here, we see that again, we have 4 s 1 instead of 4 s 2. This is 4 s 1, 3 d 10, we might expect 4 s 2, 3 d 9, but again, this exception comes out of experimental observation, which is the fact that full d orbitals also are lower in energy than we could theoretically predict using simple calculations.
So again, you need to memorize these two exceptions, and the exception in general is that filled d 10, or half-filled d 5 orbitals are lower in energy than would be expected, so we get this flip-flop where if we can get to that half filled orbital by only removing one s electron, then we're going to do it, and the same with the filled d orbital.
And actually, when we get to the fifth period of the periodic table, that again takes place, so when you can get to a half filled or a filled d orbital, again you want to do it. So those exceptions would be molybdenum (krypton, 5 s 1, 4 d 5) and silver (krypton, 5 s 1, 4 d 10), the corresponding elements in the fifth period, where you're going to see the same case where it's lower in energy to have the half filled or the filled d orbitals.
So here's the mnemonic I mentioned for writing the electron configuration and getting those orbital energies in the right order. All you do is just write out all the orbitals, the 1 s, then the 2 s and 2 p, then the 3 s, 3 p and 3 d, and so on, just write them in a line like this, and then if you draw diagonals down them, what you'll get is the correct order in terms of orbital energies. So if we go down the diagonal, we start with 1 s, then we get 2 s, then 2 p and 3 s, then 3 p and 4 s, and then that's how we see here that 4 s is actually lower in energy than 3 d, then 4 p, 5 s and so on. (Reading down the diagonals gives the familiar filling order: 1 s, 2 s, 2 p, 3 s, 3 p, 4 s, 3 d, 4 p, 5 s, 4 d, 5 p, 6 s, and so on.)
So if you want to, on an exam you can just write this down quickly at the beginning and refer to it as you're filling up your electron configurations; but also, if you look at the periodic table, it's very clear as you try to fill it up that way that the same order comes out of it. So, whichever works best for you is what you can do in terms of figuring out electron configurations.
So the last thing I want to mention today is how we can think about electron configurations for ions. It turns out that it's going to be a little bit different when we're talking about positive ions here. We need to change our rules just slightly. So what we know is that these 3 d orbitals are higher in energy than 4 s orbitals, so I've written the energy of the orbital here for potassium and for calcium. But what happens is that once a d orbital is filled, I said the two are very close in energy, and once a d orbital is filled, it actually drops to become lower in energy than the 4 s orbital. So once we move past, we fill the 4 s first, but once we fill in the d orbital, now that's going to be lower in energy.
So that doesn't make a difference for us when we're talking about neutral atoms, because we would fill up the 4 s first, because that's lower in energy until we fill it, and then we just keep going with the d orbitals. So, for example, if we needed to figure out the electron configuration for titanium, it would just be argon then 4 s 2, and then we would fill in the 3 d 2.
So, actually we don't have to worry about this fact any time we're dealing with neutrals. The problem comes when instead we're dealing with ions. So what I want to point out is what we said now is that the 3 d 2 is actually lower in energy, so if we were to rewrite this in terms of what the actual energy order is, we should instead write it 3 d 2, 4 s 2.
So you might ask, in terms of when you're writing electron configurations, which way should you write it. And we'll absolutely accept both answers for a neutral atom. They're both correct. In one case you decided to order in terms of energy, and in one case you decided to order in terms of how it fills up. I don't care how you do it on exams or on problem sets, but you do need to be aware that the 3 d, once filled, is lower in energy than the 4 s, and the reason you need to be aware of that is if you're asked for the electron configuration now of the titanium ion.
So, let's say we're asked for the plus two ion. A plus two ion means that we're removing two electrons from the atom, and the electrons that we're going to remove are always going to be the highest energy electrons. So it's good to write it like this, because this illustrates the fact that the 4 s electrons are the ones that are higher in energy. So the correct answer for titanium plus two is going to be argon 3 d 2, whereas if we did not rearrange our order here we might have been tempted to write it as 4 s 2, so keep that in mind when you're doing the positive ions of corresponding atoms.
All right, so we'll pick up with photoelectron spectroscopy on Wednesday. Have a great weekend.
Bijection, injection and surjection
In mathematics, injections, surjections and bijections are classes of functions distinguished by the manner in which arguments (input expressions from the domain) and images (output expressions from the codomain) are related or mapped to each other.
- A function f : A → B is injective (one-to-one) if f(a) = f(a′) implies a = a′ or, equivalently, if a ≠ a′ implies f(a) ≠ f(a′). One could also say that every element of the codomain (sometimes called range) is mapped to by at most one element (argument) of the domain; not every element of the codomain, however, need have an argument mapped to it. An injective function is an injection.
- A function is surjective (onto) if every element of the codomain is mapped to by some element (argument) of the domain; some images may be mapped to by more than one argument. (Equivalently, a function where the range is equal to the codomain.) A surjective function is a surjection.
- A function is bijective (one-to-one and onto) if and only if (iff) it is both injective and surjective. (Equivalently, every element of the codomain is mapped to by exactly one element of the domain.) A bijective function is a bijection (one-to-one correspondence).
(Note: a one-to-one function is injective, but may fail to be surjective, while a one-to-one correspondence is both injective and surjective.)
An injective function need not be surjective (not all elements of the codomain may be associated with arguments), and a surjective function need not be injective (some images may be associated with more than one argument). The four possible combinations of injective and surjective features are illustrated in the following diagrams.
A function is injective (one-to-one) if every possible element of the codomain is mapped to by at most one argument. Equivalently, a function is injective if it maps distinct arguments to distinct images. An injective function is an injection. The formal definition is the following.
- The function f : A → B is injective iff for all a, a′ ∈ A, we have f(a) = f(a′) ⇒ a = a′.
- A function f : A → B is injective if and only if A is empty or f is left-invertible, that is, there is a function g: B → A such that g o f = identity function on A.
- Since every function is surjective when its codomain is restricted to its range, every injection induces a bijection onto its range. More precisely, every injection f : A → B can be factored as a bijection followed by an inclusion as follows. Let fR : A → f(A) be f with codomain restricted to its image, and let i : f(A) → B be the inclusion map from f(A) into B. Then f = i o fR. A dual factorisation is given for surjections below.
- The composition of two injections is again an injection, but if g o f is injective, then it can only be concluded that f is injective. See the figure at right.
- Every embedding is injective.
A function is surjective (onto) if every possible image is mapped to by at least one argument. In other words, every element in the codomain has non-empty preimage. Equivalently, a function is surjective if its range is equal to its codomain. A surjective function is a surjection. The formal definition is the following.
- The function f : A → B is surjective iff for all b ∈ B, there is a ∈ A such that f(a) = b.
- A function f : A → B is surjective if and only if it is right-invertible, that is, if and only if there is a function g: B → A such that f o g = identity function on B. (This statement is equivalent to the axiom of choice.)
- By collapsing all arguments mapping to a given fixed image, every surjection induces a bijection defined on a quotient of its domain. More precisely, every surjection f : A → B can be factored as a projection followed by a bijection as follows. Let A/~ be the equivalence classes of A under the following equivalence relation: x ~ y if and only if f(x) = f(y). Equivalently, A/~ is the set of all preimages under f. Let P(~) : A → A/~ be the projection map which sends each x in A to its equivalence class [x]~, and let fP : A/~ → B be the well-defined function given by fP([x]~) = f(x). Then f = fP o P(~). A dual factorisation is given for injections above.
- The composition of two surjections is again a surjection, but if g o f is surjective, then it can only be concluded that g is surjective.
A function is bijective if it is both injective and surjective. A bijective function is a bijection (one-to-one correspondence). A function is bijective if and only if every possible image is mapped to by exactly one argument. This equivalent condition is formally expressed as follows.
- The function f : A → B is bijective iff for all b in B, there is a unique a in A such that f(a) = b.
- A function f : A → B is bijective if and only if it is invertible, that is, there is a function g: B → A such that g o f = identity function on A and f o g = identity function on B. This function maps each image to its unique preimage.
- The composition of two bijections is again a bijection, but if g o f is a bijection, then it can only be concluded that f is injective and g is surjective (see the remarks above regarding injections and surjections).
- The bijections from a set to itself form a group under composition, called the symmetric group.
Suppose you want to define what it means for two sets to "have the same number of elements". One way to do this is to say that two sets "have the same number of elements" if and only if all the elements of one set can be paired with the elements of the other, in such a way that each element is paired with exactly one element. Accordingly, we can define two sets to "have the same number of elements" if there is a bijection between them. We say that the two sets have the same cardinality.
It is important to specify the domain and codomain of each function, since by changing these, functions that we think of as the same may differ in whether they are injective, surjective, or bijective.
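For finite sets these notions can be checked mechanically. The following minimal Python sketch (an illustration added here; the function names are not standard library ones) tests injectivity and surjectivity of an explicitly given function, and shows how the choice of codomain alone decides surjectivity:

    def is_injective(f, domain):
        # Distinct arguments must have distinct images.
        images = [f(x) for x in domain]
        return len(images) == len(set(images))

    def is_surjective(f, domain, codomain):
        # Every element of the codomain must be the image of some argument.
        return {f(x) for x in domain} == set(codomain)

    def is_bijective(f, domain, codomain):
        return is_injective(f, domain) and is_surjective(f, domain, codomain)

    square = lambda x: x * x
    domain = [-2, -1, 0, 1, 2]

    print(is_injective(square, domain))              # False: -2 and 2 share the image 4
    print(is_surjective(square, domain, range(5)))   # False: 2 and 3 are never attained
    print(is_surjective(square, domain, {0, 1, 4}))  # True: codomain restricted to the image
    print(is_bijective(abs, [0, 1, 2], {0, 1, 2}))   # True on these particular sets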
Injective and surjective (bijective)
- For every set A, the identity function idA, and thus specifically the identity function on the real numbers, f(x) = x.
- A linear function such as f(x) = 2x + 1 on the real numbers, and thus also its inverse, g(x) = (x − 1)/2.
- The exponential function exp(x) = e^x, regarded as a function from the real numbers to the positive real numbers, and thus also its inverse, the natural logarithm.
Injective and non-surjective
- The exponential function exp(x) = e^x, regarded as a function from the real numbers to all the real numbers: no argument is mapped to zero or to a negative number.
Non-injective and surjective
- The sine function f(x) = sin x, regarded as a function from the real numbers onto the interval [−1, 1]: every value in [−1, 1] is attained, but each by more than one argument.
Non-injective and non-surjective
- The sine function f(x) = sin x, regarded as a function from the real numbers to the real numbers: values outside [−1, 1] are never attained, and every value that is attained has infinitely many preimages.
Properties
- For every function f, subset A of the domain and subset B of the codomain we have A ⊂ f⁻¹(f(A)) and f(f⁻¹(B)) ⊂ B. If f is injective we have A = f⁻¹(f(A)), and if f is surjective we have f(f⁻¹(B)) = B.
- For every function h : A → C we can define a surjection H : A → h(A) : a ↦ h(a) and an injection I : h(A) → C : a ↦ a. It follows that h = I o H. This decomposition is unique up to isomorphism.
This terminology was originally coined by the Bourbaki group.
| http://www.exampleproblems.com/wiki/index.php/Bijection,_injection_and_surjection | 13
115 | In mathematics, a ratio is a relationship between two numbers of the same kind (e.g., objects, persons, students, spoonfuls, units of whatever identical dimension), usually expressed as "a to b" or a:b, sometimes expressed arithmetically as a dimensionless quotient of the two that explicitly indicates how many times the first number contains the second (not necessarily an integer).
In layman's terms a ratio represents, for every amount of one thing, how much there is of another thing. For example, supposing one has 8 oranges and 6 lemons in a bowl of fruit, the ratio of oranges to lemons would be 4:3 (which is equivalent to 8:6) while the ratio of lemons to oranges would be 3:4. Additionally, the ratio of oranges to the total amount of fruit is 4:7 (equivalent to 8:14). The 4:7 ratio can be further converted to a fraction of 4/7 to represent how much of the fruit is oranges.
Notation and terminology
The ratio of numbers A and B can be expressed as:
- the ratio of A to B
- A is to B
- A:B
- a fraction (rational number) that is the quotient of A divided by B
The proportion expressing the equality of the ratios A:B and C:D is written A:B=C:D or A:B::C:D. This latter form, when spoken or written in the English language, is often expressed as
- A is to B as C is to D.
A, B, C and D are called the terms of the proportion. A and D are called the extremes, and B and C are called the means. The equality of three or more ratios is called a continued proportion.
Ratios are sometimes used with three or more terms. The dimensions of a two by four that is ten inches long are 2:4:10.
History and etymology
It is impossible to trace the origin of the concept of ratio, because the ideas from which it developed would have been familiar to preliterate cultures. For example, the idea of one village being twice as large as another is so basic that it would have been understood in prehistoric society. However, it is possible to trace the origin of the word "ratio" to the Ancient Greek λόγος (logos). Early translators rendered this into Latin as ratio ("reason"; as in the word "rational"). (A rational number may be expressed as the quotient of two integers.) A more modern interpretation of Euclid's meaning is more akin to computation or reckoning. Medieval writers used the word proportio ("proportion") to indicate ratio and proportionalitas ("proportionality") for the equality of ratios.
Euclid collected the results appearing in the Elements from earlier sources. The Pythagoreans developed a theory of ratio and proportion as applied to numbers. The Pythagoreans' conception of number included only what would today be called rational numbers, casting doubt on the validity of the theory in geometry where, as the Pythagoreans also discovered, incommensurable ratios (corresponding to irrational numbers) exist. The discovery of a theory of ratios that does not assume commensurability is probably due to Eudoxus. The exposition of the theory of proportions that appears in Book VII of The Elements reflects the earlier theory of ratios of commensurables.
The existence of multiple theories seems unnecessarily complex to modern sensibility since ratios are, to a large extent, identified with quotients. This is a comparatively recent development, however, as can be seen from the fact that modern geometry textbooks still use distinct terminology and notation for ratios and quotients. The reasons for this are twofold. First, there was the previously mentioned reluctance to accept irrational numbers as true numbers. Second, the lack of a widely used symbolism to replace the already established terminology of ratios delayed the full acceptance of fractions as an alternative until the 16th century.
Euclid's definitions
Book V of Euclid's Elements has 18 definitions, all of which relate to ratios. In addition, Euclid uses ideas that were in such common usage that he did not include definitions for them. The first two definitions say that a part of a quantity is another quantity that "measures" it and conversely, a multiple of a quantity is another quantity that it measures. In modern terminology, this means that a multiple of a quantity is that quantity multiplied by an integer greater than one—and a part of a quantity (meaning aliquot part) is a part that, when multiplied by an integer greater than one, gives the quantity.
Euclid does not define the term "measure" as used here. However, one may infer that if a quantity is taken as a unit of measurement, and a second quantity is given as an integral number of these units, then the first quantity measures the second. Note that these definitions are repeated, nearly word for word, as definitions 3 and 5 in book VII.
Definition 3 describes what a ratio is in a general way. It is not rigorous in a mathematical sense and some have ascribed it to Euclid's editors rather than Euclid himself. Euclid defines a ratio as between two quantities of the same type, so by this definition the ratios of two lengths or of two areas are defined, but not the ratio of a length and an area. Definition 4 makes this more rigorous. It states that a ratio of two quantities exists when there is a multiple of each that exceeds the other. In modern notation, a ratio exists between quantities p and q if there exist integers m and n so that mp>q and nq>p. This condition is known as the Archimedean property.
Definition 5 is the most complex and difficult. It defines what it means for two ratios to be equal. Today, this can be done by simply stating that ratios are equal when the quotients of the terms are equal, but Euclid did not accept the existence of the quotients of incommensurables, so such a definition would have been meaningless to him. Thus, a more subtle definition is needed, one in which the quantities involved are not measured directly against one another. Though it may not be possible to assign a rational value to a ratio, it is possible to compare a ratio with a rational number. Specifically, given two quantities, p and q, and a rational number m/n, we can say that the ratio of p to q is less than, equal to, or greater than m/n when np is less than, equal to, or greater than mq respectively. Euclid's definition of equality can be stated as that two ratios are equal when they behave identically with respect to being less than, equal to, or greater than any rational number. In modern notation this says that given quantities p, q, r and s, then p:q::r:s if for any positive integers m and n, np<mq, np=mq, np>mq according as nr<ms, nr=ms, nr>ms respectively. There is a remarkable similarity between this definition and the theory of Dedekind cuts used in the modern definition of irrational numbers.
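For quantities that happen to be commensurable, so that they can be represented here by whole numbers, Euclid's criterion can be tested directly. A small Python sketch (purely illustrative; the bound placed on m and n is arbitrary) compares np with mq and nr with ms for many pairs m, n:

    from itertools import product

    def same_comparison(x1, y1, x2, y2):
        # The comparisons x1 vs y1 and x2 vs y2 must come out the same way:
        # both <, both =, or both >.
        return (x1 < y1) == (x2 < y2) and (x1 == y1) == (x2 == y2)

    def ratios_equal(p, q, r, s, limit=50):
        # Euclid, Definition 5, tested for all multipliers m, n up to a bound:
        # p:q = r:s when n*p compares to m*q the same way n*r compares to m*s.
        return all(same_comparison(n * p, m * q, n * r, m * s)
                   for m, n in product(range(1, limit + 1), repeat=2))

    print(ratios_equal(4, 6, 2, 3))   # True:  4:6 and 2:3 behave identically
    print(ratios_equal(4, 6, 3, 4))   # False: n=3, m=2 gives 12 = 12 but 9 > 8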
Definition 6 says that quantities that have the same ratio are proportional or in proportion. Euclid uses the Greek ἀναλόγον (analogon); this has the same root as λόγος and is related to the English word "analog".
Definition 7 defines what it means for one ratio to be less than or greater than another and is based on the ideas present in definition 5. In modern notation it says that given quantities p, q, r and s, then p:q>r:s if there are positive integers m and n so that np>mq and nr≤ms.
As with definition 3, definition 8 is regarded by some as being a later insertion by Euclid's editors. It defines three terms p, q and r to be in proportion when p:q::q:r. This is extended to 4 terms p, q, r and s as p:q::q:r::r:s, and so on. Sequences that have the property that the ratios of consecutive terms are equal are called Geometric progressions. Definitions 9 and 10 apply this, saying that if p, q and r are in proportion then p:r is the duplicate ratio of p:q and if p, q, r and s are in proportion then p:s is the triplicate ratio of p:q. If p, q and r are in proportion then q is called a mean proportional to (or the geometric mean of) p and r. Similarly, if p, q, r and s are in proportion then q and r are called two mean proportionals to p and s.
The quantities being compared in a ratio might be physical quantities such as speed or length, or numbers of objects, or amounts of particular substances. A common example of the last case is the weight ratio of water to cement used in concrete, which is commonly stated as 1:4. This means that the weight of cement used is four times the weight of water used. It does not say anything about the total amounts of cement and water used, nor the amount of concrete being made. Equivalently it could be said that the ratio of cement to water is 4:1, that there is 4 times as much cement as water, or that there is a quarter (1/4) as much water as cement.
If there are 2 oranges and 3 apples, the ratio of oranges to apples is 2:3, and the ratio of oranges to the total number of pieces of fruit is 2:5. These ratios can also be expressed in fraction form: there are 2/3 as many oranges as apples, and 2/5 of the pieces of fruit are oranges. If orange juice concentrate is to be diluted with water in the ratio 1:4, then one part of concentrate is mixed with four parts of water, giving five parts total; the amount of orange juice concentrate is 1/4 the amount of water, while the amount of orange juice concentrate is 1/5 of the total liquid. In both ratios and fractions, it is important to be clear what is being compared to what, and beginners often make mistakes for this reason.
Number of terms
In general, when comparing the quantities of a two-quantity ratio, this can be expressed as a fraction derived from the ratio. For example, in a ratio of 2:3, the amount/size/volume/number of the first quantity is 2/3 that of the second quantity. This pattern also works with ratios with more than two terms. However, a ratio with more than two terms cannot be completely converted into a single fraction; a single fraction represents only one part of the ratio since a fraction can only compare two numbers. If the ratio deals with objects or amounts of objects, this is often expressed as "for every two parts of the first quantity there are three parts of the second quantity".
Percentage ratio
If we multiply all quantities involved in a ratio by the same number, the ratio remains valid. For example, a ratio of 3:2 is the same as 12:8. It is usual either to reduce the terms to the smallest whole numbers possible (lowest terms), or to express them in parts per hundred (percent).
If a mixture contains substances A, B, C & D in the ratio 5:9:4:2 then there are 5 parts of A for every 9 parts of B, 4 parts of C and 2 parts of D. As 5+9+4+2=20, the total mixture contains 5/20 of A (5 parts out of 20), 9/20 of B, 4/20 of C, and 2/20 of D. If we divide all numbers by the total and multiply by 100, this is converted to percentages: 25% A, 45% B, 20% C, and 10% D (equivalent to writing the ratio as 25:45:20:10).
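The arithmetic of the example above is easily automated. A one-function Python sketch (illustrative only):

    def ratio_to_percentages(parts):
        # Divide each term by the total of all terms and scale to parts per hundred.
        total = sum(parts)
        return [100 * p / total for p in parts]

    print(ratio_to_percentages([5, 9, 4, 2]))   # [25.0, 45.0, 20.0, 10.0]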
If the two or more ratio quantities encompass all of the quantities in a particular situation, for example two apples and three oranges in a fruit basket containing no other types of fruit, it could be said that "the whole" contains five parts, made up of two parts apples and three parts oranges. In this case, 2/5, or 40%, of the whole are apples and 3/5, or 60%, of the whole are oranges. This comparison of a specific quantity to "the whole" is sometimes called a proportion. Proportions are sometimes expressed as percentages as demonstrated above.
Note that ratios can be reduced (as fractions are) by dividing each quantity by the common factors of all the quantities. This is often called "cancelling." As for fractions, the simplest form is considered that in which the numbers in the ratio are the smallest possible integers.
Thus, the ratio 40:60 may be considered equivalent in meaning to the ratio 2:3 within contexts concerned only with relative quantities.
Mathematically, we write: "40:60" = "2:3" (dividing both quantities by 20).
- Grammatically, we would say, "40 to 60 equals 2 to 3."
An alternative representation is: "40:60::2:3"
- Grammatically, we would say, "40 is to 60 as 2 is to 3."
A ratio that has integers for both quantities and that cannot be reduced any further (using integers) is said to be in simplest form or lowest terms.
Sometimes it is useful to write a ratio in the form 1:n or n:1 to enable comparisons of different ratios.
For example, the ratio 4:5 can be written as 1:1.25 (dividing both sides by 4)
Alternatively, 4:5 can be written as 0.8:1 (dividing both sides by 5)
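Both simplifications, cancelling common factors and rescaling so that one side equals 1, are mechanical. A short Python sketch (the function names are chosen for this example only):

    from math import gcd
    from functools import reduce

    def simplest_form(*terms):
        # Divide every term by the greatest common divisor of all the terms.
        g = reduce(gcd, terms)
        return tuple(t // g for t in terms)

    def unit_form(a, b):
        # Express a:b as 1:n by dividing both sides by a.
        return 1, b / a

    print(simplest_form(40, 60))      # (2, 3)
    print(simplest_form(5, 9, 4, 2))  # (5, 9, 4, 2), already in lowest terms
    print(unit_form(4, 5))            # (1, 1.25)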
Dilution ratio
Ratios are often used for simple dilutions applied in chemistry and biology. A simple dilution is one in which a unit volume of a liquid material of interest is combined with an appropriate volume of a solvent liquid to achieve the desired concentration. The dilution factor is the total number of unit volumes in which your material is dissolved. The diluted material must then be thoroughly mixed to achieve the true dilution. For example, a 1:5 dilution (verbalize as "1 to 5" dilution) entails combining 1 unit volume of solute (the material to be diluted) + 4 unit volumes (approximately) of the solvent to give 5 units of the total volume. (Some solutions and mixtures take up slightly less volume than their components.)
The dilution factor is frequently expressed using exponents: 1:5 would be 5e−1 (5⁻¹, i.e. one-fifth:one); 1:100 would be 10e−2 (10⁻², i.e. one-hundredth:one), and so on.
There is often confusion between dilution ratio (1:n meaning 1 part solute to n parts solvent) and dilution factor (1:n+1) where the second number (n+1) represents the total volume of solute + solvent. In scientific and serial dilutions, the given ratio (or factor) often means the ratio to the final volume, not to just the solvent. The factors then can easily be multiplied to give an overall dilution factor.
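To keep the two conventions apart, here is a tiny Python sketch (the function name and return format are just for this example):

    def dilution_descriptions(solute_parts, solvent_parts):
        # The dilution ratio compares solute to solvent; the dilution
        # factor compares solute to the total (solute + solvent) volume.
        total = solute_parts + solvent_parts
        ratio = f"{solute_parts}:{solvent_parts}"
        factor = f"{solute_parts}:{total}"
        return ratio, factor

    # 1 part of concentrate mixed with 4 parts of water:
    print(dilution_descriptions(1, 4))   # ('1:4', '1:5') -- ratio 1:4, factor 1:5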
In other areas of science such as pharmacy, and in non-scientific usage, a dilution is normally given as a plain ratio of solvent to solute.
Odds
Odds (as in gambling) are expressed as a ratio. For example, odds of "7 to 3 against" (7:3) mean that there are seven chances that the event will not happen for every three chances that it will happen. In other words, the probability of success is 30%: in every ten trials, three wins and seven losses are expected.
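A quick check of the arithmetic (a sketch assuming the usual "a to b against" reading):

    def probability_from_odds_against(a, b):
        # "a to b against" means a expected losses for every b wins,
        # so the chance of winning is b out of every (a + b) trials.
        return b / (a + b)

    print(probability_from_odds_against(7, 3))   # 0.3, i.e. 30%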
Different units
For example, the ratio 1 minute : 40 seconds can be reduced by changing the first value to 60 seconds. Once the units are the same, they can be omitted, and the ratio can be reduced to 3:2.
In chemistry, mass concentration "ratios" are usually expressed as w/v percentages, and are really proportions.
For example, a concentration of 3% w/v usually means 3g of substance in every 100mL of solution. This cannot easily be converted to a pure ratio because of density considerations, and the second figure is the total amount, not the volume of solvent.
See also
- Aspect ratio
- Fraction (mathematics)
- Golden ratio
- Interval (music)
- Parts-per notation
- Price/performance ratio
- Proportionality (mathematics)
- Ratio estimator
- Rule of three (mathematics)
Further reading
- "Ratio" The Penny Cyclopædia vol. 19, The Society for the Diffusion of Useful Knowledge (1841) Charles Knight and Co., London pp. 307ff
- "Proportion" New International Encyclopedia, Vol. 19 2nd ed. (1916) Dodd Mead & Co. pp270-271
- "Ratio and Proportion" Fundamentals of practical mathematics, George Wentworth, David Eugene Smith, Herbert Druery Harper (1922) Ginn and Co. pp. 55ff
- The thirteen books of Euclid's Elements, vol 2. trans. Sir Thomas Little Heath (1908). Cambridge Univ. Press. pp. 112ff.
- D.E. Smith, History of Mathematics, vol 2 Dover (1958) pp. 477ff | http://en.wikipedia.org/wiki/Ratio | 13 |
60 | Basics of Math and Physics
These subjects are usually first studied by undergraduate students of science and engineering. However, a basic understanding of them is required if you want a deep understanding of any computational physical simulation.
The text below is not intended to be a reference on these subjects. Its main purpose is just to provide the user of Blender some fundamental insight on what is actually happening inside the physical simulations of Blender. If you wish to learn more about any of the topics discussed, there are some useful links at the bottom of this page.
The formal description of what vectors are and how to use them (mathematically) is the core of Linear Algebra. However, the definition of vectors in Linear Algebra is somewhat obscure and not intended for direct geometrical, practical use, since there vectors are defined as abstract objects.
In this document we will use the analytical geometry approach, which defines vectors as "arrows" in 3D space. Examples of physical vectors are velocity, force, and acceleration. Physical quantities that are not vectorial are said to be scalar. Examples of scalar quantities are energy, mass, and time.
Some properties that summarize vectors:
- One vector may be broken down into its components on the main axes.
- Two vectors are equal if, and only if, all their components are equal.
- The sum/difference of two vectors is equal to the sum/difference of their components.
- Multiplication/division of vectors is not the same as with "common" numbers.
There are some types of operation called products, but none of them is simply the product of the components, as you might think. These will not be discussed here.
Particles are the most basic type of body you can create in Physics. They have no size, and therefore cannot be rotated. The law that governs the movement of particles is Newton's Second Law: f=ma (force equals mass times acceleration). This is not the complete form of Newton's law, but the simplified form is sufficient for calculating particle motion. The vector sum of all the forces acting on the particle equals the mass times the acceleration, and has the same direction and orientation as the acceleration. The branch of Physics that is dedicated to this subject is called Classical Mechanics.
Using vector notation, it is simple to solve particle motion equations using a computer even if there are multiple forces of different types. Examples of these forces include Blender "force fields" and damping, a force that is proportional to speed (the higher the speed, the higher the force). As particle motion is so simple to solve, Blender is able to calculate the positions of thousands of particles almost in real time when animating.
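To make this concrete, here is a minimal sketch of one way such a particle update could be computed, using a simple Euler-style step with gravity and a damping force proportional to speed; the constants and the time step are illustrative and are not Blender's actual values:

    def step_particle(position, velocity, mass, dt,
                      gravity=(0.0, 0.0, -9.81), damping=0.1):
        # Vector sum of the forces acting on the particle, component by component.
        force = [mass * g - damping * v for g, v in zip(gravity, velocity)]
        # Newton's second law: acceleration = force / mass.
        acceleration = [f / mass for f in force]
        # Advance velocity, then position, by one small time step.
        new_velocity = [v + a * dt for v, a in zip(velocity, acceleration)]
        new_position = [p + v * dt for p, v in zip(position, new_velocity)]
        return new_position, new_velocity

    pos, vel = (0.0, 0.0, 10.0), (2.0, 0.0, 0.0)
    for _ in range(5):
        pos, vel = step_particle(pos, vel, mass=1.0, dt=0.1)
    print(pos, vel)   # state after half a second of simulated time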
Rigid Body simulation
Rigid body means the body is not deformable, i.e. it cannot stretch, shrink, etc. The main difference from particle simulation is that now our objects are allowed to rotate, and have a size and a volume.
The equation that governs rotational motion is Τ = I α. Torque equals moment of inertia times angular acceleration. Now we need a definition of each of these words:
- A force may cause a torque. The torque τ it causes is the vector cross product τ = r × F of the position vector r (from the axis you are evaluating to the point where the force is applied) with the force F; its magnitude equals the component of the force perpendicular to the axis times the distance from the point of application of the force to that axis. Also, torques are vectors. A very useful special case, often given as the definition of torque in fields other than physics, is the following: torque = (moment arm) × (force).
The construction of the "moment arm" is shown in the figure below, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque arising from a perpendicular force:
For instance, it is much easier to close a door by pushing it at the handle than by pushing it in the middle of the door, because, when you push at the handle, you increase the distance from the applied force to the axis of rotation (a small numerical sketch of this follows this list).
- Moment of inertia is a measure of how difficult it is to rotate the body. It is proportional to the mass and depends on the geometry of the object. Given a fixed volume and uniform density, a sphere possesses the smallest moment of inertia possible, averaged over all axes.
- Angular acceleration is a measure of the acceleration of the rotational movement, i.e. how quickly the angular velocity changes.
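The door example above can be checked numerically. A small Python sketch (the numbers are made up) computing torque as the cross product r × F for a push at the handle and a push at the middle of the door:

    def cross(r, f):
        # Vector cross product: torque = r x F.
        return (r[1] * f[2] - r[2] * f[1],
                r[2] * f[0] - r[0] * f[2],
                r[0] * f[1] - r[1] * f[0])

    def magnitude(v):
        return sum(c * c for c in v) ** 0.5

    push = (0.0, 10.0, 0.0)        # a 10 N push, perpendicular to the door
    at_handle = (0.8, 0.0, 0.0)    # applied 0.8 m from the hinge axis
    at_middle = (0.4, 0.0, 0.0)    # applied 0.4 m from the hinge axis

    print(magnitude(cross(at_handle, push)))   # 8.0 N*m
    print(magnitude(cross(at_middle, push)))   # 4.0 N*m, half the torque for the same force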
The programs devoted to delivering fast and accurate simulation of rigid body dynamics are often called physics engines, or game engines. Blender itself has a game engine, which uses the Bullet physics library. All simulations in the Bullet engine, if well designed, run in real time, except for those with a very high number of objects present.
In the near future, it is expected that computers used for gaming will have a Physics Processor Unit (PPU) dedicated to these calculations as a card, like what happened when graphics processing was moved from the CPU to video cards (Graphics Processing Units or GPUs).
Physics of deformable bodies
The method most computers use to simulate deformable 2D or 3D bodies is to subdivide (automatically or manually) a body into cells, and then, assuming any properties inside a single cell to be an interpolation of the properties at its corners, to solve the equations at the boundaries of the cells. If you use a large number of cells (in other words, a highly subdivided mesh), you can get very realistic results.
A common application of this method, plus some more hypotheses, leads us into a wide field of engineering which nowadays is called Finite Element Analysis.
In Blender, the simulation of deformable bodies is somewhat similar to this, but the equations are simplified for speed. If we enable an object as a soft body, Blender assumes that all faces are cells and all vertices are masses, with the edges acting as springs. For example, a 3x3 grid of vertices (a mesh plane in Blender) becomes a set of Soft Body cells, with a mass at each vertex and a spring along each edge.
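The basic ingredient of such a mass-spring model is the force an edge exerts on the vertices it connects. A minimal sketch using Hooke's law (the stiffness value is illustrative, not a Blender soft-body default):

    def spring_force_on_a(p_a, p_b, rest_length, stiffness=50.0):
        # Force on vertex A from the edge-spring connecting A and B;
        # an equal and opposite force acts on vertex B.
        delta = [b - a for a, b in zip(p_a, p_b)]
        length = sum(d * d for d in delta) ** 0.5
        stretch = length - rest_length          # positive when the edge is stretched
        direction = [d / length for d in delta]
        return [stiffness * stretch * d for d in direction]

    # An edge of rest length 1.0 currently stretched to length 1.5:
    print(spring_force_on_a((0.0, 0.0, 0.0), (1.5, 0.0, 0.0), rest_length=1.0))
    # [25.0, 0.0, 0.0] -- vertex A is pulled toward B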
Fluid simulation
There is a well-established theory that provides us with a set of equations, called the Navier-Stokes equations, which describe how a fluid will behave in most situations, turbulent or not. However, these equations cannot be solved by hand and, to this day, no exact solution has been found for them in their complete, general form; only some very special cases have been solved, and none of them include turbulence.
Currently, the usual method of solving these equations consists of iterative numerical solving, which gets an answer as close to the real one as we want. This is called DNS (Direct Numerical Simulation). However, DNS requires enormous computational power, and even today (mid-2006) only supercomputers or very large clusters of computers can use DNS with some success. If correctly applied, however, DNS returns the best and most trusted results of all methods, from the smallest distances at which the results are meaningful to the simulation up to the largest distances, the scale of the objects we are simulating.
So, we need to approximate our model further in order to do less calculation. Instead of considering the fluid a continuum, we discretize it, that is, we divide it into cells. Inside a cell, properties like velocity, pressure and density are all considered to be the same, so we only have to solve equations on its borders. We also discretize time, i.e., only some instants are calculated. However, we need another equation that deals with this discrete problem. This equation is called the Lattice-Boltzmann equation, and it is consistent with the Navier-Stokes equations.
There is also one more optimization done in Blender: the use of adaptive grids. In a region far from interfaces, instead of using a tiny cell, we use a larger cell. This greatly decreases calculation time (up to 4 times faster) without loss of quality. This optimization is responsible for finding places where a larger cell can be used without disturbing the results, and for deciding when to start using smaller cells in those places.
Boundaries: the boundaries (domain, obstacles) count as cells in the method.
- No slip: The fluid cells near the surface of the boundaries are not allowed to move at all, having zero velocity.
- Free slip: The fluid cells are allowed to move freely. If the calculation indicates that a fluid cell would move into the boundary cell, then its velocity vector is reversed.
- Classical Mechanics
Rigid Body simulation
Finite element analysis
- Finite Element Analysis
- Finite Element Analysis example | http://wiki.blender.org/index.php/Doc:2.4/Tutorials/Physics/BSoD/Math | 13 |
119 | MSP:MiddleSchoolPortal/Ratios For All Occasions
Ratios for All Occasions - Introduction
In Ratios for All Occasions, we feature resources on the concept of the ratio as encountered in middle school: as rates in real-world problems, percents in relation to fractions, scale factors in building models, and comparisons of lengths in geometry. Most of these digital resources are activities that can serve as supplementary or motivational material.
A central theme in the middle school mathematics curriculum, proportional reasoning is based on making sense of ratios in a variety of contexts. The resources chosen for this unit provide practice in solving problems, often informally, in the format of games, hands-on modeling, mapmaking, and questions selected for their interest for students. As students work through the activities, they will exercise reasoning about basic proportions as well as further develop their knowledge of the relationship between fractions and percents.
The section titled Background Resources for Teachers contains links to workshop sessions, developed for teachers, on the mathematical content of the unit. Ratios in Children's Books identifies three picture books that entertain while they explore scale and proportion. In the final section, we look at the coverage of proportionality at the middle level in the NCTM Principles and Standards for School Mathematics.
Background Resources for Teachers
Ratios, whether simple comparisons or rates or percents or scale factors, are old friends of the middle school teacher. Every year you deal with them in your classroom. However, you may like to explore a particular topic, such as the golden mean or indirect measurement. These online workshop sessions, created for teachers, make use of applets and video to enable deeper investigation of a topic. You may find yourself fascinated enough with a topic to import the workshop idea directly into your classroom.
Rational Numbers and Proportional Reasoning How do ratios relate to our usual idea of fractions? In this session, part of a free online course for K-8 teachers, you can look at ways to interpret, model and work with rational numbers and to explore the basics of proportional reasoning. You can investigate these ideas through interactive applets, problem sets, and a video of teachers solving one of the problems. This session is part of the online course Learning Math: Number and Operations.
Fractions, Percents, and Ratios In this set of lessons created for K-8 teachers, you can examine graphical and geometric representations of these topics, as well as some of their applications in the physical world. A review of percents in terms of ratio and proportion is followed by an investigation of Fibonacci numbers and the golden mean. Why do we study the golden rectangle? In a video segment, an architect explains the place of the golden rectangle as an architectural element throughout history. This set of lessons is from Learning Math: Number and Operations.
Similarity Explore scale drawing, similar triangles, and trigonometry in terms of ratios and proportion in this series of lessons developed for teachers. Besides explanations and real-world problems, the unit includes video segments that show teachers investigating problems of similarity. To understand the ratios that underlie trigonometry, you can use an interactive activity provided online. This session is part of the course Learning Math: Geometry.
Indirect Measurement and Trigonometry For practical experience in the use of trigonometry, look at these examples of measuring impossible distances and inaccessible heights. These lessons show proportional reasoning in action! This unit is part of the online course Learning Math: Measurement.
Ratios as Fractions and Rates
It is at the middle school level that students move from understanding fractions to working with ratios and setting up proportions. You will find here problem-solving activities that you can use to introduce the concept of ratio as a rate that can be expressed as a fraction—miles per hour, drops per minute, for example. And you will find real-world problems that can be set up as proportions. Each activity was selected with student appeal in mind.
All About Ratios Designed to introduce the concept of ratio at the most basic level, this activity could open the idea to younger middle school students. Each multiple-choice problem shows sets of colorful elements and asks students to choose the one that matches the given ratio. The activity is from the collection titled Mathematics Lessons that are Fun! Fun! Fun!
Which Tastes Juicier? Students are challenged to decide which of four cans of grape juice concentrate requiring different amounts of water would have the strongest grape juice taste. A hint suggests forming ratios that are fractions to compare quantities. Two solutions are given, each fully illustrated with tables. Students are then offered further mixture-related questions.
Tern Turn: Are We There Yet? If you know an arctic tern's rate of flight and hours per day in flight, can you calculate how many days would be required to fly the 18,000-mile roundtrip from the Arctic Circle to Antarctica? A hint suggests that students first calculate how many miles the tern flies in one day. Similar questions follow, offering more opportunities to practice distance-rate-time problems.
Drip Drops: How Much Water Do You Waste? A leaky faucet is dripping at the rate of one drop every two seconds. Students are asked to decide if the water lost in one week would fill a drinking glass, a sink, or a bathtub. The only hint is that a teaspoon holds about 20 drops. The full solution demonstrates how to convert the drops to gallons using an equation or a table. Students then consider, "How much water is lost in one year by a single leaky faucet? By two million leaky faucets?"
How Far Can You Go on a Tank of Gas? Which car will go the farthest on a single tank of gas? Students are given the mileage and gasoline tank capacity of three models of automobiles and are encouraged to begin the problem by calculating how far each car could go in the city and on the highway. In follow-up problems, students compare the fuel efficiency of different sports cars and calculate how often a commuter would need to refuel.
Capture Recapture: How Many Fish in a Pond? A real application of the ideas of proportion! To estimate the number of fish in a pond, scientists tag a number of them and return them to the pond. The next day, they catch fish from the pond and count the number of tagged fish recaptured. From this, they can set up a proportion to make their estimation. Hints on getting started are given, if needed, and the solution explains the setup of the proportion.
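The estimate behind the activity is one proportion: tagged fish in the pond / total fish in the pond = tagged fish in the catch / size of the catch. A quick sketch with invented numbers:

    def estimate_population(tagged_total, catch_size, tagged_in_catch):
        # Solve tagged_total / N = tagged_in_catch / catch_size for N.
        return tagged_total * catch_size / tagged_in_catch

    # 50 fish were tagged; a later catch of 40 fish contains 8 tagged ones.
    print(estimate_population(50, 40, 8))   # 250.0 fish estimated in the pond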
Neighborhood Math This site contains four activities in a neighborhood setting: Math at the Mall, Math in the Park or City, Wheel Figure This Out, and Gearing Up. Students calculate the amount of floor space occupied by various stores, find the height of objects, and take a mathematical look at bicycles. The third and fourth activities involve both geometry and ratios. Answers and explanations of the four activities are included.
Understanding Rational Numbers and Proportions To work well with ratios, learners need a solid basis in the idea of rational number. This complete lesson includes three well-developed activities that investigate fractions, proportion, and unit rates—all through real-world problems students encounter at a bakery.
Ratios as Percentages
In teaching ratio, percentage is where the rubber meets the road! Students need to understand the concept of percent thoroughly, which is the objective of the first five resources here. Students also need practice in converting from fractions to decimals to percents, and in finding percentages. The last four resources offer practice in various scenarios, generally through a game format.
Grid and Percent It This lesson begins with a basic visual used in many textbooks: a 10 × 10 grid as a model for demonstrating percent as "parts per hundred." It goes on to extend the model to solve various percentage problems. Especially valuable are the illustration of each problem and the thorough explanation that accompanies it. This is an exceptional lesson plan!
Percentages In this interactive activity, students can enter any two of these three numbers: the whole, the part, and the percentage. The missing number is not only calculated but the relationship among the three is illustrated as a colored section of both a circle and a rectangle. The exercise is an excellent help to understanding the meaning of percentage.
Majority Vote: What Percentage Does It Take to Win a Vote? This problem challenges students' understanding of percentage. Two solutions are available, plus hints for getting started. Clicking on "Try these" leads to different but similar problems on percentage. Questions under "Did you know?" include "Can you have a percentage over 100?" and "When can you add, subtract, multiply, or divide percentages?" These questions can lead to interesting math conversations.
Fraction Model III Using this applet, students create a fraction for which the denominator is 100 and then make the numerator any value they choose. A visual of the fraction is shown—either as a circle, a rectangle, or a model with the decimal and percent equivalents of the fraction. An excellent aid in understanding the basics of percentages!
Tight Weave: Geometry This is a fractal that can be used to give a visual of percentages. At each stage in the creation of the fractal, the middle one-ninth of each purple square area is transformed to gold. This gives progressively smaller similar patterns of gold and purple. At any stage of iteration, the percentage of gold is given. Interesting questions that your class might consider: At what stage will more than 50% of the area be gold? Or you could pick a stage, show it visually, and ask the students to estimate the percentage of the original purple square that has turned to gold.
Dice Table This activity shows the student the possible results of rolling two dice. It can become a game between several students who select various combinations of results, which appear on an interactive table. The players then figure the probability of winning the roll, giving the probabilities as fractions, decimals, and percentages. Good practice in converting from fractions to percents.
Fraction Four A game for two players, this activity requires students to convert from fractions to percents, find percentages of a number, and more. Links go to game ideas and a brief discussion of the connection between fractions and percentages, presented as a talk between a student and a mentor.
Snap Saloon In this interactive online game, students practice matching fractions with decimals and percentages. Three levels of difficulty are available. This is one of 12 games from The Maths File Game Show.
Ratios in Building Scale Models
This is the hands-on area of ratios! These activities are for students who like to get in there and get dirty—in other words, all middle schoolers. Here they can make models, maps, floor plans, and pyramids, or consider the length of the Statue of Liberty's nose. All the problems deal with the idea of scale, the application of a scale factor, and the central question: What changes when an object is enlarged or shrunk to scale?
Floor Plan Your Classroom: Make an Architectural Plan in 3 Steps This resource guides the learner step-by-step in creating a floor plan of a classroom. The directions include drawings of student work. The three parts of the activity are: sketching a map of the classroom, making a scale drawing from the sketch, and drafting a CAD (computer-aided design) floor plan from the drawing.
Statue of Liberty: Is the Statue of Liberty's Nose Too Long? The full question is: "The arm of the Statue of Liberty is 42 feet. How long is her nose?" To answer the question, students first find the ratio of their own arm length to nose length and then apply their findings to the statue's proportions. The solution sets out different approaches to the problem, including the mathematics involved in determining proportion. Extension problems deal with shrinking a T-shirt and the length-to-width ratios of cereal boxes.
Scaling Away For this one-period lesson, students bring to class either a cylinder or a rectangular prism, and their knowledge of how to find surface area and volume. They apply a scale factor to these dimensions and investigate how the scaled-up model has changed from the original. Activity sheets and overheads are included, as well as a complete step-by-step procedure and questions for class discussion.
This activity provides instructions for making a scale model of the solar system, including an interactive tool to calculate the distances between the planets. The student selects a measurement to represent the diameter of the Sun, and the other scaled measurements are automatically calculated. Students can experiment with various numbers for the Sun's diameter and see how the interplanetary distances adjust to the scale size.
Mathematics of Cartography: Mathematics Topics This web page looks at scale in relation to making maps. It discusses coordinate systems as well as the distortions created when projecting three-dimensional space onto two-dimensional paper. One activity here has students use an online site to create a map of their neighborhoods (to scale, of course!).
Size and Scale This is a challenging and thorough activity on the physics of size and scale. Again, the product is a scale model of the Earth-moon system, but the main objective is understanding the relative sizes of bodies in our solar system and the problem of making a scale model of the entire solar system. The site contains a complete lesson plan, including motivating questions for discussion and extension problems.
Scaling the Pyramids Students working on this activity will compare the Great Pyramid to such modern structures as the Statue of Liberty and the Eiffel Tower. The site contains all the information needed, including a template, to construct a scale model of the Great Pyramid. Heights of other tall structures are given. A beautifully illustrated site!
Ratios in Geometry
Geometry offers a challenging arena in which to wrestle with ideas of ratio. Except for the first resource, the work below is more appropriate for the upper end of middle school than for the younger students. All of the resources include activities that will involve your students in working with visual, geometric figures that they can draw or manipulate online. You will notice the absence of a favorite and most significant ratio: π (pi). You will find several interesting resources on the circumference to diameter ratio in Going in Circles!
Constant Dimensions In this carefully developed lesson, students measure the length and width of a rectangle using standard units of measure as well as nonstandard units such as pennies, beads, and paper clips. When students mark their results on a length-versus-width graph, they find that the ratio of length to width of a rectangle is constant, in spite of the units. For many middle school students, not only is the discovery surprising but also opens up the whole meaning of ratio.
Parallel Lines and Ratio Three parallel lines are intersected by two straight lines. The classic problem is: If we know the ratio of the segments created by one of the straight lines, what can we know about the ratio of the segments along the other line? An applet allows students to clearly see the geometric reasoning involved. The activity is part of the Manipula Math site.
Figure and Ratio of Area A page shows two side-by-side grids, each with a blue rectangle inside. Students can change the height and width of these blue rectangles and then see how their ratios compare--not only of height and width but also, most important, of area. The exercise becomes most impressive visually when a tulip is placed inside the rectangles. As the rectangles' dimensions are changed, the tulips grow tall and widen or shrink and flatten. An excellent visual! The activity is part of the Manipula Math site.
Cylinders and Scale Activity Using a film canister as a pattern, students create a paper cylinder. They measure its height, circumference, and surface area, then scale up by doubling and even tripling the linear dimensions. They can track the effect on these measurements, on the area, and finally on the amount of sand that fits into each module (volume). The lesson is carefully described and includes handouts.
The Fibonacci Numbers and the Golden Section Here students can explore the properties of the Fibonacci numbers, find out where they occur in nature, and learn about the golden ratio. Illustrations, diagrams, and graphs are included.
The Golden Ratio Another site that introduces the golden ratio, this resource offers seven activities that guide students in constructing a golden rectangle and spiral. Although designed for ninth and tenth graders, the explorations are appropriate for middle school students as well.
Ratios in Children's Books
Middle schoolers may be surprised and pleased to find ratios treated as the subject of these three picture books. You can find the books in school or public libraries. They are also available from online booksellers.
Cut Down to Size at High Noon by Scott Sundby and illustrated by Wayne Geehan
This parody of classic western movies teaches scale and proportion. The story takes place in Cowlick, a town filled with people with intricate western-themed hairstyles that the town's one and only barber creates with the help of scale drawings. Enter a second barber, and the town does not seem big enough for both of them! The story reaches its high point of suspense when the two barbers face off with scissors at high noon. The duel ends in a draw of equally magnificent haircuts, one in the shape of a grasshopper and the other in the shape of a train engine, and the reader learns that scale drawings can be used to scale up as well as down.
If You Hopped Like a Frog by David M. Schwartz and illustrated by James Warhola
Imagine, with the help of ratio and proportion, what you could accomplish if you could hop like a frog or eat like a shrew. You would certainly be a shoo-in for the Guinness World Records. The book first shows what a person could do if he or she could hop proportionately as far as a frog or were proportionately as powerful as an ant. At the back of the book, the author explains each example and poses questions at the end of the explanations.
If the World Were a Village: A Book about the World's People by David J. Smith and illustrated by Shelagh Armstrong
How can you comprehend statistics about a world brimming with more than 6.2 billion people (the population in January 2002)? One answer to understanding large numbers is to create a scale where 100 people represent the total world population and change the other numbers proportionally. In a world of 100 people, how many people (approximately) would come from China? (21) From India? (17) From the United States? (5) In the same way, the book presents statistics about the different languages spoken in the world, age distributions, religions, air and water quality, and much more.
SMARTR: Virtual Learning Experiences for Students
Visit our student site SMARTR to find related virtual learning experiences for your students! The SMARTR learning experiences were designed both for and by middle school aged students. Students from around the country participated in every stage of SMARTR’s development and each of the learning experiences includes multimedia content including videos, simulations, games and virtual activities. Visit the virtual learning experience on Ratios.
The FunWorks Visit the FunWorks STEM career website to learn more about a variety of math-related careers (click on the Math link at the bottom of the home page).
Within the NCTM Principles and Standards, the concept of ratio falls under the Number and Operations Standard. The document states that one curricular focus at this level is "the proposed emphasis on proportionality as an integrative theme in the middle-grades mathematics program. Facility with proportionality develops through work in many areas of the curriculum, including ratio and proportion, percent, similarity, scaling," and more. Another focus identified for middle school is rational numbers, including conceptual understanding, computation, and learning to "think flexibly about relationships among fractions, decimals, and percents" (NCTM, 2000, p. 212).
Characteristically, the document emphasizes the deep understandings that underlie the coursework. For example, to work proficiently with fractions, decimals, and percents, a solid concept of rational number is needed. Many students hold serious misconceptions about what a fraction is and how it relates to a decimal or a percent. They can develop a clearer, more intuitive understanding through "experiences with a variety of models" that "offer students concrete representations of abstract ideas" (pp. 215-216).
The online resources in this unit offer several models for hands-on encounters with ratio under some of its many guises: a rate, a scale factor, a percent, a comparison of geometric dimensions. We hope that your students will enjoy their encounters with ratios and deepen their understanding of this useful concept.
Author and Copyright
Terese Herrera taught math several years at middle and high school levels, then earned a Ph.D. in mathematics education. She is a resource specialist for the Middle School Portal 2: Math & Science Pathways project.
| http://msp.ehe.osu.edu/wiki/index.php?title=MSP:MiddleSchoolPortal/Ratios_For_All_Occasions&oldid=300 | 13
162 | Special relativity, an introduction
- This article is intended as a generally accessible introduction to the subject.
Special relativity is a fundamental physics theory about space and time that was developed by Albert Einstein in 1905 as a modification of Newtonian physics. It was created to deal with some pressing theoretical and experimental issues in the physics of the time involving light and electrodynamics. The predictions of special relativity correspond closely to those of Newtonian physics at speeds which are low in comparison to that of light, but diverge rapidly for speeds which are a significant fraction of the speed of light. Special relativity has been experimentally tested on numerous occasions since its inception, and its predictions have been verified by those tests.
Einstein postulated that the speed of light is the same for all observers, irrespective of their motion relative to the light source. This was in total contradiction to classical mechanics, which had been accepted for centuries. Einstein's approach was based on thought experiments and calculations. In 1908, Hermann Minkowski reformulated the theory based on different postulates of a more geometrical nature. His approach depended on the existence of certain interrelations between space and time, which were considered completely separate in classical physics. This reformulation set the stage for further developments of physics.
Special relativity makes numerous predictions that are incompatible with Newtonian physics (and everyday intuition). The first such prediction described by Einstein is called the relativity of simultaneity, under which observers who are in motion with respect to each other may disagree on whether two events occurred at the same time or one occurred before the other. The other major predictions of special relativity are time dilation (under which a moving clock ticks more slowly than when it is at rest with respect to the observer), length contraction (under which a moving rod may be found to be shorter than when it is at rest with respect to the observer), and the equivalence of mass and energy (written as E=mc2). Special relativity predicts a non-linear velocity addition formula, which prevents speeds greater than that of light from being observed. Special relativity also explains why Maxwell's equations of electromagnetism are correct in any frame of reference, and how an electric field and a magnetic field are two aspects of the same thing.
Special relativity has received experimental support in many ways, and it has been proven far more accurate than Newtonian mechanics. The most famous experimental support is the Michelson-Morley experiment, the results of which (showing that the speed of light is a constant) were one factor that motivated the formulation of the theory of special relativity. Other significant tests are the Fizeau experiment (which was first done decades before special relativity was proposed), the detection of the transverse Doppler effect, and the Hafele-Keating experiment. Today, scientists are so comfortable with the idea that the speed of light is always the same that the meter is now defined as being the distance traveled by light in 1/299,792,458th of a second. This means that the speed of light is now defined as being 299,792,458 m/s.
Reference frames and Galilean relativity: A classical prelude
A reference frame is simply a selection of what constitutes stationary objects. Once the velocity of a certain object is arbitrarily defined to be zero, the velocity of everything else in the universe can be measured relative to it. When a train is moving at a constant velocity past a platform, one may either say that the platform is at rest and the train is moving or that the train is at rest and the platform is moving past it. These two descriptions correspond to two different reference frames. They are respectively called the rest frame of the platform and the rest frame of the train (sometimes simply the platform frame and the train frame).
The question naturally arises, can different reference frames be physically differentiated? In other words, can one conduct some experiments to claim that "we are now in an absolutely stationary reference frame?" Aristotle thought that all objects tend to cease moving and come to rest if there are no forces acting on them. Galileo challenged this idea and argued that the concept of absolute motion was unreal. All motion was relative. An observer who couldn't refer to some isolated object (if, say, he was imprisoned inside a closed spaceship) could never distinguish whether according to some external observer he was at rest or moving with constant velocity. Any experiment he could conduct would give the same result in both cases. However, accelerated reference frames are experimentally distinguishable. For example, if an astronaut moving in free space saw that the tea in his tea-cup was slanted rather than horizontal, he would be able to infer that his spaceship was accelerated. Thus not all reference frames are equivalent, but there is a class of reference frames, all moving at uniform velocity with respect to each other, in all of which Newton's first law holds. These are called the inertial reference frames and are fundamental to both classical mechanics and SR. Galilean relativity thus states that the laws of physics cannot depend on absolute velocity; they must stay the same in any inertial reference frame. Galilean relativity is thus a fundamental principle in classical physics.
Mathematically, it says that if one transforms all velocities to a different reference frame, the laws of physics must be unchanged. What is this transformation that must be applied to the velocities? Galileo gave the common-sense "formula" for adding velocities: If
- Particle P is moving at velocity v with respect to reference frame A and
- Reference frame A is moving at velocity u with respect to reference frame B, then
- The velocity of P with respect to B is given by v + u.
The formula for transforming coordinates between different reference frames is called the Galilean transformation. The principle of Galilean relativity then demands that laws of physics be unchanged if the Galilean transformation is applied to them. Laws of classical mechanics, like Newton's second law, obey this principle because they have the same form after applying the transformation. As Newton's law involves the derivative of velocity, any constant velocity added in a Galilean transformation to a different reference frame contributes nothing (the derivative of a constant is zero). Addition of a time-varying velocity (corresponding to an accelerated reference frame) will however change the formula (see pseudo force), since Galilean relativity only applies to non-accelerated inertial reference frames.
Time is the same in all reference frames because it is absolute in classical mechanics. All observers measure exactly the same intervals of time and there is such a thing as an absolutely correct clock.
Invariance of length: The Euclidean picture
In special relativity, space and time are joined into a unified four-dimensional continuum called spacetime. To gain a sense of what spacetime is like, we must first look at the Euclidean space of Newtonian physics.
This approach to the theory of special relativity begins with the concept of "length." In everyday experience, it seems that the length of objects remains the same no matter how they are rotated or moved from place to place; as a result, the length of an object doesn't appear to change; it is "invariant." However, as is shown in the illustrations below, what is actually being suggested is that length seems to be invariant in a three-dimensional coordinate system.
The length of a line in a two-dimensional Cartesian coordinate system is given by Pythagoras' theorem: for a line from the origin to the point (x, y), s² = x² + y²
One of the basic theorems of vector algebra is that the length of a vector does not change when it is rotated. However, a closer inspection tells us that this is only true if we consider rotations confined to the plane. If we introduce rotation in the third dimension, then we can tilt the line out of the plane. In this case the projection of the line on the plane will get shorter. Does this mean length is not invariant? Obviously not. The world is three-dimensional and in a 3D Cartesian coordinate system the length is given by the three-dimensional version of Pythagoras's theorem: s² = x² + y² + z²
This is invariant under all rotations. The apparent violation of invariance of length only happened because we were "missing" a dimension. It seems that, provided all the directions in which an object can be tilted or arranged are represented within a coordinate system, the length of an object does not change under rotations. A 3-dimensional coordinate system is enough in classical mechanics because time is assumed absolute and independent of space in that context. It can be considered separately.
Note that invariance of length is not ordinarily considered a dynamical principle, or even a theorem. It is simply a statement about the fundamental nature of space itself. Space as we ordinarily conceive it is called a three-dimensional Euclidean space, because its geometrical structure is described by the principles of Euclidean geometry. The formula for distance between two points is a fundamental property of a Euclidean space; it is called the Euclidean metric tensor (or simply the Euclidean metric). In general, distance formulas are called metric tensors.
Note that rotations are fundamentally related to the concept of length. In fact, one may define length or distance to be that which stays the same (is invariant) under rotations, or define rotations to be that which keep the length invariant. Given any one, it is possible to find the other. If we know the distance formula, we can find out the formula for transforming coordinates in a rotation. If, on the other hand, we have the formula for rotations then we can find out the distance formula.
The postulates of Special Relativity
Einstein developed Special Relativity on the basis of two postulates:
- First postulate—Special principle of relativity—The laws of physics are the same in all inertial frames of reference. In other words, there are no privileged inertial frames of reference.
- Second postulate—Invariance of c—The speed of light in a vacuum is independent of the motion of the light source.
Special Relativity can be derived from these postulates, as was done by Einstein in 1905. Einstein's postulates are still applicable in the modern theory but the origin of the postulates is more explicit. It is shown below how the existence of a universally constant velocity (the speed of light) is a consequence of modeling the universe as a particular four-dimensional space having certain specific properties. The principle of relativity is a result of Minkowski structure being preserved under Lorentz transformations, which are postulated to be the physical transformations of inertial reference frames.
The Minkowski formulation: Introduction of spacetime
After Einstein derived special relativity formally from the counterintuitive proposition that the speed of light is the same to all observers, the need was felt for a more satisfactory formulation. Minkowski, building on mathematical approaches used in non-Euclidean geometry and the mathematical work of Lorentz and Poincaré, realized that a geometric approach was the key. Minkowski showed in 1908 that Einstein's new theory could be explained in a natural way if the concept of separate space and time is replaced with one four-dimensional continuum called spacetime. This was a groundbreaking concept, and Roger Penrose has said that relativity was not truly complete until Minkowski reformulated Einstein's work.
The concept of a four-dimensional space is hard to visualize. It may help at the beginning to think simply in terms of coordinates. In three-dimensional space, one needs three real numbers to refer to a point. In the Minkowski space, one needs four real numbers (three space coordinates and one time coordinate) to refer to a point at a particular instant of time. This point at a particular instant of time, specified by the four coordinates, is called an event. The distance between two different events is called the spacetime interval.
A path through the four-dimensional spacetime, usually called Minkowski space, is called a world line. Since it specifies both position and time, a particle having a known world line has a completely determined trajectory and velocity. This is just like graphing the displacement of a particle moving in a straight line against the time elapsed. The curve contains the complete motional information of the particle.
In the same way that the measurement of distance in 3D space needed all three coordinates, we must include time as well as the three space coordinates when calculating the distance in Minkowski space (henceforth called M). In a sense, the spacetime interval provides a combined measure of how far apart two events occur in space and how much time elapses between their occurrence.
But there is a problem. Time is related to the space coordinates, but they are not equivalent. Pythagoras's theorem treats all coordinates on an equal footing (see Euclidean space for more details). We can exchange two space coordinates without changing the length, but we cannot simply exchange a space coordinate with time; they are fundamentally different. It is an entirely different thing for two events to be separated in space and to be separated in time. Minkowski proposed that the formula for distance needed a change. He found that the correct formula was actually quite simple, differing only by a sign from Pythagoras's theorem: s² = x² + y² + z² - (ct)²
where c is a constant and t is the time coordinate. Multiplication by c, which has the dimension m/s, converts the time to units of length, and this constant has the same value as the speed of light. So the spacetime interval between two distinct events is given by s² = (x1 - x2)² + (y1 - y2)² + (z1 - z2)² - c²(t1 - t2)²
There are two major points to be noted. Firstly, time is being measured in the same units as length by multiplying it by a constant conversion factor. Secondly, and more importantly, the time-coordinate has a different sign than the space coordinates. This means that in the four-dimensional spacetime, one coordinate is different from the others and influences the distance differently. This new 'distance' may be zero or even negative. This new distance formula, called the metric of the spacetime, is at the heart of relativity. This distance formula is called the metric tensor of M. This minus sign means that a lot of our intuition about distances can not be directly carried over into spacetime intervals. For example, the spacetime interval between two events separated both in time and space may be zero (see below). From now on, the terms distance formula and metric tensor will be used interchangeably, as will be the terms Minkowski metric and spacetime interval.
In Minkowski spacetime the spacetime interval is the invariant length; the ordinary 3D length is not required to be invariant. The spacetime interval must stay the same under rotations, but ordinary lengths can change. Just like before, we were missing a dimension. Note that everything so far is merely a set of definitions. We define a four-dimensional mathematical construct which has a special formula for distance, where distance means that which stays the same under rotations (alternatively, one may define a rotation to be that which keeps the distance unchanged).
Now comes the physical part. Rotations in Minkowski space have a different interpretation than ordinary rotations. These rotations correspond to transformations of reference frames. Passing from one reference frame to another corresponds to rotating the Minkowski space. An intuitive justification for this is given below, but mathematically this is a dynamical postulate just like assuming that physical laws must stay the same under Galilean transformations (which seems so intuitive that we don't usually recognize it to be a postulate).
Since by definition rotations must keep the distance the same, passing to a different reference frame must keep the spacetime interval between two events unchanged. This requirement can be used to derive an explicit mathematical form for the transformation that must be applied to the laws of physics (compare with the application of Galilean transformations to classical laws) when shifting reference frames. These transformations are called the Lorentz transformations. Just as the Galilean transformations are the mathematical statement of the principle of Galilean relativity in classical mechanics, the Lorentz transformations are the mathematical form of Einstein's principle of relativity. Laws of physics must stay the same under Lorentz transformations. Maxwell's equations and Dirac's equation satisfy this property, and hence, they are relativistically correct laws (but classically incorrect, since they don't transform correctly under Galilean transformations).
With the statement of the Minkowski metric, the common name for the distance formula given above, the theoretical foundation of special relativity is complete. The entire basis for special relativity can be summed up by the geometric statement "changes of reference frame correspond to rotations in the 4D Minkowski spacetime, which is defined to have the distance formula given above." The unique dynamical predictions of SR stem from this geometrical property of spacetime. Special relativity may be said to be the physics of Minkowski spacetime. In this case of spacetime, there are six independent rotations to be considered. Three of them are the standard rotations on a plane in two directions of space. The other three are rotations in a plane of both space and time: These rotations correspond to a change of velocity, and are described by the traditional Lorentz transformations.
As has been mentioned before, one can replace distance formulas with rotation formulas. Instead of starting with the invariance of the Minkowski metric as the fundamental property of spacetime, one may state (as was done in classical physics with Galilean relativity) the mathematical form of the Lorentz transformations and require that physical laws be invariant under these transformations. This makes no reference to the geometry of spacetime, but will produce the same result. This was in fact the traditional approach to SR, used originally by Einstein himself. However, this approach is often considered to offer less insight and be more cumbersome than the more natural Minkowski formalism.
Reference frames and Lorentz transformations: Relativity revisited
We have already discussed that in classical mechanics coordinate frame changes correspond to Galilean transformations of the coordinates. Is this adequate in the relativistic Minkowski picture?
Suppose there are two people, Bill and John, on separate planets that are moving away from each other. Bill and John are on separate planets so they both think that they are stationary. John draws a graph of Bill's motion through space and time and this is shown in the illustration below:
John sees that Bill is moving through space as well as time but Bill thinks he is moving through time alone. Bill would draw the same conclusion about John's motion. In fact, these two views, which would be classically considered a difference in reference frames, are related simply by a coordinate transformation in M. Bill's view of his own world line and John's view of Bill's world line are related to each other simply by a rotation of coordinates. One can be transformed into the other by a rotation of the time axis. Minkowski geometry handles transformations of reference frames in a very natural way.
Changes in reference frame, represented by velocity transformations in classical mechanics, are represented by rotations in Minkowski space. These rotations are called Lorentz transformations. They are different from the Galilean transformations because of the unique form of the Minkowski metric. The Lorentz transformations are the relativistic equivalent of Galilean transformations. Laws of physics, in order to be relativistically correct, must stay the same under Lorentz transformations. The physical statement that they must be same in all inertial reference frames remains unchanged, but the mathematical transformation between different reference frames changes. Newton's laws of motion are invariant under Galilean rather than Lorentz transformations, so they are immediately recognizable as non-relativistic laws and must be discarded in relativistic physics. Schrödinger's equation is also non-relativistic.
Maxwell's equations are trickier. They are written using vectors and at first glance appear to transform correctly under Galilean transformations. But on closer inspection, several questions are apparent that can not be satisfactorily resolved within classical mechanics (see History of special relativity). They are indeed invariant under Lorentz transformations and are relativistic, even though they were formulated before the discovery of special relativity. Classical electrodynamics can be said to be the first relativistic theory in physics. To make the relativistic character of equations apparent, they are written using 4-component vector like quantities called 4-vectors. 4-Vectors transform correctly under Lorentz transformations. Equations written using 4-vectors are automatically relativistic. This is called the manifestly covariant form of equations. 4-Vectors form a very important part of the formalism of special relativity.
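As an illustrative sketch (the notation here is the conventional one, not anything introduced earlier in this article), the coordinates of an event can be collected into a position 4-vector
X = (ct, x, y, z)
whose invariant length under Lorentz transformations is exactly the spacetime interval x² + y² + z² - (ct)² discussed above. An equation built entirely from quantities that transform like X keeps its form in every inertial frame, which is what "manifestly covariant" means.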
Einstein's postulate: The constancy of the speed of light
Einstein's postulate that the speed of light is a constant comes out as a natural consequence of the Minkowski formulation:
- When an object is traveling at c in a certain reference frame, the spacetime interval is zero.
- The spacetime interval between the origin-event (0, 0, 0, 0) and an event (x, y, z, t) is s² = x² + y² + z² - c²t²
- The distance travelled by an object moving at velocity v for t seconds is: √(x² + y² + z²) = vt
- Since the velocity v equals c we have x² + y² + z² = c²t²
- Hence the spacetime interval between the events of departure and arrival is given by s² = c²t² - c²t² = 0
- An object traveling at c in one reference frame is traveling at c in all reference frames.
- Let the object move with velocity v when observed from a different reference frame. A change in reference frame corresponds to a rotation in M. Since the spacetime interval must be conserved under rotation, the spacetime interval must be the same in all reference frames. In proposition 1 we showed it to be zero in one reference frame, hence it must be zero in all other reference frames. We get that x² + y² + z² - c²t² = 0
- which implies √(x² + y² + z²) = ct, so the speed measured in the new frame is √(x² + y² + z²) / t = c.
The paths of light rays have a zero spacetime interval, and hence all observers will obtain the same value for the speed of light. Therefore, when assuming that the universe has four dimensions that are related by Minkowski's formula, the speed of light appears as a constant, and does not need to be assumed (postulated) to be constant as in Einstein's original approach to special relativity.
Clock delays and rod contractions: More on Lorentz transformations
Another consequence of the invariance of the spacetime interval is that clocks will appear to go slower on objects that are moving relative to you. This is very similar to how the 2D projection of a line rotated into the third-dimension appears to get shorter. Length is not conserved simply because we are ignoring one of the dimensions. Let us return to the example of John and Bill.
John observes the length of Bill's spacetime interval as s² = (vt)² - (ct)², where t is the time recorded by John's clock and vt is the distance he sees Bill cover in that time,
whereas Bill doesn't think he has traveled in space, so he writes s² = -(cT)², where T is the time recorded by his own clock.
The spacetime interval, s², is invariant. It has the same value for all observers, no matter who measures it or how they are moving in a straight line. This means that Bill's spacetime interval equals John's observation of Bill's spacetime interval, so -(cT)² = (vt)² - (ct)², which gives T = t √(1 - v²/c²).
So, if John sees a clock that is at rest in Bill's frame record one second, John will find that his own clock measures between these same ticks an interval t, called coordinate time, which is greater than one second. It is said that clocks in motion slow down, relative to those of observers at rest. This is known as "relativistic time dilation of a moving clock." The time that is measured in the rest frame of the clock (in Bill's frame) is called the proper time of the clock.
In special relativity, therefore, changes in reference frame affect time also. Time is no longer absolute. There is no universally correct clock, time runs at different rates for different observers.
Similarly it can be shown that John will also observe measuring rods at rest on Bill's planet to be shorter in the direction of motion than his own measuring rods. This is a prediction known as "relativistic length contraction of a moving rod." If the length of a rod at rest on Bill's planet is X, then we call this quantity the proper length of the rod. The length x of that same rod as measured on John's planet is called the coordinate length, and is given by x = X √(1 - v²/c²).
These two equations can be combined to obtain the general form of the Lorentz transformation in one spatial dimension (writing X and T for coordinates measured in Bill's frame, and x and t for coordinates measured in John's frame): x = γ(X + vT),  t = γ(T + vX/c²)
where the Lorentz factor is given by γ = 1 / √(1 - v²/c²)
The above formulas for clock delays and length contractions are special cases of the general transformation.
Alternatively, these equations for time dilation and length contraction (here obtained from the invariance of the spacetime interval), can be obtained directly from the Lorentz transformation by setting X = 0 for time dilation, meaning that the clock is at rest in Bill's frame, or by setting t = 0 for length contraction, meaning that John must measure the distances to the end points of the moving rod at the same time.
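To carry out these substitutions explicitly: setting X = 0 in the transformation above gives t = γT, the time-dilation formula, while setting t = 0 forces T = -vX/c², so that
x = γ(X + vT) = γX(1 - v²/c²) = X/γ = X √(1 - v²/c²),
which is the length-contraction formula.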
A consequence of the Lorentz transformations is the modified velocity-addition formula: if a particle moves at velocity v with respect to frame A, and frame A moves at velocity u with respect to frame B, then the particle's velocity with respect to B is (v + u) / (1 + vu/c²) rather than simply v + u.
Simultaneity and clock desynchronization
Rather counter-intuitively, special relativity suggests that when 'at rest' we are actually moving through time at the speed of light. As we speed up in space we slow down in time. At the speed of light in space, time slows down to zero. This is a rotation of the time axis into the space axis. We observe an object speeding by relativistically as having its time axis tilted away from a right angle.
The consequence of this in Minkowski's spacetime is that clocks will appear to be out of phase with each other along the length of a moving object. This means that if one observer sets up a line of clocks that are all synchronized so they all read the same time, then another observer who is moving along the line at high speed will see the clocks all reading different times. This means that observers who are moving relative to each other see different events as simultaneous. This effect is known as "Relativistic Phase" or the "Relativity of Simultaneity." Relativistic phase is often overlooked by students of special relativity, but if it is understood, then phenomena such as the twin paradox are easier to understand.
Observers have a set of simultaneous events around them that they regard as composing the present instant. The relativity of simultaneity results in observers who are moving relative to each other having different sets of events in their present instant.
The net effect of the four-dimensional universe is that observers who are in motion relative to you seem to have time coordinates that lean over in the direction of motion, and consider things to be simultaneous that are not simultaneous for you. Spatial lengths in the direction of travel are shortened, because they tip upwards and downwards, relative to the time axis in the direction of travel, akin to a rotation out of three-dimensional space.
Great care is needed when interpreting spacetime diagrams. Diagrams present data in two dimensions, and cannot show faithfully how, for instance, a zero length spacetime interval appears.
Mass Velocity Relationship
E = mc2 where m stands for rest mass (invariant mass), applies most simply to single particles with no net momentum. But it also applies to ordinary objects composed of many particles so long as the particles are moving in different directions so the total momentum is zero. The mass of the object includes contributions from heat and sound, chemical binding energies and trapped radiation. Familiar examples are a tank of gas, or a hot bowl of soup. The kinetic energy of their particles, the heat motion and radiation, contribute to their weight on a scale according to E = mc2.
The formula is the special case of the relativistic energy-momentum relationship: (mc²)² = E² - (pc)²
This equation gives the rest mass of a system which has an arbitrary amount of momentum and energy. The interpretation of this equation is that the rest mass is the relativistic length of the energy-momentum four-vector.
If the equation E = mc2 is used with the rest mass of the object, the E given by the equation will be the rest energy of the object, and will change according to the object's internal energy (heat, sound, and chemical binding energies), but will not change with the object's overall motion.
If the equation E = mc2 is used with the relativistic mass of the object, the energy will be the total energy of the object, which is conserved in collisions with other fast moving objects.
In developing special relativity, Einstein found that the total energy of a moving body is
E = m0c² / √(1 - v²/c²)
with v the velocity.
For small velocities, this reduces to
E ≈ m0c² + (1/2) m0v²
which includes the Newtonian kinetic energy, as expected, but also an enormous constant term, which is not zero when the object isn't moving.
The total momentum is:
p = m0v / √(1 - v²/c²)
The ratio of the momentum to the velocity is the relativistic mass, and this ratio is equal to the total energy divided by c2. The energy and relativistic mass are always related by the famous formula.
While this is suggestive, it does not immediately imply that the energy and mass are equivalent, because the energy can always be redefined by adding or subtracting a constant. So it is possible to subtract m0c2 from the expression for E, and the result is also a valid conserved quantity, although an ugly one. Einstein needed to know whether the rest-mass of the object is really an energy, or whether the constant term was just a mathematical convenience with no physical meaning.
In order to see if m0c2 is physically significant, Einstein considered processes of emission and absorption. He needed to establish that an object loses mass when it emits energy. He did this by analyzing two-photon emission in two different frames.
After Einstein first made his proposal, it became clear that the word mass can have two different meanings. The rest mass is what Einstein called m, but others defined the relativistic mass as: mrel = m / √(1 - v²/c²)
This mass is the ratio of momentum to velocity, and it is also the relativistic energy divided by c2. So the equation E = mrelc2 holds for moving objects. When the velocity is small, the relativistic mass and the rest mass are almost exactly the same.
E = mc2 either means E = m0c2 for an object at rest, or E = mrelc2 when the object is moving.
Einstein's original papers treated m as what would now be called the rest mass and some claim that he did not like the idea of "relativistic mass." When modern physicists say "mass," they are usually talking about rest mass, since if they meant "relativistic mass," they would just say "energy."
We can rewrite the expression for the energy as a Taylor series: E = m0c² + (1/2) m0v² + (3/8) m0v⁴/c² + ...
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v / c is small. For low speeds we can ignore all but the first two terms: E ≈ m0c² + (1/2) m0v²
The classical energy equation ignores both the m0c2 part, and the high-speed corrections. This is appropriate, because all the high order corrections are small. Since only changes in energy affect the behavior of objects, whether we include the m0c2 part makes no difference, since it is constant. For the same reason, it is possible to subtract the rest energy from the total energy in relativity. In order to see if the rest energy has any physical meaning, it is essential to consider emission and absorption of energy in different frames.
The higher-order terms are extra correction to Newtonian mechanics which become important at higher speeds. The Newtonian equation is only a low speed approximation, but an extraordinarily good one. All of the calculations used in putting astronauts on the moon, for example, could have been done using Newton's equations without any of the higher order corrections.
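A rough order-of-magnitude check, using illustrative numbers rather than figures from the text: for a spacecraft moving at about Earth's escape velocity, v ≈ 1.1 × 10⁴ m/s, the ratio of the first correction term to the Newtonian kinetic energy is
[(3/8) m0v⁴/c²] / [(1/2) m0v²] = (3/4)(v/c)² ≈ (3/4) × (1.1 × 10⁴ / 3.0 × 10⁸)² ≈ 1 × 10⁻⁹,
about one part in a billion, which is why Newton's equations were more than good enough for the calculations that put astronauts on the moon.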
Mass-energy equivalence: Sunlight and atom bombs
Einstein showed that mass is simply another form of energy. The energy equivalent of rest mass m is E = mc2. This equivalence implies that mass should be interconvertible with other forms of energy. This is the basic principle behind atom bombs and the production of energy in nuclear reactors and stars (like the Sun).
The standard model of the structure of matter has it that most of the 'mass' of the atom is in the atomic nucleus, and that most of this nuclear mass is in the intense field of light-like gluons swathing the quarks. Most of what is called the mass of an object is thus already in the form of energy, the energy of the quantum color field that confines the quarks.
The sun, for instance, fuels its prodigious output of energy by converting each second 600 billion kilograms of hydrogen-1 (single protons) into roughly 595.8 billion kilograms of helium-4 (2 protons combined with 2 neutrons)—the 4.2 billion kilogram difference is the energy which the sun radiates into space each second. The sun, it is estimated, will continue to turn 4.2 billion kilos of mass into energy for the next 5 billion years or so before leaving the main sequence.
The atomic bombs that ended the Second World War, in comparison, converted about a thirtieth of an ounce of mass into energy.
The energy involved in chemical reactions is so small, however, that the conservation of mass is an excellent approximation.
General relativity: A peek forward
Unlike Newton's laws of motion, relativity is not based upon dynamical postulates. It does not assume anything about motion or forces. Rather, it deals with the fundamental nature of spacetime. It is concerned with describing the geometry of the backdrop on which all dynamical phenomena take place. In a sense, therefore, it is a meta-theory, a theory that lays out a structure that all other theories must follow. In truth, special relativity is only a special case. It assumes that spacetime is flat. That is, it assumes that the structure of Minkowski space and the Minkowski metric tensor are constant throughout. In general relativity, Einstein showed that this is not true. The structure of spacetime is modified by the presence of matter. Specifically, the distance formula given above is no longer generally valid except in space free from mass. However, just as a curved surface can be considered flat in the infinitesimal limit of calculus, a curved spacetime can be considered flat at a small scale. This means that the Minkowski metric written in the differential form is generally valid.
One says that the Minkowski metric is valid locally, but it fails to give a measure of distance over extended distances. It is not valid globally. In fact, in general relativity the global metric itself becomes dependent on the mass distribution and varies through space. The central problem of general relativity is to solve the famous Einstein field equations for a given mass distribution and find the distance formula that applies in that particular case. Minkowski's spacetime formulation was the conceptual stepping stone to general relativity. His fundamentally new outlook allowed not only the development of general relativity, but also, to some extent, that of quantum field theories.
- ↑ Einstein, Albert, On the Electrodynamics of Moving Bodies, Annalen der Physik 17: 891-921. Retrieved December 18, 2007.
- ↑ Hermann Minkowski, Raum und Zeit, 80. Versammlung Deutscher Naturforscher, Physikalische Zeitschrift 10: 104-111.
- ↑ UCR, What is the experimental basis of Special Relativity? Retrieved December 22, 2007.
- ↑ Core Power, What is the experimental basis of the Special Relativity Theory? Retrieved December 22, 2007.
- ↑ S. Walter and J. Gray (eds.), "The non-Euclidean style of Minkowskian relativity." The Symbolic Universe (Oxford, UK: Oxford University Press, 1999, ISBN 0198500882).
- ↑ 6.0 6.1 Albert Einstein, R.W. Lawson (trans.), Relativity. The Special and General Theory (London, UK: Routledge classics, 2003).
- ↑ Richard Feynman, Six Not-So-Easy Pieces (Reading, MA: Addison-Wesley, ISBN 0201150255).
- ↑ Hermann Weyl, Space, Time, Matter (New York, NY: Dover Books, 1952).
- ↑ Kip Thorne and Roger Blandford, Caltech physics notes, Caltech. Retrieved December 18, 2007.
- ↑ FourmiLab, Special Relativity. Retrieved December 19, 2007.
- ↑ UCR, usenet physics FAQ. Retrieved December 19, 2007.
- Bais, Sander. 2007. Very Special Relativity: An Illustrated Guide. Cambridge, MA: Harvard University Press. ISBN 067402611X.
- Robinson, F.N.H. 1996. An Introduction to Special Relativity and Its Applications. River Edge, NJ: World Scientific Publishing Company. ISBN 9810224990.
- Stephani, Hans. 2004. Relativity: An Introduction to Special and General Relativity. Cambridge, UK: Cambridge University Press. ISBN 0521010691.
Special relativity for a general audience (no math knowledge required)
- Einstein Light An award-winning, non-technical introduction (film clips and demonstrations) supported by dozens of pages of further explanations and animations, at levels with or without mathematics. Retrieved December 18, 2007.
- Einstein Online Introduction to relativity theory, from the Max Planck Institute for Gravitational Physics. Retrieved December 18, 2007.
Special relativity explained (using simple or more advanced math)
- Albert Einstein. Relativity: The Special and General Theory. New York: Henry Holt 1920. BARTLEBY.COM, 2000. Retrieved December 18, 2007.
- Usenet Physics FAQ. Retrieved December 18, 2007.
- Sean Carroll's online Lecture Notes on General Relativity. Retrieved December 18, 2007.
- Hyperphysics Time Dilation. Retrieved December 18, 2007.
- Hyperphysics Length Contraction. Retrieved December 18, 2007.
- Greg Egan's Foundations. Retrieved December 18, 2007.
- Special Relativity Simulation. Retrieved December 18, 2007.
- Caltech Relativity Tutorial A basic introduction to concepts of Special and General Relativity, requiring only a knowledge of basic geometry. Retrieved December 18, 2007.
- Relativity Calculator - Learn Special Relativity Mathematics Mathematics of special relativity presented in as simple and comprehensive manner possible within philosophical and historical contexts. Retrieved December 18, 2007.
- Special relativity made stupid. Retrieved December 18, 2007.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution; credit is due both to the New World Encyclopedia contributors and to the volunteer contributors of the Wikimedia Foundation. | http://www.newworldencyclopedia.org/entry/Special_relativity%2C_an_introduction | 13
84 | This HTML version of the book is provided as a convenience, but some math equations are not translated correctly. The PDF version is more reliable.
Chapter 12 Vectors as vectors
12.1 What’s a vector?
The word “vector” means different things to different people. In MATLAB, a vector is a matrix that has either one row or one column. So far we have used MATLAB vectors to represent sequences of values and the state of a system (the vectors we have passed to, and gotten back from, ode45).
In this chapter we will see another use of MATLAB vectors: representing spatial vectors. A spatial vector is a value that represents a multidimensional physical quantity like position, velocity, acceleration or force.
These quantities cannot be described with a single number because they contain multiple components. For example, in a 3-dimensional Cartesian coordinate space, it takes three numbers to specify a position in space; they are usually called x, y and z coordinates. As another example, in 2-dimensional polar coordinates, you can specify a velocity with two numbers, a magnitude and an angle, often called r and θ.
It is convenient to represent spatial vectors using MATLAB vectors because MATLAB knows how to perform most of the vector operations you need for physical modeling. For example, suppose that you are given the velocity of a baseball in the form of a MATLAB vector with two elements, vx and vy, which are the components of velocity in the x and y directions.
>> V = [30, 40] % velocity in m/s
And suppose you are asked to compute the total acceleration of the ball due to drag and gravity. In math notation, the force due to drag is Fd = -(1/2) ρ v² Cd A V̂
where V is a spatial vector representing velocity, v is the magnitude of the velocity (sometimes called “speed”), and V̂ is a unit vector in the direction of the velocity vector. The other terms, ρ, A and Cd, are scalars.
The magnitude of a vector is the square root of the sum of the squares of the elements. You could compute it with hypotenuse from Section 5.5, or you could use the MATLAB function norm (norm is another name for the magnitude of a vector):
>> v = norm(V)
v = 50
V̂ is a unit vector, which means it should have norm 1, and it should point in the same direction as V. The simplest way to compute it is to divide V by its own norm.
>> Vhat = V / v
Vhat = 0.6 0.8
Then we can confirm that the norm of V̂ is 1:
>> norm(Vhat)
ans = 1
To compute Fd we just multiply the scalar terms by V̂.
Fd = - 1/2 * C * rho * A * v^2 * Vhat
Similarly, we can compute acceleration by dividing the vector Fd by the scalar m.
Ad = Fd / m
To represent the acceleration of gravity, we create a vector with two components:
Ag = [0; -9.8]
The x component of gravity is 0; the y component is −9.8 m/s2.
Finally we compute total acceleration by adding vector quantities:
A = Ag + Ad;
One nice thing about this computation is that we didn’t have to think much about the components of the vectors. By treating spatial vectors as basic quantities, we can express complex computations concisely.
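One way to collect these steps is to put them in a single function. The parameter values below (mass, drag coefficient, air density and cross-sectional area) are made-up numbers for a baseball, not values given in this chapter, and V is taken to be a column vector like [30; 40] so that it can be added to Ag directly:

function A = total_acceleration(V)
    % Total acceleration of a baseball due to drag and gravity.
    % V is the velocity as a column vector, e.g. [30; 40], in m/s.
    m = 0.145;      % mass in kg (assumed)
    Cd = 0.3;       % drag coefficient, dimensionless (assumed)
    rho = 1.2;      % density of air in kg/m^3 (assumed)
    Ac = 0.0042;    % cross-sectional area in m^2 (assumed)

    v = norm(V);                             % speed
    Vhat = V / v;                            % unit vector in the direction of V
    Fd = -1/2 * Cd * rho * Ac * v^2 * Vhat;  % drag force
    Ad = Fd / m;                             % acceleration due to drag
    Ag = [0; -9.8];                          % acceleration of gravity
    A = Ag + Ad;                             % total acceleration
end

With these numbers, total_acceleration([30; 40]) returns roughly [-7.8; -20.2] m/s^2.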
12.2 Dot and cross products
Multiplying a vector by a scalar is a straightforward operation; so is adding two vectors. But multiplying two vectors is more subtle. It turns out that there are two vector operations that resemble multiplication: dot product and cross product.
The dot product of vectors A and B is a scalar: d = a b cos θ
where a is the magnitude of A, b is the magnitude of B, and θ is the angle between the vectors. We already know how to compute magnitudes, and you could probably figure out how to compute θ, but you don’t have to. MATLAB provides a function, dot, that computes dot products.
d = dot(A, B)
dot works in any number of dimensions, as long as A and B have the same number of elements.
If one of the operands is a unit vector, you can use the dot product to compute the component of a vector A that is in the direction of a unit vector, î:
s = dot(A, ihat)
In this example, s is the scalar projection of A onto î. The vector projection is the vector that has magnitude s in the direction of î:
V = dot(A, ihat) * ihat
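For example, with made-up numbers, projecting a vector onto the x direction:

>> A = [3, 4];
>> ihat = [1, 0];
>> s = dot(A, ihat)
s = 3
>> V = dot(A, ihat) * ihat
V = 3 0

The scalar projection of A onto î is 3, and the vector projection is [3, 0].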
The cross product of vectors A and B is a vector whose direction is perpendicular to A and B and whose magnitude is c = a b sin θ
where (again) a is the magnitude of A, b is the magnitude of B, and θ is the angle between the vectors. MATLAB provides a function, cross, that computes cross products.
C = cross(A, B)
cross only works for 3-dimensional vectors; the result is a 3-dimensional vector.
A common use of cross is to compute torques. If you represent a moment arm R and a force F as 3-dimensional vectors, then the torque is just
Tau = cross(R, F)
If the components of R are in meters and the components of F are in Newtons, then the torques in Tau are in Newton-meters.
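For example, with a made-up moment arm along the x-axis and a force along the y-axis:

>> R = [1, 0, 0];    % moment arm in meters
>> F = [0, 10, 0];   % force in Newtons
>> Tau = cross(R, F)
Tau = 0 0 10

which is a torque of 10 Newton-meters about the z-axis, as you would expect from a 10 N force at the end of a 1 m arm.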
12.3 Celestial mechanics
Modeling celestial mechanics is a good opportunity to compute with spatial vectors. Imagine a star with mass m1 at a point in space described by the vector P1, and a planet with mass m2 at point P2. The magnitude of the gravitational force between them is G m1 m2 / r²
where r is the distance between them and G is the universal gravitational constant, which is about 6.67 × 10−11 N m2 / kg2. Remember that this is the appropriate value of G only if the masses are in kilograms, distances in meters, and forces in Newtons.
The direction of the force on the star at P1 is in the direction toward P2. We can compute relative direction by subtracting vectors; if we compute R = P2 - P1, then the direction of R is from P1 to P2.
The distance between the planet and star is the length of R:
r = norm(R)
The direction of the force on the star is the unit vector R̂:
rhat = R / r
Exercise 1 Write a sequence of MATLAB statements that computes F12, a vector that represents the force on the star due to the planet, and F21, the force on the planet due to the star.
Exercise 2 Encapsulate these statements in a function named gravity_force_func that takes P1, m1, P2, and m2 as input variables and returns F12.
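A minimal sketch of the function described in Exercise 2 might look like this (the force on the planet is then just F21 = -F12, by Newton's third law):

function F12 = gravity_force_func(P1, m1, P2, m2)
    % Force on the star at P1 due to the planet at P2.
    G = 6.67e-11;      % universal gravitational constant, N m^2 / kg^2
    R = P2 - P1;       % vector from P1 toward P2
    r = norm(R);       % distance between the bodies
    rhat = R / r;      % unit vector from P1 toward P2
    F12 = G * m1 * m2 / r^2 * rhat;
end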
Exercise 3 Write a simulation of the orbit of Jupiter around the Sun. The mass of the Sun is about 2.0 × 1030 kg. You can get the mass of Jupiter, its distance from the Sun and orbital velocity from http://en.wikipedia.org/wiki/Jupiter. Confirm that it takes about 4332 days for Jupiter to orbit the Sun.
Animation is a useful tool for checking the results of a physical model. If something is wrong, animation can make it obvious. There are two ways to do animation in MATLAB. One is to use getframe to capture a series of images and movie to play them back. The more informal way is to draw a series of plots. Here is an example I wrote for Exercise 12.3:
function animate_func(T,M)
    % animate the positions of the planets, assuming that the
    % columns of M are x1, y1, x2, y2.
    X1 = M(:,1);
    Y1 = M(:,2);
    X2 = M(:,3);
    Y2 = M(:,4);

    minmax = [min([X1;X2]), max([X1;X2]), min([Y1;Y2]), max([Y1;Y2])];

    for i=1:length(T)
        clf;
        axis(minmax);
        hold on;
        draw_func(X1(i), Y1(i), X2(i), Y2(i));
        drawnow;
    end
end
The input variables are the output from ode45, a vector T and a matrix M. The columns of M are the positions and velocities of the Sun and Jupiter, so X1 and Y1 get the coordinates of the Sun; X2 and Y2 get the coordinates of Jupiter.
minmax is a vector of four elements which is used inside the loop to set the axes of the figure. This is necessary because otherwise MATLAB scales the figure each time through the loop, so the axes keep changing, which makes the animation hard to watch.
Each time through the loop, animate_func uses clf to clear the figure and axis to reset the axes. hold on makes it possible to put more than one plot onto the same axes (otherwise MATLAB clears the figure each time you call plot).
Each time through the loop, we have to call drawnow so that MATLAB actually displays each plot. Otherwise it waits until you finish drawing all the figures and then updates the display.
draw_func is the function that actually makes the plot:
function draw_func(x1, y1, x2, y2)
    plot(x1, y1, 'r.', 'MarkerSize', 50);
    plot(x2, y2, 'b.', 'MarkerSize', 20);
end
The input variables are the position of the Sun and Jupiter. draw_func uses plot to draw the Sun as a large red marker and Jupiter as a smaller blue one.
Exercise 4 To make sure you understand how animate_func works, try commenting out some of the lines to see what happens.
One limitation of this kind of animation is that the speed of the animation depends on how fast your computer can generate the plots. Since the results from ode45 are usually not equally spaced in time, your animation might slow down where ode45 takes small time steps and speed up where the time step is larger.
There are two ways to fix this problem: you can give ode45 a vector of equally spaced points in time at which you want estimates (for example [0:step:end_time]), so that the frames are evenly spaced in simulated time, or you can use pause inside the loop to control how much real time elapses between frames.
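For the second approach, the loop inside animate_func might look something like this, where dt is an assumed variable holding the amount of real time to spend on each frame (it is not defined anywhere above):

for i=1:length(T)
    clf;
    axis(minmax);
    hold on;
    draw_func(X1(i), Y1(i), X2(i), Y2(i));
    drawnow;
    pause(dt);    % wait dt seconds of real time before drawing the next frame
end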
Exercise 5 Use animate_func and draw_func to visualize your simulation of Jupiter. Modify it so it shows one day of simulated time in 0.001 seconds of real time—one revolution should take about 4.3 seconds.
12.5 Conservation of Energy
A useful way to check the accuracy of an ODE solver is to see whether it conserves energy. For planetary motion, it turns out that ode45 does not.
The kinetic energy of a moving body is m v2 / 2; the kinetic energy of a solar system is the total kinetic energy of the planets and sun. The potential energy of a sun with mass m1 and a planet with mass m2 and a distance r between them is -G m1 m2 / r
Exercise 6 Write a function called energy_func that takes the output of your Jupiter simulation, T and M, and computes the total energy (kinetic and potential) of the system for each estimated position and velocity. Plot the result as a function of time and confirm that it decreases over the course of the simulation. Your function should also compute the relative change in energy, the difference between the energy at the beginning and end, as a percentage of the starting energy.
You can reduce the rate of energy loss by decreasing ode45’s tolerance option using odeset (see Section 11.1):
options = odeset('RelTol', 1e-5);
[T, M] = ode45(@rate_func, [0:step:end_time], W, options);
The name of the option is RelTol for “relative tolerance.” The default value is 1e-3 or 0.001. Smaller values make ode45 less “tolerant,” so it does more work to make the errors smaller.
Exercise 7 Run ode45 with a range of values for RelTol and confirm that as the tolerance gets smaller, the rate of energy loss decreases.
Exercise 8 Run your simulation with one of the other ODE solvers MATLAB provides and see if any of them conserve energy.
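MATLAB's other solvers, such as ode23, ode113 and ode15s, use the same calling convention as ode45, so trying one of them is a one-line change; for example, reusing the same rate_func, time span, initial conditions and options as above:

[T, M] = ode113(@rate_func, [0:step:end_time], W, options);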
12.6 What is a model for?
In Section 7.2 I defined a “model” as a simplified description of a physical system, and said that a good model lends itself to analysis and simulation, and makes predictions that are good enough for the intended purpose.
Since then, we have seen a number of examples; now we can say more about what models are for. The goals of a model tend to fall into three categories: prediction, explanation, and design.
The exercises at the end of this chapter include one model of each type.
Exercise 9 If you put two identical bowls of water into a freezer, one at room temperature and one boiling, which one freezes first?
Hint: you might want to do some research on the Mpemba effect.
Exercise 10 You have been asked to design a new skateboard ramp; unlike a typical skateboard ramp, this one is free to pivot about a support point. Skateboarders approach the ramp on a flat surface and then coast up the ramp; they are not allowed to put their feet down while on the ramp. If they go fast enough, the ramp will rotate and they will gracefully ride down the rotating ramp. Technical and artistic display will be assessed by the usual panel of talented judges.
Your job is to design a ramp that will allow a rider to accomplish this feat, and to create a physical model of the system, a simulation that computes the behavior of a rider on the ramp, and an animation of the result.
A binary star system contains two stars orbiting each other and sometimes planets that orbit one or both stars. In a binary system, some orbits are “stable” in the sense that a planet can stay in orbit without crashing into one of the stars or flying off into space.
Simulation is a useful tool for investigating the nature of these orbits, as in Holman, M.J. and P.A. Wiegert, 1999, “Long-Term Stability of Planets in Binary Systems,” Astronomical Journal 117, available from http://citeseer.ist.psu.edu/358720.html.
Read this paper and then modify your planetary simulation to replicate or extend the results.
| http://www.greenteapress.com/matlab/html/book014.html | 13
62 | Molotov–Ribbentrop Pact negotiations
The Molotov–Ribbentrop Pact was an August 23, 1939 agreement between the Soviet Union and Nazi Germany colloquially named after Soviet foreign minister Vyacheslav Molotov and German foreign minister Joachim von Ribbentrop. The treaty renounced warfare between the two countries. In addition to stipulations of non-aggression, the treaty included a secret protocol dividing several eastern European countries between the parties.
Before the treaty's signing, the Soviet Union conducted negotiations with the United Kingdom and France regarding a potential "Tripartite" alliance. Long-running talks between the Soviet Union and Germany over a potential economic pact expanded to include military and political discussions, culminating in the pact, along with a commercial agreement signed four days earlier.
After World War I
After the Russian Revolution of 1917, Bolshevist Russia ended its fight against the Central Powers, including Germany, in World War I by signing the Treaty of Brest-Litovsk. Therein, Russia agreed to cede sovereignty and influence over parts of several eastern European countries. Most of those countries became ostensibly democratic republics following Germany's defeat and signing of an armistice in the autumn of 1918. With the exception of Belarus and Ukraine, those countries also became independent. However, the Treaty of Brest-Litovsk lasted only eight and a half months before Germany renounced it and broke off diplomatic relations with Russia.
Before World War I, Germany and Russia had long shared a trading relationship. Germany is a relatively small country with few natural resources. It lacks natural supplies of several key raw materials needed for economic and military operations. Since the late 19th century, it had relied heavily upon Russian imports of raw materials. Germany imported 1.5 billion Reichsmarks of raw materials and other goods annually from Russia before the war.
In 1922, the countries signed the Treaty of Rapallo, renouncing territorial and financial claims against each other. The countries pledged neutrality in the event of an attack against one another with the 1926 Treaty of Berlin. While imports of Soviet goods to Germany fell after World War I, after trade agreements signed between the two countries in the mid-1920s, trade had increased to 433 million Reichsmarks per year by 1927.
In the early 1930s, this relationship fell as the more isolationist Stalinist regime asserted power and the abandonment of post-World War I military control decreased Germany's reliance on Soviet imports, such that Soviet imports fell to 223 million Reichsmarks in 1934.
In the mid-1930s, the Soviet Union made repeated efforts to reestablish closer contacts with Germany. The Soviet Union chiefly sought to repay debts from earlier trade with raw materials, while Germany sought to rearm, and the countries signed a credit agreement in 1935. The rise to power of the Nazi Party increased tensions between Germany, the Soviet Union and other countries with ethnic Slavs, which were considered "untermenschen" according to Nazi racial ideology. The Nazis were convinced that ethnic Slavs were incapable of forming their own state and, accordingly, must be ruled by others. Moreover, the anti-semitic Nazis associated ethnic Jews with both communism and international capitalism, both of which they opposed. Consequently, Nazis believed that Soviet untermenschen Slavs were being ruled by "Jewish Bolshevik" masters. Two primary goals of Nazism were to eliminate Jews and seek Lebensraum ("living space") for ethnic Aryans to the east. In 1934, Hitler spoke of an inescapable battle against "pan-Slav ideals", the victory in which would lead to "permanent mastery of the world", though he stated that they would "walk part of the road with the Russians, if that will help us."
Despite the political rhetoric, in 1936, the Soviets attempted to seek closer political ties to Germany along with an additional credit agreement, but Hitler rebuffed the advances, not wanting to seek closer political ties, even though a 1936 raw material crisis prompted him to decree a Four Year Plan for rearmament "without regard to costs."
Tensions grew further after Germany and Fascist Italy supported the Fascist Spanish Nationalists in the Spanish Civil War in 1936, while the Soviets supported the partially socialist-led Spanish Republic opposition. In November 1936, Soviet-German relations sank further when Germany and Japan entered the Anti-Comintern Pact, which was purportedly directed against the Communist International, though it contained a secret agreement that either side would remain neutral if the other became involved with the Soviet Union. In November 1937, Italy also joined the Anti-Comintern Pact.
Late 1930s
The Moscow Trials of the mid-1930s seriously undermined Soviet prestige in the West. Soviet purges in 1937 and 1938 made a deal less likely by disrupting the already confused Soviet administrative structure necessary for negotiations and giving Hitler the belief that the Soviets were militarily weak.
The Soviets were not invited to the Munich Conference regarding Czechoslovakia. The Munich Agreement that followed marked the partial dissolution of Czechoslovakia in 1938 through German annexation, as part of the appeasement of Germany.
As German needs for military supplies grew after the Munich Agreement and Soviet demand for military machinery increased, talks between the two countries occurred from late 1938 to March 1939. The Soviet Third Five Year Plan would require massive new infusions of technology and industrial equipment. An autarkic economic approach or an alliance with England was impossible for Germany, such that closer relations with the Soviet Union were necessary, if only for economic reasons. At that time, Germany could supply only 25 percent of its petroleum needs, and without its primary United States petroleum source in a war, would have to look to Russia and Romania. Germany suffered the same natural shortfall and supply problems for rubber and metal ores needed for hardened steel in war equipment, for which Germany relied on Soviet supplies or transit using Soviet rail lines. Finally, Germany also imported 40 per cent of its fat and oil food requirements, which would grow if Germany conquered nations that were also net food importers, and thus needed Soviet imports of Ukrainian grains or Soviet transshipments of Manchurian soybeans. Moreover, an anticipated British blockade in the event of war and a cutoff of petroleum from the United States would create massive shortages for Germany regarding a number of key raw materials.
Following Hitler's March 1939 denunciation of the 1934 German–Polish Non-Aggression Pact, Britain and France had made statements guaranteeing the sovereignty of Poland, and on April 25, signed a Common Defense Pact with Poland, when that country refused to be associated with a four-power guarantee involving the USSR.
Initial talks
Potential for Soviet-German talk expansion
Germany and the Soviet Union discussed entering into an economic deal throughout early 1939. For months, Germany had secretly hinted to Soviet diplomats that it could offer better terms for a political agreement than could Britain and France. On March 10, Hitler proclaimed this directly in an official speech. That same day, Stalin, in a speech to the Eighteenth Congress of the All-Union Communist Party, characterized western actions regarding Hitler as moving away from "collective security" and toward "nonintervention," with the goal being to direct Fascist aggression anywhere but against themselves. After the Congress concluded, the Soviet press mounted an attack on both France and Great Britain.
On April 7, a Soviet diplomat visited the German Foreign Ministry stating that there was no point in continuing the German-Soviet ideological struggle and that the countries could conduct a concerted policy. Ten days later, the Soviet ambassador met the German Deputy Foreign Minister and presented him with a note requesting the speedy removal of any obstacles to the fulfillment of military contracts signed between Czechoslovakia and the USSR before the former was occupied by Germany. According to German accounts, at the end of the discussion, the ambassador stated "there exists for Russia no reason why she should not live with us on a normal footing. And from normal the relations might become better and better," though other sources suggest that this could be an exaggeration or an inaccurate recounting of the ambassador's words. Immediately after that, the Soviet ambassador was withdrawn to Moscow and never returned to Germany. According to Ulam, future conversations on the topic in Berlin were believed to continue with lower-level officials working under the cover of a Soviet trade mission.
Tripartite talks begin
Starting in mid-March 1939, the Soviet Union, Britain and France traded a flurry of suggestions and counterplans regarding a potential political and military agreement. The Soviet Union feared Western powers and the possibility of a "capitalist encirclement", had little faith that war could be avoided, had little faith in the Polish army, and wanted guaranteed support for a two-pronged attack on Germany. Britain and France believed that war could still be avoided and that the Soviet Union, weakened by purges, could not serve as a main military participant. France, as a continental power, was more anxious for an agreement with the USSR than Britain, which was more willing to make concessions and more aware of the dangers of an agreement between the USSR and Germany. On April 17, Soviet foreign minister Maxim Litvinov outlined a French–British–Soviet mutual assistance pact between the three powers for five to 10 years, including military support, if any of the powers were the subject of aggression.
May changes
Litvinov Dismissal
On May 3, Stalin replaced Foreign Minister Litvinov with Vyacheslav Molotov, which significantly increased Stalin's freedom to maneuver in foreign policy. The dismissal of Litvinov, whose Jewish ethnicity was viewed disfavorably by Nazi Germany, removed an obstacle to negotiations with Germany. Stalin immediately directed Molotov to "purge the ministry of Jews." Given Litvinov's prior attempts to create an anti-fascist coalition, his association with the doctrine of collective security with France and Britain, and his pro-Western orientation by the standards of the Kremlin, his dismissal indicated the existence of a Soviet option of rapprochement with Germany. Likewise, Molotov's appointment served as a signal to Germany that the USSR was open to offers. The dismissal also signaled to France and Britain the existence of a potential negotiation option with Germany. One British official wrote that Litvinov's disappearance also meant the loss of an admirable technician or shock-absorber, while Molotov's "modus operandi" was "more truly Bolshevik than diplomatic or cosmopolitan." But Stalin sent a double message: Molotov appointed Solomon Lozovsky, a Jew, as one of his deputies.
May tripartite negotiations
Although informal consultations started in late April, the main negotiations between the Soviet Union, Britain and France began in May. At a meeting in May 1939, the French Foreign Minister told the Soviet Ambassador to France that he was willing to support turning over all of eastern Poland to the Soviet Union, regardless of Polish opposition, if that was the price of an alliance with Moscow.
German supply concerns and potential political discussions
In May, German war planners also became increasingly concerned that, without Russian supplies, Germany would need to find massive substitute quantities of 165,000 tons of manganese and almost 2 million tons of oil per year. In the context of further economic discussions, on May 17, the Soviet ambassador told a German official that he wanted to restate "in detail that there were no conflicts in foreign policy between Germany and Soviet Russia and that therefore there was no reason for any enmity between the two countries." Three days later, on May 20, Molotov told the German ambassador in Moscow that he no longer wanted to discuss only economic matters, and that it was necessary to establish a "political basis", which German officials saw as an "implicit invitation."
On May 26, German officials feared a potential positive result from the Soviets' talks regarding the proposals by Britain and France. On May 30, fearing potential positive results from a British and French offer to the Soviets, Germany directed its diplomats in Moscow that "we have now decided to undertake definite negotiations with the Soviet Union." The ensuing discussions were channeled through the economic negotiations, because the economic needs of the two sides were substantial and because close military and diplomatic connections had been severed in the mid-1930s, leaving these talks as the only means of communication.
Baltic sticking point and German rapprochement
Mixed signals
The Soviets sent mixed signals thereafter. In his first main speech as Soviet Foreign Minister on May 31, Molotov criticized an Anglo-French proposal, stated that the Soviets did not "consider it necessary to renounce business relations with countries like Germany" and proposed to enter a wide-ranging mutual assistance pact against aggression. However, Soviet Commissar for Foreign Trade Mikoyan argued on June 2 to a German official that Moscow "had lost all interest in these [economic] negotiations" as a result of earlier German procrastination.
Tripartite talks progress and Baltic moves
On June 2, the Soviet Union insisted that any mutual assistance pact should be accompanied by a military agreement describing in detail the military assistance that the Soviets, French and British would provide. That day, the Soviet Union also submitted a modification to a French and British proposal that specified the states that would be given aid in the event of "direct aggression", which included Belgium, Greece, Turkey, Romania, Poland, Estonia, Latvia and Finland. Five days later, Estonia and Latvia signed non-aggression pacts with Germany, creating suspicions that Germany had ambitions in a region through which it could attack the Soviet Union.
British attempt to stop German armament
On June 8, the Soviets agreed that a high-ranking German official could come to Moscow to continue the economic negotiations, which occurred in Moscow on July 3. Thereafter, official talks were started in Berlin on July 22.
Meanwhile, hoping to stop the German war machine, in July, Britain conducted talks with Germany regarding a potential plan to bail out the debt-ridden German economy, at the cost of one billion pounds, in exchange for Germany ending its armaments program. The British press broke a story on the talks, and Germany eventually rejected the offer.
Tripartite talks regarding "indirect aggression"
After weeks of political talks that began after the arrival of Central Department Foreign Office head William Strang, on July 8, the British and French submitted a proposed agreement, to which Molotov added a supplementary letter. Talks in late July stalled over a provision in Molotov's supplementary letter stating that a political turn to Germany by the Baltic states constituted "indirect aggression", which Britain feared might justify Soviet intervention in Finland and the Baltic states or push those countries to seek closer relations with Germany (while France was less resistant to the supplement). On July 23, France and Britain agreed with the Soviet proposal to draw up a military convention specifying a reaction to a German attack.
Soviet-German political negotiation beginnings
On July 18, Soviet trade representative Yevgeniy Barbarin visited Julius Schnurre, saying that the Soviets would like to extend and intensify German-Soviet relations. On July 25, the Soviet Union and Germany were very close to finalizing the terms of a proposed economic deal. On July 26, over dinner, the Soviets accepted a proposed three-stage agenda which included the economic agenda first and "a new arrangement which took account of the vital political interests of both parties." On July 28, Molotov sent a first political instruction to the Soviet ambassador in Berlin that finally opened the door to a political detente with Germany.
Germany had learned about the military convention talks before the July 31 British announcement and was skeptical that the Soviets would reach a deal with Britain and France during those planned talks in August. On August 1, the Soviet ambassador stated that two conditions must be met before political negotiations could begin: a new economic treaty and the cessation of anti-Soviet attacks by German media, with which German officials immediately agreed. On August 2, Soviet political discussions with France and Britain were suspended when Molotov stated they could not be restarted until progress was made in the scheduled military talks.
Addressing past hostilities
On August 3, German Foreign Minister Joachim Ribbentrop told Soviet diplomats that "there was no problem between the Baltic and the Black Sea that could not be solved between the two of us." The Germans discussed the prior hostility between the nations in the 1930s. They addressed the common ground of anti-capitalism, stating "there is one common element in the ideology of Germany, Italy and the Soviet Union: opposition to the capitalist democracies," "neither we nor Italy have anything in common with the capitalist west" and "it seems to us rather unnatural that a socialist state would stand on the side of the western democracies." They explained that their prior hostility toward Soviet Bolshevism had subsided with the changes in the Comintern and the Soviet renunciation of a world revolution. Astakhov, the Soviet chargé d'affaires in Berlin, characterized the conversation as "extremely important."
Final negotiations
Finalizing the economic agreement
In August, as Germany scheduled its invasion of Poland for August 25 and prepared for war with France, German war planners estimated that, with an expected British naval blockade, if the Soviet Union became hostile, Germany would fall short of its war mobilization requirements of oil, manganese, rubber and foodstuffs by huge margins. Every internal German military and economic study had argued that Germany was doomed to defeat without at least Soviet neutrality. On August 5, Soviet officials stated that the completion of the trading credit agreement was the most important stage that could be taken in the direction of further such talks.
By August 10, the countries had worked out the last minor technical details to make all but final their economic arrangement, but the Soviets delayed signing that agreement for almost ten days until they were sure that they had reached a political agreement with Germany. The Soviet ambassador explained to German officials that the Soviets had begun their British negotiations "without much enthusiasm" at a time when they felt Germany would not "come to an understanding", and the parallel talks with the British could not be simply broken off when they had been initiated after "mature consideration." On August 12, Germany received word that Molotov wished to further discuss these issues, including Poland, in Moscow.
Tripartite military talks begin
The Soviets, British and French began military negotiations in August. They were delayed until August 12 because the British military delegation, which did not include Strang, took six days to make the trip traveling in a slow merchant ship, undermining the Soviets' confidence in British resolve. On August 14, the question of Poland was raised by Voroshilov for the first time, requesting that the British and French pressure the Poles to enter into an agreement allowing the Soviet army to be stationed in Poland. The Polish government feared that the Soviet government sought to annex disputed territories, the Eastern Borderlands, received by Poland in 1920 after the Treaty of Riga ending the Polish–Soviet War. The British and French contingent communicated the Soviet concern over Poland to their home offices and told the Soviet delegation that they could not answer this political matter without their governments' approval.
Meanwhile, Molotov spoke with Germany's Moscow ambassador on August 15 regarding the possibility of "settling by negotiation all outstanding problems of Soviet–German relations." The discussion included the possibility of a Soviet-German non-aggression pact, the fates of the Baltic states and potential improvements in Soviet-Japanese relations. Molotov stated that "should the German foreign minister come here" these issues "must be discussed in concrete terms." Within hours of receiving word of the meeting, Germany sent a reply stating that it was prepared to conclude a 25 year non-aggression pact, ready to "guarantee the Baltic States jointly with the Soviet Union", and ready to exert influence to improve Soviet-Japanese relations. The Soviets responded positively, but stated that a "special protocol" was required "defining the interests" of the parties. Germany replied that, in contrast to the British delegation in Moscow at that time without Strang, Ribbentrop personally would travel to Moscow to conclude a deal.
In the Soviet-British-French talks, the Anglo-French military negotiators were sent to discuss "general principles" rather than details. On August 15, the British contingent was instructed to move more quickly to bring the military talks to a conclusion and was thus permitted to give Soviet negotiators confidential British information. The British contingent stated that Britain currently possessed only six army divisions but, in the event of a war, could employ 16 divisions initially, followed by a second contingent of 16 divisions—a sum far smaller than the 120 Soviet divisions. French negotiators stated that they had 110 divisions available. In discussions on August 18–19, the Poles informed the French ambassador that they would not approve of Red Army troops operating in Poland.
Delayed commercial agreement signing
After Soviet and German officials in Moscow first finalized the terms of a seven-year German-Soviet Commercial Agreement, German officials became nervous that the Soviets were delaying its signing on August 19 for political reasons. When Tass published a report that the Soviet-British-French talks had become snarled over the Far East and "entirely different matters", Germany took it as a signal that there was still time and hope to reach a Soviet-German deal. Hitler himself sent a coded telegram to Stalin stating that because "Poland has become intolerable," Stalin must receive Ribbentrop in Moscow by August 23 at the latest to sign a pact. Controversy surrounds an alleged speech by Stalin on August 19, 1939, asserting that a great war between the Western powers was necessary for the spread of World Revolution. Historians debate whether that speech ever actually occurred.
At 2:00 a.m. on August 20, Germany and the Soviet Union signed a commercial agreement, dated August 19, providing for the trade of certain German military and civilian equipment in exchange for Soviet raw materials. The agreement covered "current" business, which entailed a Soviet obligation to deliver 180 million Reichsmarks in raw materials in response to German orders, while Germany would allow the Soviets to order 120 million Reichsmarks for German industrial goods. Under the agreement, Germany also granted the Soviet Union a merchandise credit of 200 million Reichsmarks over 7 years to buy German manufactured goods at an extremely favorable interest rate.
Soviets adjourn tripartite military talks and strike a deal with Germany
After the Poles' resistance to pressure, on August 21, Voroshilov proposed adjournment of the military talks with the British and French, using the excuse that the absence of senior Soviet personnel at the talks interfered with the autumn manoeuvres of the Soviet forces, though the primary reason was the progress being made in the Soviet-German negotiations.
That same day, August 21, Stalin received assurance that Germany would approve secret protocols to the proposed non-aggression pact that would grant the Soviets land in Poland, the Baltic states, Finland and Romania. That night, with Germany nervously awaiting a response to Hitler's August 19 telegram, Stalin replied at 9:35 p.m. that the Soviets were willing to sign the pact and that he would receive Ribbentrop on August 23. The pact was signed during the night of August 23–24.
Pact signing
On August 24, a 10-year non-aggression pact was signed with provisions that included: consultation; arbitration if either party disagreed; neutrality if either went to war against a third power; no membership of a group "which is directly or indirectly aimed at the other." Most notably, there was also a secret protocol to the pact, according to which the states of Northern and Eastern Europe were divided into German and Soviet "spheres of influence".
Poland was to be partitioned in the event of its "political rearrangement". The USSR was promised an eastern part of Poland, primarily populated with Ukrainians and Belarusians, in case of its dissolution, and additionally Latvia, Estonia and Finland. Bessarabia, then part of Romania, was to be joined to the Moldovan ASSR, and become the Moldovan SSR under control of Moscow. The news was met with utter shock and surprise by government leaders and media worldwide, most of whom were aware only of the British-French-Soviet negotiations that had taken place for months.
Ribbentrop and Stalin enjoyed warm conversations at the signing, exchanging toasts and further discussing the prior hostilities between the countries in the 1930s. Ribbentrop stated that Britain had always attempted to disrupt Soviet-German relations, was "weak", and "wants to let others fight for her presumptuous claim to world dominion." Stalin concurred, adding "[i]f England dominated the world, that was due to the stupidity of the other countries that always let themselves be bluffed." Ribbentrop stated that the Anti-Comintern Pact was directed not against the Soviet Union but against Western democracies, that it "frightened principally the City of London [i.e., the British financiers] and the English shopkeepers", and that Berliners had joked that Stalin would yet join the Anti-Comintern Pact himself. Stalin proposed a toast to Hitler, and Stalin and Molotov repeatedly toasted the German nation, the Molotov-Ribbentrop Pact and Soviet-German relations. Ribbentrop countered with a toast to Stalin and a toast to the countries' relations. As Ribbentrop left, Stalin took him aside and stated that the Soviet Government took the new pact very seriously, and he would "guarantee his word of honor that the Soviet Union would not betray its partner."
Events during the Pact's operation
Immediate dealings with Britain
The day after the Pact was signed, the French and British military negotiation delegation urgently requested a meeting with Voroshilov. On August 25, Voroshilov told them "[i]n view of the changed political situation, no useful purpose can be served in continuing the conversation." That day, Hitler told the British ambassador to Berlin that the pact with the Soviets prevented Germany from facing a two front war, changing the strategic situation from that in World War I, and that Britain should accept his demands regarding Poland. Surprising Hitler, Britain signed a mutual-assistance treaty with Poland that day, causing Hitler to delay the planned August 26 invasion of western Poland.
Division of eastern Europe
On September 1, 1939, Germany invaded its agreed-upon portion of western Poland, starting World War II. On September 17 the Red Army invaded eastern Poland and occupied the Polish territory assigned to it by the Molotov-Ribbentrop Pact, followed by co-ordination with German forces in Poland. Eleven days later, the secret protocol of the Molotov-Ribbentrop Pact was modified, allotting Germany a larger part of Poland, while ceding most of Lithuania to the Soviet Union.
After a Soviet invasion of Finland faced stiff resistance, the combatants signed an interim peace granting the Soviets approximately 10 per cent of Finnish territory. The Soviet Union also sent troops into Lithuania, Estonia and Latvia. Thereafter, governments requesting admission to the Soviet Union were installed in all three Baltic countries.
Further dealings
Germany and the Soviet Union entered an intricate trade pact on February 11, 1940 that was over four times larger than the one the two countries had signed in August 1939, providing for the shipment to Germany of millions of tons of oil, foodstuffs and other key raw materials in exchange for German war machines and other equipment. This was followed by a January 10, 1941 agreement settling several ongoing issues, including border specificity, ethnic migrations and further commercial deal expansion.
Discussions in the fall and winter of 1940-41 ensued regarding the potential entry of the Soviet Union as the fourth member of the Axis powers. The countries never came to an agreement on the issue.
German invasion of the Soviet Union
Nazi Germany terminated the Molotov–Ribbentrop Pact with its invasion of the Soviet Union in Operation Barbarossa on June 22, 1941. After the launch of the invasion, the territories gained by the Soviet Union due to the Molotov–Ribbentrop Pact were lost in a matter of weeks. In the three weeks following the Pact's breaking, attempting to defend against large German advances, the Soviet Union suffered 750,000 casualties, and lost 10,000 tanks and 4,000 aircraft. Within six months, the Soviet military had suffered 4.3 million casualties and the Germans had captured three million Soviet prisoners, two million of which would die in German captivity by February 1942. German forces had advanced 1,050 miles (1,690 kilometers), and maintained a linearly-measured front of 1,900 miles (3,058 kilometers).
Post-war commentary regarding Pact negotiations
The reasons behind signing the pact
There is no consensus among historians regarding the reasons that prompted the Soviet Union to sign the pact with Nazi Germany. According to Ericson, the opinions "have ranged from seeing the Soviets as far-sighted anti-Nazis, to seeing them as reluctant appeasers, as cautious expansionists, or as active aggressors and blackmailers". Edward Hallett Carr argued that it was necessary to enter into a non-aggression pact to buy time, since the Soviet Union was not in a position to fight a war in 1939, and needed at least three years to prepare. He stated: "In return for non-intervention Stalin secured a breathing space of immunity from German attack." According to Carr, the "bastion" created by means of the Pact, "was and could only be, a line of defense against potential German attack." An important advantage (projected by Carr) was that "if Soviet Russia had eventually to fight Hitler, the Western Powers would already be involved."
However, in recent decades, this view has been disputed. Historian Werner Maser stated that "the claim that the Soviet Union was at the time threatened by Hitler, as Stalin supposed, ... is a legend, to whose creators Stalin himself belonged" (Maser 1994: 64). In Maser's view (1994: 42), "neither Germany nor Japan were in a situation [of] invading the USSR even with the least perspective [sic] of success," and this could not have been unknown to Stalin.
Some critics, such as Viktor Suvorov, claim that Stalin's primary motive for signing the Soviet–German non-aggression treaty was his calculation that such a pact could result in a conflict between the capitalist countries of Western Europe. This idea is supported by Albert L. Weeks. However, other claims by Suvorov, such as Stalin's planning to invade Germany in 1941, have remained under debate among historians, with some like David Glantz opposing them and others like Mikhail Meltyukhov supporting them.
The extent to which the Soviet Union's post-Pact territorial acquisitions may have contributed to preventing its fall (and thus a Nazi victory in the war) remains a factor in evaluating the Pact. Soviet sources point out that the German advance eventually stopped just a few kilometers away from Moscow, so the role of the extra territory might have been crucial in such a close call. Others postulate that Poland and the Baltic countries played the important role of buffer states between the Soviet Union and Nazi Germany, and that the Molotov–Ribbentrop Pact was a precondition not only for Germany's invasion of Western Europe, but also for the Third Reich's invasion of the Soviet Union. The military aspect of moving from established fortified positions on the Stalin Line into undefended Polish territory could also be seen as one of the causes of rapid disintegration of Soviet armed forces in the border area during the German 1941 campaign, as the newly constructed Molotov Line was unfinished and unable to provide Soviet troops with the necessary defense capabilities.
Documentary evidence of early Soviet-German rapprochement
In 1948, the U.S. State Department published a collection of documents recovered from the Foreign Office of Nazi Germany, which formed a documentary base for studies of Nazi-Soviet relations. This collection contains the German State Secretary's account of a meeting with Soviet ambassador Merekalov. The memorandum reproduces the following statement by the ambassador: "there exists for Russia no reason why she should not live with us on a normal footing. And from normal the relations might become better and better." According to Carr, this document is the first recorded Soviet step in the rapprochement with Germany.
The next documentary evidence is the memorandum on the May 17 meeting between the Soviet ambassador and German Foreign Office official, where the ambassador "stated in detail that there were no conflicts in foreign policy between Germany and Soviet Russia and that therefore there was no reason for any enmity between the two countries."
The third document is the summary of the May 20 meeting between Molotov and German ambassador von der Schulenburg. According to the document, Molotov told the German ambassador that he no longer wanted to discuss only economic matters, and that it was necessary to establish a "political basis", which German officials saw as an "implicit invitation."
The last document is the German State Office memorandum on the telephone call made on June 17 by Bulgarian ambassador Draganov. In German accounts of Draganov's report, Astakhov explained that a Soviet deal with Germany better suited the Soviets than one with Britain and France, although from the Bulgarian ambassador it "could not be ascertained whether it had reflected the personal opinions of Herr Astakhov or the opinions of the Soviet Government".
This documentary evidence of an early Nazi-Soviet rapprochement was questioned by Geoffrey Roberts, who analyzed Soviet archival documents that had been de-classified and released on the eve of the 1990s. Roberts found no evidence that the alleged statements quoted by the Germans had ever actually been made, and came to the conclusion that the German archival documents cannot serve as evidence for the existence of a dual policy during the first half of 1939. According to him, no documentary evidence exists that the USSR responded to or made any overtures to the Germans "until the end of July 1939 at the earliest".
Litvinov's dismissal and Molotov's appointment
Many historians note that the dismissal of Foreign Minister Litvinov, whose Jewish ethnicity was viewed unfavorably by Nazi Germany, removed a major obstacle to negotiations between them and the USSR.
Carr, however, has argued that the Soviet Union's replacement of Litvinov with Molotov on May 3, 1939 indicated not an irrevocable shift towards alignment with Germany, but rather was Stalin’s way of engaging in hard bargaining with the British and the French by appointing a tough negotiator, namely Molotov, to the Foreign Commissariat. Albert Resis argued that the replacement of Litvinov by Molotov was both a warning to Britain and a signal to Germany. Derek Watson argued that Molotov could get the best deal with Britain and France because he was not encumbered with the baggage of collective security and could more easily negotiate with Germany. Geoffrey Roberts argued that Litvinov's dismissal helped the Soviets with British-French talks, because Litvinov doubted or maybe even opposed such discussions.
Notes and references
- George F. Kennan Soviet Foreign Policy 1917-1941, Kreiger Publishing Company, 1960.
- Text of the 3 March, 1918 Peace Treaty of Brest-Litovsk
- Ericson 1999, pp. 11–12
- Ericson 1999, pp. 1–2
- Hehn 2005, p. 15
- Ericson 1999, pp. 14–5
- Hehn 2005, p. 212
- Ericson 1999, pp. 17–18
- Ericson 1999, pp. 23–24
- Bendersky, Joseph W., A History of Nazi Germany: 1919-1945, Rowman & Littlefield, 2000, ISBN 0-8304-1567-X, page 177
- Wette, Wolfram, and Deborah Lucas Schneider, The Wehrmacht: History, Myth, Reality, Harvard University Press, 2006, ISBN 0-674-02213-0, page 15
- Lee, Stephen J. and Paul Shuter, Weimar and Nazi Germany, Heinemann, 1996, ISBN 0-435-30920-X, page 33
- Bendersky, Joseph W., A History of Nazi Germany: 1919-1945, Rowman & Littlefield, 2000, ISBN 0-8304-1567-X, page 159
- Müller, Rolf-Dieter, Gerd R. Ueberschär, Hitler's War in the East, 1941-1945: A Critical Assessment, Berghahn Books, 2002, ISBN 157181293, page 244
- Rauschning, Hermann, Hitler Speaks: A Series of Political Conversations With Adolf Hitler on His Real Aims, Kessinger Publishing, 2006, ISBN 142860034, pages 136-7
- Hehn 2005, p. 37
- Jurado, Carlos Caballero and Ramiro Bujeiro, The Condor Legion: German Troops in the Spanish Civil War, Osprey Publishing, 2006, ISBN 1-84176-899-5, page 5-6
- Gerhard Weinberg: The Foreign Policy of Hitler's Germany Diplomatic Revolution in Europe 1933-36, Chicago: University of Chicago Press, 1970, pages 346.
- Robert Melvin Spector. World Without Civilization: Mass Murder and the Holocaust, History, and Analysis, pg. 257
- Piers Brendon, The Dark Valley, Alfred A. Knopf, 2000, ISBN 0-375-40881-9
- Ericson 1999, pp. 27–28
- Text of the Agreement concluded at Munich, September 29, 1938, between Germany, Great Britain, France and Italy
- Kershaw, Ian, Hitler, 1936-1945: Nemesis, W. W. Norton & Company, 2001, ISBN 0-393-32252-1, page 157-8
- Ericson 1999, pp. 29–35
- Hehn 2005, pp. 42–3
- Ericson 1999, pp. 3–4
- Manipulating the Ether: The Power of Broadcast Radio in Thirties America Robert J. Brown ISBN 0-7864-2066-9
- Watson 2000, p. 698
- Ericson 1999, pp. 23–35
- Roberts 2006, p. 30
- Tentative Efforts To Improve German–Soviet Relations, April 17 – August 14, 1939
- "Natural Enemies: The United States and the Soviet Union in the Cold War 1917–1991" by Robert C. Grogin 2001, Lexington Books page 28
- Zachary Shore. What Hitler Knew: The Battle for Information in Nazi Foreign Policy. Published by Oxford University Press US, 2005 ISBN 0-19-518261-8, ISBN 978-0-19-518261-3, p. 109
- Nekrich, Ulam & Freeze 1997, p. 107
- Karski, J. The Great Powers and Poland, University Press, 1985, p.342
- Nekrich, Ulam & Freeze 1997, pp. 108–9
- Roberts (1992; Historical Journal) p. 921-926
- Ericson 1999, p. 43
- Biskupski & Wandycz 2003, pp. 171–72
- Ulam 1989, p. 508
- Watson 2000, p. 695
- In Jonathan Haslam's view it shouldn't be overlooked that Stalin's adherence to the collective security line was purely conditional. [Review of] Stalin's Drive to the West, 1938–1945: The Origins of the Cold War. by R. Raack; The Soviet Union and the Origins of the Second World War: Russo-German Relations and the Road to War, 1933–1941. by G. Roberts. The Journal of Modern History > Vol. 69, No. 4 (Dec., 1997), p.787
- D.C. Watt, How War Came: the Immediate Origins of the Second World War 1938-1939 (London, 1989), p. 118. ISBN 0-394-57916-X, 9780394579160
- Watson 2000, p. 696
- Resis 2000, p. 47
- Israelyan, Viktor Levonovich, On the Battlefields of the Cold War: A Soviet Ambassador's Confession, Penn State Press, 2003, ISBN 0-271-02297-3, page 10
- Nekrich, Ulam & Freeze 1997, pp. 109–110
- Shirer 1990, pp. 480–1
- Herf 2006, pp. 97–98
- Osborn, Patrick R., Operation Pike: Britain Versus the Soviet Union, 1939-1941, Greenwood Publishing Group, 2000, ISBN 0-313-31368-7, page xix
- Levin, Nora, The Jews in the Soviet Union Since 1917: Paradox of Survival, NYU Press, 1988, ISBN 0-8147-5051-6, page 330. Litvinov "was referred to by the German radio as 'Litvinov-Finkelstein'—was dropped in favor of Vyacheslav Molotov. 'The eminent Jew', as Churchill put it, 'the target of German antagonism was flung aside . . . like a broken tool . . . The Jew Litvinov was gone and Hitler's dominant prejudice placated.'"
- In an introduction to a 1992 paper, Geoffrey Roberts writes: "Perhaps the only thing that can be salvaged from the wreckage of the orthodox interpretation of Litvinov's dismissal is some notion that, by appointing Molotov foreign minister, Stalin was preparing for the contingency of a possible deal with Hitler. In view of Litvinov's Jewish heritage and his militant anti-nazism, that is not an unreasonable supposition. But it is a hypothesis for which there is as yet no evidence. Moreover, we shall see that what evidence there is suggests that Stalin's decision was determined by a quite different set of circumstances and calculations", Geoffrey Roberts. The Fall of Litvinov: A Revisionist View Journal of Contemporary History, Vol. 27, No. 4 (Oct., 1992), pp. 639-657 Stable URL: http://www.jstor.org/stable/260946
- Resis 2000, p. 35
- Moss, Walter, A History of Russia: Since 1855, Anthem Press, 2005, ISBN 1-84331-034-1, page 283
- Gorodetsky, Gabriel, Soviet Foreign Policy, 1917-1991: A Retrospective, Routledge, 1994, ISBN 0-7146-4506-0, page 55
- Resis 2000, p. 51
- According to Paul Flewers, Stalin’s address to the eighteenth congress of the Communist Party of the Soviet Union on March 10, 1939 discounted any idea of German designs on the Soviet Union. Stalin had intended: "To be cautious and not allow our country to be drawn into conflicts by warmongers who are accustomed to have others pull the chestnuts out of the fire for them." This was intended to warn the Western powers that they could not necessarily rely upon the support of the Soviet Union. As Flewers put it, “Stalin was publicly making the none-too-subtle implication that some form of deal between the Soviet Union and Germany could not be ruled out.” From the Red Flag to the Union Jack: The Rise of Domestic Patriotism in the Communist Party of Great Britain 1995
- Resis 2000, pp. 33–56
- Watson 2000, p. 699
- Montefiore 2005, p. 312
- Imlay, Talbot, "France and the Phony War, 1939-1940", pages 261-280 from French Foreign and Defence Policy, 1918-1940 edited by Robert Boyce, London, United Kingdom: Routledge, 1998 page 264
- Ericson 1999, p. 44
- Ericson 1999, p. 45
- Nekrich, Ulam & Freeze 1997, p. 111
- Ericson 1999, p. 46
- Biskupski & Wandycz 2003, p. 179
- Watson 2000, p. 703
- Shirer 1990, p. 502
- Watson 2000, p. 704
- Roberts 1995, p. 1995
- J. Haslam, The Soviet Union and the Struggle for Collective Security in Europe, 1933-39 (London, 1984), pp. 207, 210. ISBN 0-333-30050-5, ISBN 978-0-333-30050-3
- Ericson 1999, p. 47
- Nekrich, Ulam & Freeze 1997, p. 114
- Hehn 2005, p. 218
- Biskupski & Wandycz 2003, p. 186
- Watson 2000, p. 708
- Hiden, John, The Baltic and the Outbreak of the Second World War, Cambridge University Press, 2003, ISBN 0-521-53120-9, page 46
- Shirer 1990, p. 447
- Ericson 1999, pp. 54–55
- Fest 2002, p. 588
- Ulam 1989, pp. 509–10
- Roberts 1992, p. 64
- Shirer 1990, p. 503
- Shirer 1990, p. 504
- Fest 2002, pp. 589–90
- Vehviläinen, Olli, Finland in the Second World War: Between Germany and Russia, Macmillan, 2002, ISBN 0-333-80149-0, page 30
- Bertriko, Jean-Jacques Subrenat, A. and David Cousins, Estonia: Identity and Independence, Rodopi, 2004, ISBN 90-420-0890-3 page 131
- Nekrich, Ulam & Freeze 1997, p. 115
- Ericson 1999, p. 56
- Erickson 2001, p. 539-30
- Shirer 1990, p. 513
- Watson 2000, p. 713
- Shirer 1990, pp. 533–4
- Shirer 1990, p. 535
- Taylor and Shaw, Penguin Dictionary of the Third Reich, 1997, p.246.
- Shirer 1990, p. 521
- Shirer 1990, pp. 523–4
- Murphy 2006, p. 22
- Shirer 1990, p. 536
- Shirer 1990, p. 525
- Shirer 1990, pp. 526–7
- Murphy 2006, pp. 24–28
- Ericson 1999, p. 57
- Shirer 1990, p. 668
- Wegner 1997, p. 99
- Grenville & Wasserstein 2001, p. 227
- Ericson 1999, p. 61
- Watson 2000, p. 715
- Murphy 2006, p. 23
- Shirer 1990, p. 528
- Shirer 1990, p. 540
- Text of the Nazi-Soviet Non-Aggression Pact, executed August 23, 1939
- Shirer 1990, p. 539
- Shirer 1990, pp. 541–2
- Nekrich, Ulam & Freeze 1997, p. 123
- Sanford, George (2005). Katyn and the Soviet Massacre Of 1940: Truth, Justice And Memory. London, New York: Routledge. ISBN 0-415-33873-5.
- Wettig, Gerhard, Stalin and the Cold War in Europe, Rowman & Littlefield, Landham, Md, 2008, ISBN 0-7425-5542-9, page 20
- Kennedy-Pipe, Caroline, Stalin's Cold War, New York : Manchester University Press, 1995, ISBN 0-7190-4201-1
- Senn, Alfred Erich, Lithuania 1940 : revolution from above, Amsterdam, New York, Rodopi, 2007 ISBN 978-90-420-2225-6
- Wettig, Gerhard, Stalin and the Cold War in Europe, Rowman & Littlefield, Landham, Md, 2008, ISBN 0-7425-5542-9, page 21
- Ericson 1999, pp. 150–3
- Johari, J.C., Soviet Diplomacy 1925-41: 1925-27, Anmol Publications PVT. LTD., 2000, ISBN 81-7488-491-2 pages 134-137
- Roberts 2006, p. 58
- Brackman, Roman, The Secret File of Joseph Stalin: A Hidden Life, London and Portland, Frank Cass Publishers, 2001, ISBN 0-7146-5050-1, page 341
- Roberts 2006, p. 59
- Roberts 2006, p. 82
- Roberts 2006, p. 85
- Roberts 2006, pp. 116–7
- Glantz, David, The Soviet-German War 1941–45: Myths and Realities: A Survey Essay, October 11, 2001, page 7
- Edward E. Ericson, III. Karl Schnurre and the Evolution of Nazi-Soviet Relations, 1936-1941. German Studies Review, Vol. 21, No. 2 (May, 1998), pp. 263-283
- Carr, Edward H., German–Soviet Relations between the Two World Wars, 1919–1939, Oxford 1952, p. 136.
- E. H. Carr., From Munich to Moscow. I., Soviet Studies, Vol. 1, No. 1, (Jun., 1949), pp. 3–17. Published by: Taylor & Francis, Ltd.
- Taylor, A.J.P., The Origins of the Second World War, London 1961, p. 262–3
- Max Beloff. Soviet Foreign Policy, 1929-41: Some Notes Soviet Studies, Vol. 2, No. 2 (Oct., 1950), pp. 123-137
- Stalin's Other War: Soviet Grand Strategy, 1939–1941 ISBN 0-7425-2191-5
- Nazi-Soviet relations 1939-1941. : Documents from the Archives of The German Foreign Office. Raymond James Sontag and James Stuart Beddie, ed. 1948. Department of State. Publication 3023
- Geoffrey Roberts.The Soviet Decision for a Pact with Nazi Germany. Soviet Studies, Vol. 44, No. 1 (1992), pp. 57-78
- Memorandum by the State Secretary in the German Foreign Office - Weizsacker
- E. H. Carr. From Munich to Moscow. II Soviet Studies, Vol. 1, No. 2 (Oct., 1949), pp. 93-105
- Foreign Office Memorandum : May 17, 1939
- Memorandum by the German Ambassador in the Soviet Union (Schulenburg) May 20, 1939
- Nekrich, Ulam & Freeze 1997, pp. 112–3
- God krizisa: 1938-1939 : dokumenty i materialy v dvukh tomakh.By A. P. Bondarenko, Soviet Union Ministerstvo inostrannykh del. Contributor A. P. Bondarenko. Published by Izd-vo polit. lit-ry, 1990. Item notes: t. 2. Item notes: v.2. Original from the University of Michigan. Digitized Nov 10, 2006. ISBN 5-250-01092-X, 9785250010924
- Roberts 1992, pp. 57–78
- Geoffrey Roberts. On Soviet-German Relations: The Debate Continues. A Review Article Europe-Asia Studies, Vol. 50, No. 8 (Dec., 1998), pp.1471-1475
- Carr, E.H. German-Soviet Relations Between the Two World Wars, Harper & Row: New York, 1951, 1996 pages 129-130
- Albert Resis. The Fall of Litvinov: Harbinger of the German-Soviet Non-Aggression Pact. Europe-Asia Studies, Vol. 52, No. 1 (Jan., 2000), pp. 33-56 Published by: Taylor & Francis, Ltd. Stable URL: http://www.jstor.org/stable/153750 "By replacing Litvinov with Molotov, Stalin significantly increased his freedom of maneuver in foreign policy. Litvinov's dismissal served as a warning to London and Paris that Moscow had another option: rapprochement with Germany. After Litvinov's dismissal, the pace of Soviet-German contacts quickened. But that did not mean that Moscow had abandoned the search for collective security, now exemplified by the Soviet draft triple alliance. Meanwhile, Molotov's appointment served as an additional signal to Berlin that Moscow was open to offers. The signal worked, the warning did not."
- Derek Watson. Molotov's Apprenticeship in Foreign Policy: The Triple Alliance Negotiations in 1939, Europe-Asia Studies, Vol. 52, No. 4 (Jun., 2000), pp. 695-722. Stable URL: http://www.jstor.org/stable/153322 "The choice of Molotov reflected not only the appointment of a nationalist and one of Stalin's leading lieutenants, a Russian who was not a Jew and who could negotiate with Nazi Germany, but also someone unencumbered with the baggage of collective security who could obtain the best deal with Britain and France, if they could be forced into an agreement."
- Geoffrey Roberts. The Fall of Litvinov: A Revisionist View. Journal of Contemporary History Vol. 27, No. 4 (Oct., 1992), pp. 639-657. Stable URL: http://www.jstor.org/stable/260946. "the foreign policy factor in Litvinov's downfall was the desire of Stalin and Molotov to take charge of foreign relations in order to pursue their policy of a triple alliance with Britain and France - a policy whose utility Litvinov doubted and may even have opposed or obstructed."
- Biskupski, Mieczyslaw B.; Wandycz, Piotr Stefan (2003), Ideology, Politics, and Diplomacy in East Central Europe, Boydell & Brewer, ISBN 1-58046-137-9
- Ericson, Edward E. (1999), Feeding the German Eagle: Soviet Economic Aid to Nazi Germany, 1933-1941, Greenwood Publishing Group, ISBN 0-275-96337-3
- Fest, Joachim C. (2002), Hitler, Houghton Mifflin Harcourt, ISBN 0-15-602754-2
- Herf, Jeffrey (2006), The Jewish Enemy: Nazi Propaganda During World War II and the Holocaust, Harvard University Press, ISBN 0-674-02175-4
- Montefiore, Simon Sebag (2005). Stalin: The Court of the Red Tsar (5th ed.). Great Britain: Phoenix. ISBN 0-7538-1766-7.
- Murphy, David E. (2006), What Stalin Knew: The Enigma of Barbarossa, Yale University Press, ISBN 0-300-11981-X
- Nekrich, Aleksandr Moiseevich; Ulam, Adam Bruno; Freeze, Gregory L. (1997), Pariahs, Partners, Predators: German-Soviet Relations, 1922-1941, Columbia University Press, ISBN 0-231-10676-9
- Philbin III, Tobias R. (1994), The Lure of Neptune: German-Soviet Naval Collaboration and Ambitions, 1919–1941, University of South Carolina Press, ISBN 0-87249-992-8
- Resis, Albert (2000), "The Fall of Litvinov: Harbinger of the German-Soviet Non-Aggression Pact", Europe-Asia Studies 52 (1), JSTOR 153750
- Roberts, Geoffrey (2006), Stalin's Wars: From World War to Cold War, 1939–1953, Yale University Press, ISBN 0-300-11204-1
- Roberts, Geoffrey (1995), "Soviet Policy and the Baltic States, 1939-1940: A Reappraisal", Diplomacy and Statecraft 6 (3), JSTOR 153322
- Roberts, Geoffrey (1992), "The Soviet Decision for a Pact with Nazi Germany", Soviet Studies 55 (2), JSTOR 152247
- Roberts, Geoffrey (1992), "Infamous Encounter? The Merekalov-Weizsacker Meeting of 17 April 1939", The Historical Journal 35 (4), JSTOR 2639445
- Shirer, William L. (1990), The Rise and Fall of the Third Reich: A History of Nazi Germany, Simon and Schuster, ISBN 0-671-72868-7
- Watson, Derek (2000), "Molotov's Apprenticeship in Foreign Policy: The Triple Alliance Negotiations in 1939", Europe-Asia Studies 52 (4), JSTOR 153322
- Ulam, Adam Bruno (1989), Stalin: The Man and His Era, Beacon Press, ISBN 0-8070-7005-X
In mathematics, an ellipse (Greek ἔλλειψις (elleipsis), a 'falling short') is the finite or bounded case of a conic section, the geometric shape that results from cutting a circular conical or cylindrical surface with an oblique plane (the other two cases being the parabola and the hyperbola). It is also the locus of all points of the plane whose distances to two fixed points add to the same constant.
Ellipses also arise as images of a circle or a sphere under parallel projection, and some cases of perspective projection. An ellipse is also the closed and bounded case of an implicit curve of degree 2, and of a rational curve of degree 2. It is also the simplest Lissajous figure, formed when the horizontal and vertical motions are sinusoids with the same frequency.
Elements of an ellipse
An ellipse is a smooth closed curve which is symmetric about its center. The distance between antipodal points on the ellipse, or pairs of points whose midpoint is at the center of the ellipse, is maximum and minimum along two perpendicular directions, the major axis or transverse diameter, and the minor axis or conjugate diameter. The semimajor axis or major radius (denoted by a in the figure) and the semiminor axis or minor radius (denoted by b in the figure) are one half of the major and minor diameters, respectively.
There are two special points F1 and F2 on the ellipse's major axis, on either side of the center, such that the sum of the distances from any point of the ellipse to those two points is constant and equal to the major diameter (2 a). Each of these two points is called a focus of the ellipse.
The eccentricity of an ellipse, usually denoted by ε or e, is the ratio of the distance between the foci to the length of the major axis. The eccentricity is necessarily between 0 and 1; it is zero if and only if a = b, in which case the ellipse is a circle. The distance aε from a focal point to the center is called the linear eccentricity of the ellipse.
The pins-and-string method
An ellipse can be drawn using two pins, a length of string, and a pencil:
- Push the pins into the paper at two points, which will become the ellipse's foci. Tie the string into a loose loop around the two pins. Pull the loop taut with the pencil's tip, so as to form a triangle. Move the pencil around, while keeping the string taut, and its tip will trace out an ellipse.
To draw an ellipse inscribed within a specified rectangle, tangent to its four sides at their midpoints, one must first determine the position of the foci and the length of the loop (a numerical check of these steps is sketched after the list below):
- Let A,B,C,D be the corners of the rectangle, in clockwise order, with A-B being one of the long sides. Draw a circle centered on A, having radius the short side A-D. From corner B draw a tangent to the circle. The length L of this tangent is the distance between the foci. Draw two perpendicular lines through the center of the rectangle and parallel to its sides; these will be the major and minor axes of the ellipse. Place the foci on the major axis, symmetrically, at distance L/2 from the center.
- To adjust the length of the string loop, insert a pin at one focus, and another pin at the opposite end of the major diameter. Loop the string around the two pins and tie it taut.
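As a rough numerical check of this construction, the following sketch (the rectangle's half-sides a = 5 and b = 3 are arbitrary example values) computes the tangent length, the focus positions and the implied loop length, and verifies that a string of that length drawn taut at the topmost point of the intended ellipse reproduces the semi-minor axis:

```python
import math

# Example rectangle: long side 2a, short side 2b (illustrative values).
a, b = 5.0, 3.0

# Tangent length from corner B to the circle of radius 2b centred at corner A.
L = math.sqrt((2*a)**2 - (2*b)**2)      # = 2*sqrt(a^2 - b^2), the distance between the foci
c = L / 2                               # distance from the centre to each focus

# Loop length implied by the adjustment step: a taut loop around one focus
# and the far end of the major diameter has length twice that distance.
loop = 2 * (a + c)

# Check: with pins at the two foci, the same loop held taut at the point (0, b)
# gives |PF1| + |PF2| + |F1F2| = loop, so the construction reproduces the semi-minor axis b.
sum_focal = 2 * math.sqrt(b**2 + c**2)
print(c, loop, math.isclose(sum_focal + 2*c, loop))   # 4.0, 18.0, True
```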
- An ellipse can also be traced using a ruler (the trammel method): Draw two perpendicular lines M, N on the paper; these will be the major and minor axes of the ellipse. Mark three points A, B, C on the ruler. With one hand, move the ruler onto the paper, turning and sliding it so as to keep point A always on line M, and B on line N. With the other hand, keep the pencil's tip on the paper, following point C of the ruler. The tip will trace out an ellipse.
This method can be implemented with a router to cut ellipses from board material. The ellipsograph is a mechanical device that implements this principle. The ruler is replaced by a rod with a pencil holder (point C) at one end, and two adjustable side pins (points A and B) that slide into two perpendicular slots cut into a metal plate.
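A minimal simulation of this mechanism is sketched below (the mark spacings |AB| = 3 and |BC| = 2 are arbitrary example values, with C placed beyond B): it slides A along one line and B along the perpendicular line, records where C ends up, and confirms that the trace satisfies an ellipse equation whose semi-axes are set by the mark spacings.

```python
import numpy as np

AB = 3.0   # distance between marks A and B on the ruler (example value)
BC = 2.0   # distance between marks B and C, with C beyond B, away from A

pts = []
for xA in np.linspace(-AB, AB, 400):      # A slides along line M (taken as the x-axis)
    yB = np.sqrt(AB**2 - xA**2)           # B stays on line N (the y-axis) with |AB| fixed
    A = np.array([xA, 0.0])
    B = np.array([0.0, yB])
    u = (B - A) / AB                      # unit vector from A toward B
    pts.append(B + BC * u)                # C is collinear with A and B, a fixed distance past B

pts = np.array(pts)
# The trace satisfies (x/|CB|)^2 + (y/|CA|)^2 = 1: semi-axis |CB| along M, |CA| along N.
residual = (pts[:, 0] / BC)**2 + (pts[:, 1] / (AB + BC))**2 - 1.0
print(np.max(np.abs(residual)))           # ~1e-15
```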
Ellipses in physics
Elliptical reflectors
If the water surface is disturbed at one focus of an elliptical tank, the circular waves created by that disturbance, after being reflected by the walls, will converge simultaneously to a single point — the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.
Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid, or prolatum), this property will hold for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners.
Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the U.S. Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters), at an exhibit on sound at the Museum of Science and Industry in Chicago, in front of the University of Illinois at Urbana-Champaign Foellinger Auditorium, and also at a side chamber of the Palace of Charles V, in the Alhambra.
Planet orbits
In the 17th century, Johannes Kepler explained that the orbits along which the planets travel around the Sun are ellipses in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.
More generally, in the gravitational two-body problem, if the two bodies are bound to each other (i.e., the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. Interestingly, the orbit of either body in the reference frame of the other is also an ellipse, with the other body at one focus.
Keplerian elliptical orbits are the result of any radially directed attractive force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.)
Harmonic oscillators
The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions, or of a mass attached to a fixed point by a perfectly elastic spring.
Phase visualization
In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the display is an ellipse, rather than a straight line, the two signals are out of phase.
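As a sketch of why the trace is an ellipse, consider two equal-frequency, unit-amplitude sinusoids with a phase offset φ (the 40° offset below is an arbitrary example). Eliminating time gives the conic x² − 2xy·cos φ + y² = sin²φ, an ellipse that collapses to a straight line when the signals are in phase or in antiphase:

```python
import numpy as np

phi = np.radians(40.0)                 # example phase offset between the two signals
t = np.linspace(0.0, 2*np.pi, 1000)
x = np.sin(t)                          # horizontal input
y = np.sin(t + phi)                    # vertical input

# The Lissajous trace satisfies x^2 - 2*x*y*cos(phi) + y^2 = sin(phi)^2,
# an ellipse that degenerates to a line segment when phi is 0 or pi.
residual = x**2 - 2*x*y*np.cos(phi) + y**2 - np.sin(phi)**2
print(np.max(np.abs(residual)))        # ~1e-16
```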
Elliptical gears
Two gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, will turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt. Such elliptical gears may be used in mechanical equipment to vary the torque or angular speed during each turn of the driving axle.
In a material that is optically anisotropic (birefringent), the refractive index depends on the direction of the light. The dependency can be described by an index ellipsoid. (If the material is optically isotropic, this ellipsoid is a sphere.)
Mathematical definitions and properties
In Euclidean geometry
In Euclidean geometry, an ellipse is usually defined as the bounded case of a conic section, or as the locus of the points P such that the sum of the distances to two fixed points is constant.
Angular eccentricity
The angular eccentricity α of the ellipse is the angle whose sine is the eccentricity, sin α = ε; the distance from the center to either focus is a·sin(α) = aε.
Each focus F of the ellipse is associated to a line D perpendicular to the major axis (the directrix) such that the distance from any point on the ellipse to F is a constant fraction of its distance from D. The ratio between the two distances is the eccentricity ε of the ellipse; so the distance from the center to the directrix is a/ε, or a/sin(α).
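A small worked example (the semi-axes a = 5 and b = 3 are arbitrary) illustrating these quantities and checking the focus-directrix ratio at a vertex:

```python
import math

a, b = 5.0, 3.0                      # illustrative semi-major and semi-minor axes
c = math.sqrt(a**2 - b**2)           # linear eccentricity: centre-to-focus distance
eps = c / a                          # eccentricity, equal to sin(alpha)
alpha = math.asin(eps)               # angular eccentricity, in radians

d_directrix = a / eps                # centre-to-directrix distance

# Focus-directrix property checked at the vertex (a, 0):
# (distance to focus) / (distance to directrix) should equal eps.
ratio = (a - c) / (d_directrix - a)
print(c, eps, d_directrix, math.isclose(ratio, eps))   # 4.0, 0.8, 6.25, True
```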
Ellipse as hypotrochoid
The ellipse is a special case of the hypotrochoid when R=2r.
The area enclosed by an ellipse is $\pi a b$, where (as before) a and b are one half of the ellipse's major and minor axes respectively.

The circumference C of the ellipse is $4 a E(\varepsilon)$, where E is the complete elliptic integral of the second kind. It is given exactly by the infinite series
$$C = 2\pi a \left[ 1 - \left(\tfrac{1}{2}\right)^{2} \varepsilon^{2} - \left(\tfrac{1\cdot 3}{2\cdot 4}\right)^{2} \frac{\varepsilon^{4}}{3} - \left(\tfrac{1\cdot 3\cdot 5}{2\cdot 4\cdot 6}\right)^{2} \frac{\varepsilon^{6}}{5} - \cdots \right]
= 2\pi a \sum_{n=0}^{\infty} \left\{ - \left[ \frac{1}{4^{n}}\binom{2n}{n} \right]^{2} \frac{\varepsilon^{2n}}{2n-1} \right\},$$
where $\binom{2n}{n}$ is a binomial coefficient. A good approximation is Ramanujan's
$$C \approx \pi \left[ 3(a+b) - \sqrt{(3a+b)(a+3b)} \right],$$
or the better approximation
$$C \approx \pi (a+b) \left( 1 + \frac{3h}{10 + \sqrt{4-3h}} \right), \qquad h = \frac{(a-b)^{2}}{(a+b)^{2}}.$$
For the special case where the minor axis is half the major axis ($b = a/2$, so that $\varepsilon = \sqrt{3}/2$ and $h = 1/9$), these give
$$C \approx \frac{\pi a}{2}\left( 9 - \sqrt{35} \right) \approx 4.844\,a
\qquad\text{and}\qquad
C \approx \frac{3\pi a}{2}\left( 1 + \frac{1}{3\left(10+\sqrt{11/3}\right)} \right) \approx 4.844\,a,$$
both very close to the exact value $4 a\, E\!\left(\sqrt{3}/2\right)$.
More generally, the arc length of a portion of the circumference, as a function of the angle subtended, is given by an incomplete elliptic integral. The inverse function, the angle subtended as a function of the arc length, is given by the elliptic functions.
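A quick numerical check of the formulas above, assuming NumPy/SciPy are available (the semi-axes 5 and 3 are arbitrary example values); note that SciPy's ellipe takes the parameter m = ε² rather than the modulus ε:

```python
import math
from scipy.special import ellipe       # complete elliptic integral of the second kind, E(m)

a, b = 5.0, 3.0                         # illustrative semi-axes
eps = math.sqrt(1.0 - (b / a)**2)       # eccentricity

area = math.pi * a * b                  # enclosed area, pi*a*b

C_exact = 4.0 * a * ellipe(eps**2)      # exact circumference 4*a*E(eps), with m = eps**2
C_ramanujan = math.pi * (3*(a + b) - math.sqrt((3*a + b) * (a + 3*b)))

print(area, C_exact, C_ramanujan)       # 47.12..., 25.527..., 25.526...
print(abs(C_exact - C_ramanujan) / C_exact)   # relative error of a few parts in 1e5 here
```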
In projective geometry
In projective geometry, an ellipse can be defined as the set of all points of intersection between corresponding lines of two pencils of lines which are related by a projective map. By projective duality, an ellipse can be defined also as the envelope of all lines that connect corresponding points of two lines which are related by a projective map.
This definition also generates hyperbolas and parabolas. However, in projective geometry every conic section is equivalent to an ellipse. A parabola is an ellipse that is tangent to the line at infinity Ω, and the hyperbola is an ellipse that crosses Ω.
An ellipse is also the result of projecting a circle, sphere, or ellipse in three dimensions onto a plane, by parallel lines. It is also the result of conical (perspective) projection of any of those geometric objects from a point O onto a plane P, provided that the plane Q that goes through O and is parallel to P does not cut the object. The image of an ellipse by any affine map is an ellipse, and so is the image of an ellipse by any projective map M such that the line M⁻¹(Ω) does not touch or cross the ellipse.
In analytic geometry
In analytic geometry, the ellipse is the set of points $(x, y)$ of the Cartesian plane that satisfy an implicit equation of the form
$$A x^{2} + B x y + C y^{2} + D x + E y + F = 0,$$
provided that the conic is non-degenerate and $B^{2} - 4AC < 0$.

By a proper choice of coordinate system, the ellipse can be described by the canonical implicit equation
$$\frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} = 1.$$
Here $(x, y)$ are the point coordinates in the canonical system, whose origin is the center of the ellipse, whose $x$-axis is the unit vector parallel to the major axis, and whose $y$-axis is the perpendicular unit vector parallel to the minor axis.

In this system, the center is the origin $(0, 0)$ and the foci are $(-\varepsilon a, 0)$ and $(+\varepsilon a, 0)$.

Any ellipse can be obtained by rotation and translation of a canonical ellipse with the proper semi-diameters. Moreover, any canonical ellipse can be obtained by scaling the unit circle, defined by the equation
$$x^{2} + y^{2} = 1,$$
by factors a and b along the two axes.

For an ellipse in canonical form, we have
$$b^{2} = a^{2}\left(1 - \varepsilon^{2}\right),$$
so the distance from the center to each focus is $\varepsilon a = \sqrt{a^{2} - b^{2}}$. The distances from a point $(x, y)$ on the ellipse to the left and right foci are $a + \varepsilon x$ and $a - \varepsilon x$, respectively.
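The following sketch (semi-axes 5 and 3 are arbitrary example values, NumPy assumed) samples points of a canonical ellipse and verifies both the constant focal sum 2a and the closed-form focal distances a ± εx:

```python
import numpy as np

a, b = 5.0, 3.0                               # illustrative semi-axes, a > b
c = np.sqrt(a**2 - b**2)                      # centre-to-focus distance
eps = c / a
F1, F2 = np.array([-c, 0.0]), np.array([c, 0.0])

t = np.linspace(0.0, 2*np.pi, 720)
P = np.stack([a*np.cos(t), b*np.sin(t)], axis=1)   # points with x^2/a^2 + y^2/b^2 = 1

r1 = np.linalg.norm(P - F1, axis=1)           # distance to the left focus
r2 = np.linalg.norm(P - F2, axis=1)           # distance to the right focus
print(np.max(np.abs(r1 + r2 - 2*a)))          # ~1e-15: the focal sum is the constant 2a
print(np.max(np.abs(r1 - (a + eps*P[:, 0])))) # ~1e-15: r1 = a + eps*x, as stated above
```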
General parametric form
An ellipse in general position can be expressed parametrically as the path of a point $(X(t), Y(t))$, where
$$X(t) = X_{c} + a\,\cos t\,\cos\varphi - b\,\sin t\,\sin\varphi,$$
$$Y(t) = Y_{c} + a\,\cos t\,\sin\varphi + b\,\sin t\,\cos\varphi,$$
as the parameter t varies from 0 to 2π. Here $(X_{c}, Y_{c})$ is the center of the ellipse, and $\varphi$ is the angle between the $X$-axis and the major axis of the ellipse.
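A minimal sketch generating such an ellipse (the center, semi-axes and tilt below are arbitrary example values) and checking that rotating the points back by −φ about the center recovers the canonical equation:

```python
import numpy as np

Xc, Yc, a, b, phi = 2.0, -1.0, 5.0, 3.0, np.radians(30.0)   # example parameters

t = np.linspace(0.0, 2*np.pi, 720)
X = Xc + a*np.cos(t)*np.cos(phi) - b*np.sin(t)*np.sin(phi)
Y = Yc + a*np.cos(t)*np.sin(phi) + b*np.sin(t)*np.cos(phi)

# Undo the tilt about the centre; the result must satisfy the canonical equation.
u =  (X - Xc)*np.cos(phi) + (Y - Yc)*np.sin(phi)
v = -(X - Xc)*np.sin(phi) + (Y - Yc)*np.cos(phi)
print(np.max(np.abs((u/a)**2 + (v/b)**2 - 1.0)))   # ~1e-15
```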
Canonical form
For an ellipse in canonical position (center at origin, major axis along the X-axis), the equation simplifies to
$$X(t) = a\cos t, \qquad Y(t) = b\sin t.$$
This is not the equation of the ellipse in polar form, since the parameter t is not the angle of $(X(t), Y(t))$ with the X-axis.
Polar form relative to focus
With one focus at the origin, the ellipse's polar equation is
$$r(\theta) = \frac{a'}{1 + \varepsilon\cos\theta},$$
where the angle θ is the true anomaly of the point and the numerator $a'$ is the radius of curvature at the ends of the major axis (smaller than b), equaling the ellipse's semi-latus rectum, usually denoted $\ell$, and also equaling the distance from a focus of the ellipse to the ellipse itself, measured along a line perpendicular to the major axis:
$$a' = \ell = a\left(1 - \varepsilon^{2}\right) = \frac{b^{2}}{a}.$$
Likewise, the radius of curvature at the ends of the minor axis (larger than a) can be similarly denoted as $b'$ or $c$:
$$b' = \frac{a^{2}}{b}.$$
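A small check of the polar form (semi-axes 5 and 3 again serve as arbitrary example values): placing the origin at the focus (εa, 0) and converting back to the centered frame must reproduce the canonical equation, and the numerator must equal b²/a.

```python
import numpy as np

a, b = 5.0, 3.0
eps = np.sqrt(1.0 - (b / a)**2)
l = a * (1.0 - eps**2)                 # the numerator a', i.e. the semi-latus rectum

theta = np.linspace(0.0, 2*np.pi, 720)
r = l / (1.0 + eps*np.cos(theta))      # polar form with the origin at one focus

x = eps*a + r*np.cos(theta)            # shift back to the centre (focus at (+eps*a, 0))
y = r*np.sin(theta)
print(np.max(np.abs((x/a)**2 + (y/b)**2 - 1.0)))   # ~1e-14
print(np.isclose(l, b**2 / a))                      # True: a' = b^2/a
```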
The Gauss-mapped equation of the ellipse gives the coordinates $(x, y)$ of the point on the ellipse where the normal makes an angle $\varphi$ with the X-axis:
$$x = \frac{a^{2}\cos\varphi}{\sqrt{a^{2}\cos^{2}\varphi + b^{2}\sin^{2}\varphi}}, \qquad
y = \frac{b^{2}\sin\varphi}{\sqrt{a^{2}\cos^{2}\varphi + b^{2}\sin^{2}\varphi}}.$$
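A sketch verifying this parametrization numerically (arbitrary semi-axes 5 and 3): the generated points must lie on the ellipse, and the gradient of x²/a² + y²/b², which is normal to the curve, must point in the direction (cos φ, sin φ).

```python
import numpy as np

a, b = 5.0, 3.0
phi = np.linspace(0.0, 2*np.pi, 720)          # prescribed angle of the outward normal

d = np.sqrt(a**2*np.cos(phi)**2 + b**2*np.sin(phi)**2)
x = a**2*np.cos(phi) / d
y = b**2*np.sin(phi) / d

print(np.max(np.abs((x/a)**2 + (y/b)**2 - 1.0)))        # ~1e-15: points lie on the ellipse
normal_angle = np.arctan2(y / b**2, x / a**2)           # direction of the gradient (the normal)
wrap = np.angle(np.exp(1j*(normal_angle - phi)))        # angle difference, wrapped to (-pi, pi]
print(np.max(np.abs(wrap)))                             # ~1e-15: the normal angle equals phi
```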
Degrees of freedom
An ellipse in the plane has five degrees of freedom, the same as a general conic section. Said another way, the set of all ellipses in the plane, with any natural metric (such as the Hausdorff distance), is a five-dimensional manifold. These degrees can be identified with the six coefficients of the implicit equation, taken up to an overall scale factor. In comparison, circles have only three degrees of freedom, while parabolas have four.
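This is why five points in general position determine a conic. The sketch below (the five sample points are taken from a hypothetical ellipse with semi-axes 5 and 3) recovers the implicit-equation coefficients as the null space of a 5×6 linear system:

```python
import numpy as np

# Five points sampled from a known test ellipse (hypothetical data).
a, b = 5.0, 3.0
t = np.array([0.1, 1.0, 2.2, 3.7, 5.5])
x, y = a*np.cos(t), b*np.sin(t)

# A conic A x^2 + B xy + C y^2 + D x + E y + F = 0 has six coefficients but only five
# degrees of freedom (overall scale is irrelevant), so five generic points determine it.
M = np.stack([x**2, x*y, y**2, x, y, np.ones_like(x)], axis=1)
_, _, Vt = np.linalg.svd(M)
coeffs = Vt[-1]                         # null-space vector: the conic through the five points
print(coeffs / coeffs[0])               # ~ [1, 0, a^2/b^2, 0, 0, -a^2] up to rounding
```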
See also
- Apollonius of Perga, the classical authority
- Ellipsoid, a higher dimensional analog of an ellipse
- Spheroid, the ellipsoids obtained by rotating an ellipse about its major or minor axis.
- Superellipse, a generalization of an ellipse that can look more rectangular or more "pointy"
- True, eccentric, and mean anomaly
- Matrix representation of conic sections
- Kepler's Laws of Planetary Motion
- Elliptic coordinates, an orthogonal coordinate system based on families of ellipses and hyperbolae
- ↑ Haswell, Charles Haynes (1920). Mechanics' and Engineers' Pocket-book of Tables, Rules, and Formulas. Harper & Brothers. http://books.google.com/books?id=Uk4wAAAAMAAJ&pg=RA1-PA381&zoom=3&hl=en&sig=3QTM7ZfZARnGnPoqQSDMbx8JeHg. Retrieved 2007-04-09.
- ↑ Woodworking videos showing how to cut ellipses with a router.
- Charles D.Miller, Margaret L.Lial, David I.Schneider: Fundamentals of College Algebra. 3rd Edition Scott Foresman/Little 1990. ISBN 0-673-38638-4. Page 381
- Coxeter, H. S. M.: Introduction to Geometry, 2nd ed. New York: Wiley, pp. 115-119, 1969.
- Ellipse at the Encyclopedia of Mathematics (Springer)
- Ellipse at Planetmath
- Weisstein, Eric W., "Ellipse" from MathWorld.
- Apollonius' Derivation of the Ellipse at Convergence
- Ellipse & Hyperbola Construction - An interactive sketch showing how to trace the curves of the ellipse and hyperbola. (Requires Java.)
- Ellipse Construction - Another interactive sketch, this time showing a different method of tracing the ellipse. (Requires Java.)
- The Shape and History of The Ellipse in Washington, D.C. by Clark Kimberling
- Collection of animated ellipse demonstrations. Ellipse, axes, semi-axes, area, perimeter, tangent, foci.
- Ellipse as hypotrochoid
Methods of detecting extrasolar planets
Any planet is an extremely faint light source compared to its parent star. In addition to the intrinsic difficulty of detecting such a faint light source, the light from the parent star causes a glare that washes it out. For those reasons, fewer than 5% of the extrasolar planets known as of November 2011 have been observed directly.
Instead, astronomers have generally had to resort to indirect methods to detect extrasolar planets. At the present time, several different indirect methods have yielded success.
Established detection methods
Radial velocity
A star with a planet will move in its own small orbit in response to the planet's gravity. This leads to variations in the speed with which the star moves toward or away from Earth, i.e. the variations are in the radial velocity of the star with respect to Earth. The radial velocity can be deduced from the displacement in the parent star's spectral lines due to the Doppler effect. The radial-velocity method measures these variations in order to confirm the presence of the planet.
The velocity of the star around the system's center of mass is much smaller than that of the planet, because the radius of its orbit around the center of mass is so small. However, velocity variations down to 1 m/s or even somewhat less can be detected with modern spectrometers, such as the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrometer at the ESO 3.6 meter telescope in La Silla Observatory, Chile, or the HIRES spectrometer at the Keck telescopes. An especially simple and inexpensive method for measuring radial velocity is "externally dispersed interferometry".
The radial-velocity method has been by far the most productive technique used by planet hunters. It is also known as Doppler spectroscopy. The method is distance independent, but requires high signal-to-noise ratios to achieve high precision, and so is generally only used for relatively nearby stars out to about 160 light-years from Earth. It easily finds massive planets that are close to stars. Modern spectrographs can also easily detect Jupiter-mass planets orbiting 10 astronomical units away from the parent star but detection of those planets requires many years of observation.
It is easier to detect planets around low-mass stars for two reasons. First, these stars are more strongly affected by the gravitational tug of their planets. Second, low-mass main-sequence stars generally rotate relatively slowly; fast rotation smears out the spectral lines, because half of the star's surface moves away from the observer while the other half moves closer.
Planets with orbits highly inclined to the line of sight from Earth produce smaller wobbles, and are thus more difficult to detect. One advantage of the radial-velocity method is that the eccentricity of the planet's orbit can be measured directly. One of its main disadvantages is that it can only estimate a planet's minimum mass. The posterior distribution of the inclination angle depends on the true mass distribution of the planets. The radial-velocity method can be used to confirm findings made by the transit method, and when the two methods are used in combination the planet's true mass can be estimated.
Although the radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's, then the radial velocity of the planet itself can be found; this gives the inclination of the planet's orbit, and therefore the planet's actual mass can be determined.
Pulsar timing
A pulsar is a neutron star: the small, ultradense remnant of a star that has exploded as a supernova. Pulsars emit radio waves extremely regularly as they rotate. Because the intrinsic rotation of a pulsar is so regular, slight anomalies in the timing of its observed radio pulses can be used to track the pulsar's motion. Like an ordinary star, a pulsar will move in its own small orbit if it has a planet. Calculations based on pulse-timing observations can then reveal the parameters of that orbit.
This method was not originally designed for the detection of planets, but is so sensitive that it is capable of detecting planets far smaller than any other method can, down to less than a tenth the mass of Earth. It is also capable of detecting mutual gravitational perturbations between the various members of a planetary system, thereby revealing further information about those planets and their orbital parameters. In addition, it can easily detect planets which are relatively far away from the pulsar.
The main drawback of the pulsar-timing method is that pulsars are relatively rare, so it is unlikely that a large number of planets will be found this way. Also, life as we know it could not survive on planets orbiting pulsars since high-energy radiation there is extremely intense.
In 1992 Aleksander Wolszczan and Dale Frail used this method to discover planets around the pulsar PSR 1257+12. Their discovery was quickly confirmed, making it the first confirmation of planets outside our Solar System.
Transit method
While the above methods provide information about a planet's mass, this photometric method can determine the radius of a planet. If a planet crosses (transits) in front of its parent star's disk, then the observed visual brightness of the star drops a small amount. The amount the star dims depends on the relative sizes of the star and the planet. For example, in the case of HD 209458, the star dims 1.7%.
This method has two major disadvantages. First of all, planetary transits are only observable for planets whose orbits happen to be perfectly aligned from the astronomers' vantage point. The probability of a planetary orbital plane being directly on the line-of-sight to a star is the ratio of the diameter of the star to the diameter of the orbit. About 10% of planets with small orbits have such alignment, and the fraction decreases for planets with larger orbits. For a planet orbiting a sun-sized star at 1 AU, the probability of a random alignment producing a transit is 0.47%. Therefore the method cannot answer the question of whether any particular star is a host to planets. However, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can in principle find extrasolar planets at a rate that could potentially exceed that of the radial-velocity method. Several surveys have taken that approach, such as the ground-based MEarth Project and the space-based COROT and Kepler missions.
Secondly, the method suffers from a high rate of false detections. A 2012 study found that the rate of false positives for transits observed by the Kepler mission could be as high as 35%. For this reason, a transit detection requires additional confirmation, typically from the radial-velocity method.
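To make the geometry behind these numbers concrete, the Python sketch below (a supplement, not part of the original article) evaluates the transit depth for an illustrative hot-Jupiter case (assumed radii of 1.4 Jupiter radii and 1.1 solar radii) and the alignment probability for a Sun-sized star with a planet at 1 AU, which reproduces the roughly 0.47% figure quoted above.

R_SUN = 6.957e8   # m
R_JUP = 7.149e7   # m
AU = 1.496e11     # m

def transit_depth(r_planet, r_star):
    """Fractional dip in stellar flux during a central transit."""
    return (r_planet / r_star) ** 2

def transit_probability(r_star, semi_major_axis):
    """Geometric probability that a randomly oriented orbit transits."""
    return r_star / semi_major_axis

# Illustrative hot-Jupiter case (assumed 1.4 R_Jup planet, 1.1 R_Sun star):
# a depth of roughly 1.7 percent
print(transit_depth(1.4 * R_JUP, 1.1 * R_SUN))

# Sun-sized star with a planet at 1 AU: about 0.0047, i.e. the ~0.47
# percent alignment probability quoted in the text
print(transit_probability(R_SUN, 1.0 * AU))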
The main advantage of the transit method is that the size of the planet can be determined from the lightcurve. When combined with the radial-velocity method (which determines the planet's mass) one can determine the density of the planet, and hence learn something about the planet's physical structure. The planets that have been studied by both methods are by far the best-characterized of all known exoplanets.
The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet. By studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere (and the planet itself, for that matter) could also be detected by measuring the polarisation of the starlight as it passes through or is reflected off the planet's atmosphere.
Additionally, the secondary eclipse (when the planet is blocked by its star) allows direct measurement of the planet's radiation and helps to constrain the planet's eccentricity without the presence of other planets. If the star's photometric intensity during the secondary eclipse is subtracted from its intensity before or after, only the signal caused by the planet remains. It is then possible to measure the planet's temperature and even to detect possible signs of cloud formations on it. In March 2005, two groups of scientists carried out measurements using this technique with the Spitzer Space Telescope. The two teams, from the Harvard-Smithsonian Center for Astrophysics, led by David Charbonneau, and the Goddard Space Flight Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b respectively. The measurements revealed the planets' temperatures: 1,060 K (790°C) for TrES-1 and about 1,130 K (860°C) for HD 209458b. In addition the hot Neptune Gliese 436 b enters secondary eclipse. However some transiting planets orbit such that they do not enter secondary eclipse relative to Earth; HD 17156 b is over 90% likely to be one of the latter.
A French Space Agency mission, COROT, began in 2006 to search for planetary transits from orbit, where the absence of atmospheric scintillation allows improved accuracy. This mission was designed to be able to detect planets "a few times to several times larger than Earth" and is currently performing "better than expected", with two exoplanet discoveries (both "hot Jupiter" type) as of early 2008. The 17th CoRoT exoplanet was announced in 2010.
In March 2009, NASA mission Kepler was launched to scan a large number of stars in the constellation Cygnus with a measurement precision expected to detect and characterize Earth-sized planets. The NASA Kepler Mission uses the transit method to scan a hundred thousand stars in the constellation Cygnus for planets. It is hoped that by the end of its mission of 3.5 years, the satellite will have collected enough data to reveal planets even smaller than Earth. By scanning a hundred thousand stars simultaneously, it will not only be able to detect Earth-sized planets, it will be able to collect statistics on the numbers of such planets around sunlike stars.
On February 2, 2011, the Kepler team released a list of 1,235 extrasolar planet candidates, including 54 that may be in the habitable zone. On December 5, 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data.
Transit timing variation method (TTV)
If a planet has been detected by the transit method, then variations in the timing of the transit provide an extremely sensitive method which is capable of detecting additional planets in the system, with sizes potentially as small as Earth-sized planets. The first significant detection of a non-transiting planet using TTV was carried out with NASA's Kepler satellite. The transiting planet Kepler-19b shows TTV with an amplitude of 5 minutes and a period of about 300 days, indicating the presence of a second planet, Kepler-19c, whose period is a near-rational multiple of the period of the transiting planet.
"Timing variation" asks whether the transit occurs with strict periodicity or if there's a variation.
Transit duration variation method (TDV)
Orbital phase reflected light variations
Short-period giant planets in close orbits around their stars will undergo reflected-light variations because, like the Moon, they go through phases from full to new and back again. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required is about the same as that needed to detect an Earth-sized planet in transit across a solar-type star), such Jupiter-sized planets are detectable by space telescopes such as the Kepler Space Observatory. As with many other methods, it is easier to detect large planets orbiting close to their parent star than other planets, as these planets catch more light from their parent star. In the long run, this method may find the most planets that will be discovered by that mission, because the reflected-light variation with orbital phase is largely independent of the orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint, as the amount of reflected light does not change. In addition, the phase function of the giant planet is also a function of its thermal properties and atmosphere, if any. Therefore the phase curve may constrain other planet properties, such as the particle size distribution of the atmospheric particles.
Both Corot and Kepler have measured the reflected light from planets. However, these planets were already known since they transit their host star. The first planets discovered by this method are Kepler-70b and Kepler-70c, found by Kepler.
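For a sense of scale, the sketch below compares the reflected-light amplitude of a hot Jupiter, roughly the geometric albedo times (R_p/a)², with the transit depth of an Earth-sized planet crossing a Sun-sized star. The albedo of 0.3 and orbital distance of 0.05 AU are illustrative assumptions, not values from the article.

R_SUN = 6.957e8    # m
R_JUP = 7.149e7    # m
R_EARTH = 6.371e6  # m
AU = 1.496e11      # m

def reflected_amplitude(geometric_albedo, r_planet, semi_major_axis):
    """Peak fractional brightening from light reflected by the planet."""
    return geometric_albedo * (r_planet / semi_major_axis) ** 2

# Hot Jupiter at an assumed 0.05 AU with an assumed geometric albedo of 0.3:
# a few times 1e-5, i.e. tens of parts per million
print(reflected_amplitude(0.3, R_JUP, 0.05 * AU))

# Transit depth of an Earth-sized planet across a Sun-sized star:
# (R_Earth / R_Sun)^2, about 8e-5 -- a comparable photometric signal
print((R_EARTH / R_SUN) ** 2)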
Light variations due to Relativistic Beaming
A separate novel method to detect exoplanets from light variations uses relativistic beaming of the observed flux from the star due to its motion. The method was first proposed by Abraham Loeb and Scott Gaudi in 2003. As the planet tugs on the star gravitationally, the density of photons, and therefore the apparent brightness of the star, changes from the observer's viewpoint. Like the radial-velocity method, it can be used to determine the orbital eccentricity and the minimum mass of the planet. Unlike the radial-velocity method, it does not require an accurate spectrum of the star, and therefore can be used more easily to find planets around fast-rotating stars.
Gravitational microlensing
Gravitational microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. This effect occurs only when the two stars are almost exactly aligned. Lensing events are brief, lasting for weeks or days, as the two stars and Earth are all moving relative to each other. More than a thousand such events have been observed over the past ten years.
If the foreground lensing star has a planet, then that planet's own gravitational field can make a detectable contribution to the lensing effect. Since that requires a highly improbable alignment, a very large number of distant stars must be continuously monitored in order to detect planetary microlensing contributions at a reasonable rate. This method is most fruitful for planets between Earth and the center of the galaxy, as the galactic center provides a large number of background stars.
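For a feel for the scales involved, the sketch below evaluates the Einstein ring radius and the event timescale for an assumed configuration (a 0.3 solar-mass lens at 4 kpc, a source at 8 kpc, and a 200 km/s transverse velocity); the numbers are illustrative assumptions, not values from the article.

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
KPC = 3.086e19       # m
AU = 1.496e11        # m

def einstein_radius(m_lens, d_lens, d_source):
    """Einstein ring radius projected into the lens plane, in metres."""
    theta_e = math.sqrt(4.0 * G * m_lens / C ** 2
                        * (d_source - d_lens) / (d_lens * d_source))
    return theta_e * d_lens

# Assumed configuration: 0.3 M_sun lens at 4 kpc, source at 8 kpc
r_e = einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC)

print(r_e / AU)                 # ~2 AU: planets a few AU out perturb the light curve
print(r_e / 2.0e5 / 86400.0)    # crossing time at 200 km/s: roughly 20 days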
In 1991, astronomers Shude Mao and Bohdan Paczyński proposed using gravitational microlensing to look for binary companions to stars, and their proposal was sharpened by Andy Gould and Abraham Loeb in 1992 as a method to detect exoplanets. Successes with the method date back to 2002, when a group of Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał Szymański from Warsaw, and Bohdan Paczyński) during project OGLE (the Optical Gravitational Lensing Experiment) developed a workable technique. During one month they found several possible planets, though limitations in the observations prevented clear confirmation. Since then, several confirmed extrasolar planets have been detected using microlensing. This was the first method capable of detecting planets of Earthlike mass around ordinary main-sequence stars.
A notable disadvantage of the method is that the lensing cannot be repeated, because the chance alignment never occurs again. Also, the detected planets will tend to be several kiloparsecs away, so follow-up observations with other methods are usually impossible. In addition, the only physical characteristic that can be determined by microlensing is a loose constraint on the mass of the planet, and the only orbital characteristic that can be directly determined is its current semi-major axis relative to the parent star. However, the microlensing method is less biased towards detecting massive planets close to their parent stars than the majority of other methods, and it can detect planets around very distant stars. When enough background stars can be observed with enough accuracy, the method should eventually reveal how common Earth-like planets are in the galaxy.
Observations are usually performed using networks of robotic telescopes. In addition to the European Research Council-funded OGLE, the Microlensing Observations in Astrophysics (MOA) group is working to perfect this approach.
The PLANET (Probing Lensing Anomalies NETwork)/RoboNet project is even more ambitious. It allows nearly continuous round-the-clock coverage by a world-spanning telescope network, providing the opportunity to pick up microlensing contributions from planets with masses as low as Earth. This strategy was successful in detecting the first low-mass planet on a wide orbit, designated OGLE-2005-BLG-390Lb.
Direct imaging
As mentioned previously, planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect them directly. It is easier to obtain images when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot, so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star while leaving the planet visible. While direct imaging cannot be used to directly determine the mass and size of the exoplanet, it is independent of the orbit's inclination as seen from Earth, and it can be used to accurately measure the planet's orbit around the star. In addition, the spectrum emitted by a planet helps to determine its chemical composition.
Early discoveries
In 2004, a group of astronomers used the European Southern Observatory's Very Large Telescope array in Chile to produce an image of 2M1207b, a companion to the brown dwarf 2M1207. In the following year the planetary status of the companion was confirmed. The planet is believed to be several times more massive than Jupiter and to have an orbital radius greater than 40 AU.
In September 2008, an object was imaged at a separation of 330AU from the star 1RXS J160929.1−210524, but it was not until 2010 that it was confirmed to be a companion planet to the star and not just a chance alignment.
The first multiplanet system, announced on 13 November 2008, was imaged in 2007 using telescopes at both Keck Observatory and Gemini Observatory. Three planets were directly observed orbiting HR 8799, whose masses are approximately 10, 10 and 7 times that of Jupiter. On the same day, 13 November 2008, it was announced that the Hubble Space Telescope directly observed an exoplanet orbiting Fomalhaut with mass no more than 3MJ. Both systems are surrounded by disks not unlike the Kuiper belt.
In 2009 it was announced that analysis of images dating back to 2003 revealed a planet orbiting Beta Pictoris.
In 2012 it was announced that a "Super-Jupiter" planet with a mass about 12.8MJ orbiting Kappa Andromedae was directly imaged using the Subaru Telescope in Hawaii. It orbits its parent star at a distance of about 55 astronomical units, or nearly twice the distance of Neptune to the sun.
Other possible exoplanets to have been directly imaged: GQ Lupi b, AB Pictoris b, and SCR 1845 b. As of March 2006 none have been confirmed as planets; instead, they might themselves be small brown dwarfs.
Imaging instruments
In 2010 a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a 1.5 m portion of the Hale Telescope.
It has also been proposed that space-telescopes that focus light using zone plates instead of mirrors would provide higher-contrast imaging and be cheaper to launch into space due to being able to fold up the lightweight foil zone plate.
Other possible methods
Astrometry
This method consists of precisely measuring a star's position in the sky and observing how that position changes over time. Originally this was done visually, with hand-written records. By the end of the 19th century this method used photographic plates, greatly improving the accuracy of the measurements as well as creating a data archive. If the star has a planet, then the gravitational influence of the planet will cause the star itself to move in a tiny circular or elliptical orbit. Effectively, star and planet each orbit around their mutual center of mass (barycenter), as explained by solutions to the two-body problem. Since the star is much more massive, its orbit will be much smaller. Frequently, the mutual center of mass will lie within the radius of the larger body.
Astrometry is the oldest search method for extrasolar planets, and was originally popular because of its success in characterizing astrometric binary star systems. It dates back at least to statements made by William Herschel in the late 18th century: he claimed that an unseen companion was affecting the position of the star he cataloged as 70 Ophiuchi. The first known formal astrometric calculation for an extrasolar planet was made by W. S. Jacob in 1855 for this star. Similar calculations were repeated by others for another half-century, until finally refuted in the early 20th century. For two centuries, claims circulated of unseen companions in orbit around nearby star systems, all reportedly found using this method, culminating in the prominent 1996 announcement of multiple planets orbiting the nearby star Lalande 21185 by George Gatewood. None of these claims survived scrutiny by other astronomers, and the technique fell into disrepute. Unfortunately, the changes in stellar position are so small, and atmospheric and systematic distortions so large, that even the best ground-based telescopes cannot produce precise enough measurements. All claims of a planetary companion of less than 0.1 solar mass made before 1996 using this method are likely spurious. In 2002, the Hubble Space Telescope did succeed in using astrometry to characterize a previously discovered planet around the star Gliese 876.
One potential advantage of the astrometric method is that it is most sensitive to planets with large orbits. This makes it complementary to other methods that are most sensitive to planets with small orbits. However, very long observation times will be required — years, and possibly decades, as planets far enough from their star to allow detection via astrometry also take a long time to complete an orbit.
In 2009 the discovery of VB 10b by astrometry was announced. This planetary object was reported to have a mass 7 times that of Jupiter and to orbit the nearby low-mass red dwarf star VB 10. If confirmed, this would be the first exoplanet discovered by astrometry of the many that have been claimed through the years. However, recent independent radial-velocity studies rule out the existence of the claimed planet.
Eclipsing binary minima timing
When a double star system is aligned such that – from the Earth's point of view – the stars pass in front of each other in their orbits, the system is called an "eclipsing binary" star system. The time of minimum light, when the star with the brighter surface area is at least partially obscured by the disc of the other star, is called the primary eclipse, and approximately half an orbit later, the secondary eclipse occurs when the brighter surface area star obscures some portion of the other star. These times of minimum light, or central eclipse, constitute a time stamp on the system, much like the pulses from a pulsar (except that rather than a flash, they are a dip in the brightness). If there is a planet in circum-binary orbit around the binary stars, the stars will be offset around a binary-planet center of mass. As the stars in the binary are displaced by the planet back and forth, the times of the eclipse minima will vary; they will be too late, on time, too early, on time, too late, etc.. The periodicity of this offset may be the most reliable way to detect extrasolar planets around close binary systems.
The eclipse timing method allows the detection of planets farther away from the host star than the transit method does. However, signals around cataclysmic variable stars hinting at planets tend to correspond to unstable orbits.
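The size of the timing offset is set by the light-travel time across the binary's small reflex orbit about the combined centre of mass. The sketch below estimates it for an assumed Jupiter-mass planet 5 AU from a binary totalling 1.5 solar masses; the values are illustrative assumptions.

AU = 1.496e11      # m
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
M_JUP = 1.898e27   # kg

def eclipse_timing_amplitude(m_planet, m_binary, a_planet):
    """Peak eclipse-time shift, in seconds, from the light-travel-time effect."""
    # Reflex orbit of the binary's centre of mass about the binary-planet barycentre
    a_binary = a_planet * m_planet / (m_binary + m_planet)
    return a_binary / C

# Assumed Jupiter-mass planet at 5 AU around a 1.5 solar-mass binary:
# eclipse minima arrive up to ~1.6 seconds early or late
print(eclipse_timing_amplitude(M_JUP, 1.5 * M_SUN, 5 * AU))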
Polarimetry
Light given off by a star is unpolarized, i.e. the direction of oscillation of the light wave is random. However, when the light is reflected off the atmosphere of a planet, the light waves interact with the molecules in the atmosphere and become polarized.
By analyzing the polarization in the combined light of the planet and star (about one part in a million), these measurements can in principle be made with very high sensitivity, as polarimetry is not limited by the stability of the Earth's atmosphere.
Astronomical devices used for polarimetry, called polarimeters, are capable of detecting the polarized light and rejecting the unpolarized beams (starlight). Groups such as ZIMPOL/CHEOPS and PlanetPol are currently using polarimeters to search for extra-solar planets, though no planets have yet been detected using this method.
Auroral radio emissions
Pulsation frequency
Some pulsating variable stars are regular enough that radial velocity could be determined purely photometrically from the Doppler shift of the pulsation frequency, without needing spectroscopy.
Detection of extrasolar asteroids and debris disks
Circumstellar disks
Disks of space dust (debris disks) surround many stars. The dust can be detected because it absorbs ordinary starlight and re-emits it as infrared radiation. Even if the dust particles have a total mass well less than that of Earth, they can still have a large enough total surface area that they outshine their parent star in infrared wavelengths.
The Hubble Space Telescope is capable of observing dust disks with its NICMOS (Near Infrared Camera and Multi-Object Spectrometer) instrument. Even better images have now been taken by its sister instrument, the Spitzer Space Telescope, and by the European Space Agency's Herschel Space Observatory, which can see far deeper into infrared wavelengths than the Hubble can. Dust disks have now been found around more than 15% of nearby sunlike stars.
The dust is believed to be generated by collisions among comets and asteroids. Radiation pressure from the star will push the dust particles away into interstellar space over a relatively short timescale. Therefore, the detection of dust indicates continual replenishment by new collisions, and provides strong indirect evidence of the presence of small bodies like comets and asteroids that orbit the parent star. For example, the dust disk around the star tau Ceti indicates that that star has a population of objects analogous to our own Solar System's Kuiper Belt, but at least ten times thicker.
More speculatively, features in dust disks sometimes suggest the presence of full-sized planets. Some disks have a central cavity, meaning that they are really ring-shaped. The central cavity may be caused by a planet "clearing out" the dust inside its orbit. Other disks contain clumps that may be caused by the gravitational influence of a planet. Both these kinds of features are present in the dust disk around epsilon Eridani, hinting at the presence of a planet with an orbital radius of around 40 AU (in addition to the inner planet detected through the radial-velocity method). These kinds of planet-disk interactions can be modeled numerically using collisional grooming techniques.
Contamination of stellar atmospheres
Recent spectral analysis of white dwarfs' atmospheres by the Spitzer Space Telescope found contamination by heavier elements such as magnesium and calcium. These elements cannot originate in the star's core, and it is probable that the contamination comes from asteroids that got too close (within the Roche limit) to these stars through gravitational interaction with larger planets, and were torn apart by the stars' tidal forces. Spitzer data suggest that 1-3% of white dwarfs show similar contamination.
Double Integrals and Volume
Double Riemann Sums
In first-year calculus, the definite integral was defined as a limit of Riemann sums and gave the area under a curve. There is a similar definition for the volume of a region below a function of two variables. Let f(x, y) be a positive function of two variables, and consider the solid that is bounded above by the graph of f(x, y) and below by a region R in the xy-plane.
For a two-dimensional region, we approximated the area by adding up the areas of many approximating rectangles. For the volume of a three-dimensional solid, we take a similar approach. Instead of rectangles, we use rectangular solids for the approximation. We cut the region R into rectangles by drawing vertical and horizontal lines in the xy-plane. Each rectangle becomes the base of a thin rectangular solid, whose height is the value of f at (for example) the lower-left vertex of the rectangle. One such rectangular solid is shown in the figure.
Taking the limit as the rectangle size approaches zero (and the number of rectangles approaches infinity) will give the volume of the solid. If we fix a value of x and look at the rectangular solids that contain this x, the union of the solids will be a slab of constant width Δx. The area of its face is approximately the area under the one-variable curve z = f(x, y) in the yz-plane (one variable, since x is held constant).
This area is equal to
A(x) = ∫ f(x, y) dy,
where the integral is taken over the y-values lying in R for that fixed x. If we add up all these slices and take the limit as Δx approaches zero, we get
V = ∫ A(x) dx = ∫ ∫ f(x, y) dy dx,
which is just the double integral defined in the last section.
Instead of fixing the variable x we could have held y constant. The picture below illustrates the resulting wedge.
By a similar argument, the area of the wedge's face is
A(y) = ∫ f(x, y) dx,
and adding up all the wedge volumes gives the total volume
V = ∫ A(y) dy = ∫ ∫ f(x, y) dx dy.
This shows that the volume is equal to the iterated integral no matter which variable we integrate first. This is called Fubini's Theorem. Technically, the volume is defined as the limit of double Riemann sums of f(x, y), where we sum over a partition of R in the xy-plane. We state the theorem below.
Fubini's Theorem: If f(x, y) is continuous on the rectangle R = [a, b] × [c, d], then
∫∫_R f(x, y) dA = ∫_a^b ∫_c^d f(x, y) dy dx = ∫_c^d ∫_a^b f(x, y) dx dy.
Notice that all the typical properties of the double integral hold. For example, constants can be pulled out and the double integral of the sum of two functions is the sum of the double integrals of each function.
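As a quick numerical illustration of the double Riemann sum and of Fubini's theorem (a supplement to the original page), the Python sketch below approximates the volume under the test function f(x, y) = x²y over the rectangle [0, 1] × [0, 2] with midpoint sums; swapping the roles of the two variables gives the same limit, the exact value 2/3.

def midpoint_double_sum(f, a, b, c, d, nu, nv):
    """Midpoint-rule double Riemann sum of f(u, v) over [a, b] x [c, d]."""
    du, dv = (b - a) / nu, (d - c) / nv
    total = 0.0
    for i in range(nu):
        u = a + (i + 0.5) * du
        for j in range(nv):
            v = c + (j + 0.5) * dv
            total += f(u, v) * du * dv
    return total

f = lambda x, y: x * x * y   # exact volume over [0, 1] x [0, 2] is 2/3

# Finer partitions approach the exact value 2/3
for n in (4, 16, 64):
    print(n, midpoint_double_sum(f, 0.0, 1.0, 0.0, 2.0, n, n))

# Fubini: swapping the roles of x and y (outer range [0, 2]) gives the same limit
print(midpoint_double_sum(lambda y, x: x * x * y, 0.0, 2.0, 0.0, 1.0, 80, 40))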
Example: Set up the integral to find the volume of the solid that lies below the cone
z = 4 - √(x² + y²)
and above the xy-plane.
The cone is sketched below
We can see that the region R is the disk in the xy-plane bounded by the circle we obtain by setting z = 0:
0 = 4 - √(x² + y²), that is, x² + y² = 16.
Solving for y (by moving the square root to the left-hand side, squaring both sides, etc.) gives
y = ±√(16 - x²).
The "-" gives the lower limit and the "+" gives the upper limit. For the outer limits, we can see that
-4 < x < 4.
Putting this all together gives
V = ∫_{-4}^{4} ∫_{-√(16-x²)}^{√(16-x²)} ( 4 - √(x² + y²) ) dy dx.
Either by hand or by machine we can obtain the result
Volume = 64π/3
Notice that this agrees with the formula
Volume = π r²h/3
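The result can be checked numerically; the Python sketch below (a supplement to the original page) sums the reconstructed integrand z = 4 - √(x² + y²) over the disk x² + y² ≤ 16 with a midpoint grid and compares the total with 64π/3 ≈ 67.02, which is the cone formula with r = 4 and h = 4.

import math

def cone_volume_estimate(n=400):
    """Midpoint sum of z = 4 - sqrt(x^2 + y^2) over the disk x^2 + y^2 <= 16."""
    h = 8.0 / n                      # grid spacing on the square [-4, 4] x [-4, 4]
    total = 0.0
    for i in range(n):
        x = -4.0 + (i + 0.5) * h
        for j in range(n):
            y = -4.0 + (j + 0.5) * h
            z = 4.0 - math.sqrt(x * x + y * y)
            if z > 0.0:              # keep only the part of the square inside the disk
                total += z * h * h
    return total

print(cone_volume_estimate())        # approximately 67.0
print(64.0 * math.pi / 3.0)          # 67.0206...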
Exercise: Set up the double integral for this problem with dx dy instead of dy dx. Then show that the two integrals give the same result.
Example: Set up the double integral that gives the volume of the solid that lies below the sphere
x² + y² + z² = 6
and above the paraboloid
z = x² + y²
The picture below indicates that the region is the disk that lies inside the circle of intersection of the two surfaces. To find that circle, we substitute z = x² + y² into the equation of the sphere:
x² + y² + (x² + y²)² = 6
x² + y² + (x² + y²)² - 6 = 0
Now factor, with x² + y² as the variable, to get
(x² + y² - 2)(x² + y² + 3) = 0
The second factor has no solution, while the first is
x² + y² = 2
Solving for y gives
y = ±√(2 - x²),
and for the outer limits
-√2 < x < √2.
Just as we did in one-variable calculus, the volume between two surfaces is the double integral of the top surface minus the bottom surface. We have
V = ∫_{-√2}^{√2} ∫_{-√(2-x²)}^{√(2-x²)} ( √(6 - x² - y²) - (x² + y²) ) dy dx.
Again we can perform this integral by hand or by machine and get
Volume = 7.74
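This value can also be verified numerically. The sketch below (a supplement to the original page) integrates in polar coordinates, which is also the easiest route by hand, and compares the result with the exact value (2π/3)(6√6 - 11) ≈ 7.743.

import math

def volume_estimate(n_r=2000):
    """Integral of (sqrt(6 - r^2) - r^2) * r for 0 <= r <= sqrt(2), times 2*pi."""
    # In polar coordinates the integrand has no angular dependence,
    # so the theta integral just contributes a factor of 2*pi.
    r_max = math.sqrt(2.0)
    dr = r_max / n_r
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr
        total += (math.sqrt(6.0 - r * r) - r * r) * r * dr
    return 2.0 * math.pi * total

print(volume_estimate())                                    # approximately 7.74
print(2.0 * math.pi / 3.0 * (6.0 * math.sqrt(6.0) - 11.0))  # exact: 7.7430...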
In vector calculus, the divergence theorem, also known as Gauss's theorem or Ostrogradsky's theorem, is a result that relates the flow (that is, flux) of a vector field through a surface to the behavior of the vector field inside the surface.
More precisely, the divergence theorem states that the outward flux of a vector field through a closed surface is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that the sum of all sources minus the sum of all sinks gives the net flow out of a region.
In physics and engineering, the divergence theorem is usually applied in three dimensions. However, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus. In two dimensions, it is equivalent to Green's theorem.
If a fluid is flowing in some area, and we wish to know how much fluid flows out of a certain region within that area, then we need to add up the sources inside the region and subtract the sinks. The fluid flow is represented by a vector field, and the vector field's divergence at a given point describes the strength of the source or sink there. So, integrating the field's divergence over the interior of the region should equal the integral of the vector field over the region's boundary. The divergence theorem says that this is true.
The divergence theorem is thus a conservation law which states that the volume total of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary.
Mathematical statement
Suppose V is a subset of Rn (in the case of n = 3, V represents a volume in 3D space) which is compact and has a piecewise smooth boundary S (also indicated with ∂V = S). If F is a continuously differentiable vector field defined on a neighborhood of V, then we have
∭_V (∇ · F) dV = ∯_S (F · n) dS.
The left side is a volume integral over the volume V, the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals, and n is the outward pointing unit normal field of the boundary ∂V. (dS may be used as a shorthand for n dS.) By the symbol within the two integrals it is stressed once more that ∂V is a closed surface. In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volume V, and the right-hand side represents the total flow across the boundary ∂V.
- Applying the divergence theorem to the product of a scalar function g and a vector field F, the result is
∯_{∂V} g F · n dS = ∭_V [ g (∇ · F) + F · (∇g) ] dV.
- A special case of this is F = ∇f, in which case the theorem is the basis for Green's identities.
- Applying the divergence theorem to the cross-product of two vector fields F and G, the result is
∯_{∂V} (F × G) · n dS = ∭_V [ G · (∇ × F) - F · (∇ × G) ] dV.
- Applying the divergence theorem to the product of a scalar function, f, and a non-zero constant vector, the following theorem can be proven:
∭_V ∇f dV = ∯_{∂V} f n dS.
- Applying the divergence theorem to the cross-product of a vector field F and a non-zero constant vector, the following theorem can be proven:
∭_V (∇ × F) dV = ∯_{∂V} n × F dS.
Suppose we wish to evaluate the flux integral
∯_S F · n dS,
where S is the unit sphere defined by
x² + y² + z² = 1
and F is the vector field
The direct computation of this integral is quite difficult, but we can simplify the derivation of the result using the divergence theorem:
where W is the unit ball (i.e., the interior and surface of the unit sphere, x² + y² + z² ≤ 1). Since one of the remaining integrands is positive in one hemisphere of W and negative in the other, in an equal and opposite way, its total integral over W is zero, and the same is true for the other one. Thus
the flux reduces to the constant part of ∇ · F multiplied by the volume of the unit ball W, which is 4π/3.
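A numerical check of this example is sketched below. It assumes the vector field is F = (2x, y², z²), so that ∇ · F = 2 + 2y + 2z; this choice is an assumption consistent with the structure of the argument above (a constant term plus terms odd in y and in z) rather than something preserved in the extracted text. With it, both sides come out close to 8π/3 ≈ 8.378.

import math

def F(x, y, z):
    # Assumed field (see note above): F = (2x, y^2, z^2), div F = 2 + 2y + 2z
    return (2.0 * x, y * y, z * z)

def div_F(x, y, z):
    return 2.0 + 2.0 * y + 2.0 * z

def surface_flux(n_theta=400, n_phi=800):
    """Flux of F through the unit sphere; the outward normal at (x, y, z) is (x, y, z)."""
    dt, dp = math.pi / n_theta, 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        st, ct = math.sin(theta), math.cos(theta)
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            x, y, z = st * math.cos(phi), st * math.sin(phi), ct
            fx, fy, fz = F(x, y, z)
            total += (fx * x + fy * y + fz * z) * st * dt * dp  # dA = sin(theta) dtheta dphi
    return total

def volume_integral(n_r=60, n_theta=60, n_phi=120):
    """Integral of div F over the unit ball, in spherical coordinates."""
    dr, dt, dp = 1.0 / n_r, math.pi / n_theta, 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr
        for j in range(n_theta):
            theta = (j + 0.5) * dt
            st, ct = math.sin(theta), math.cos(theta)
            for k in range(n_phi):
                phi = (k + 0.5) * dp
                x, y, z = r * st * math.cos(phi), r * st * math.sin(phi), r * ct
                total += div_F(x, y, z) * r * r * st * dr * dt * dp
    return total

print(surface_flux(), volume_integral(), 8.0 * math.pi / 3.0)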
Differential form and integral form of physical laws
As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples are Gauss's law (in electrostatics), Gauss's law for magnetism, and Gauss's law for gravity.
Continuity equations
Continuity equations offer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. In fluid dynamics, electromagnetism, quantum mechanics, relativity theory, and a number of other fields, there are continuity equations that describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution of sources or sinks of that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux).
Inverse-square laws
Any inverse-square law can instead be written in a Gauss' law-type form (with a differential and integral form, as described above). Two examples are Gauss' law (in electrostatics), which follows from the inverse-square Coulomb's law, and Gauss' law for gravity, which follows from the inverse-square Newton's law of universal gravitation. The derivation of the Gauss' law-type equation from the inverse-square formulation (or vice-versa) is exactly the same in both cases; see either of those articles for details.
The theorem was first discovered by Lagrange in 1762, then later independently rediscovered by Gauss in 1813, by Ostrogradsky, who also gave the first proof of the general theorem, in 1826, by Green in 1828, etc. Subsequently, variations on the divergence theorem are correctly called Ostrogradsky's theorem, but also commonly Gauss's theorem, or Green's theorem.
To verify the planar variant of the divergence theorem for a region R, where
and R is the region bounded by the circle
x² + y² = 1.
The boundary of R is the unit circle, C, which can be represented parametrically by
x = cos(s), y = sin(s), 0 ≤ s ≤ 2π,
where s is the arc length from the point s = 0 to the point P on C. Then a vector equation of C is
C(s) = cos(s) i + sin(s) j.
At a point P on C:
P = (cos(s), sin(s)).
Evaluating both sides with this parametrization (the detailed expressions depend on the particular field F) shows that the boundary integral around C equals the double integral of the divergence over R, which verifies the theorem.
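Because the specific field used in this verification example did not survive extraction, the sketch below runs the same planar check with an assumed field F = (x², y), whose divergence is 2x + 1; the outward flux through the unit circle and the integral of the divergence over the unit disk both come out equal to π.

import math

def F(x, y):
    # Assumed illustrative field (the original example's field was lost): F = (x^2, y)
    return (x * x, y)

def boundary_flux(n=100000):
    """Outward flux of F through the unit circle, parametrized by arc length s."""
    ds = 2.0 * math.pi / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        x, y = math.cos(s), math.sin(s)
        fx, fy = F(x, y)
        total += (fx * x + fy * y) * ds      # the outward unit normal at (x, y) is (x, y)
    return total

def area_integral(n_r=400, n_phi=800):
    """Integral of div F = 2x + 1 over the unit disk, in polar coordinates."""
    dr, dp = 1.0 / n_r, 2.0 * math.pi / n_phi
    total = 0.0
    for i in range(n_r):
        r = (i + 0.5) * dr
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            x = r * math.cos(phi)
            total += (2.0 * x + 1.0) * r * dr * dp
    return total

print(boundary_flux(), area_integral(), math.pi)   # all approximately 3.14159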
Multiple dimensions
One can use the general Stokes' Theorem to equate the n-dimensional volume integral of the divergence of a vector field F over a region U to the (n - 1)-dimensional surface integral of F over the boundary of U:
∫_U (∇ · F) dV = ∮_{∂U} (F · n) dS.
This equation is also known as the Divergence theorem.
When n = 2, this is equivalent to Green's theorem.
When n = 1, it reduces to the Fundamental theorem of calculus.
Tensor fields
Writing the theorem in index notation (with summation over the repeated index i):
∭_V ∂_i F_i dV = ∯_S F_i n_i dS;
suggestively, replacing the vector field F with a rank-n tensor field T, this can be generalized to:
∭_V ∂_i T_{ijk...} dV = ∯_S T_{ijk...} n_i dS,
where on each side, tensor contraction occurs for at least one index. This form of the theorem is still in 3d, each index takes values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4d spacetime in general relativity).
86 | One of the most stupendous tasks in the history of science, started 200 years ago by William Lambton and completed four decades later by George Everest, resulted in the Great Indian Arc of the Meridian. It also established that the Himalayas constituted a mountain range and that Mount Everest was the highest point on the earth.
TWO HUNDRED years ago, on April 10, 1802, the British surveyor Col. William Lambton began an ambitious, audacious and mathematically meticulous scientific odyssey at St. Thomas Mount in Madras (now Chennai). It took four decades to complete and ended in the foothills of the Himalayas. Lambton carefully laid the baseline, which stretched 12 kilometres from St. Thomas Mount to another hillock to the south, for the "measurement of the length of a degree of latitude" along a longitude in the middle of peninsular India.
This 12-km-long horizontal baseline at about sea level grew into what is known as the Great Indian Arc of the Meridian, a gigantic geometric web of 'triangulations' roughly along the 78° longitude across the entire length of the subcontinent, covering a
distance of about 2,400 km in the north-south direction. As a corollary, at the end of this massive and perilous exercise, which consumed "more lives than in most contemporary wars" and involved tomes of calculations and equations more complex than any
in the pre-computer age, it was conclusively proved in 1843 that the Himalayas constituted a mountain range that was higher than the Andes, until then believed to be the highest. It also established the height of the highest point on the earth, what is
now called Mount Everest.
Lambton had originally planned a short arc. It later grew in size and scale to become "one of the most stupendous works in the history of science", one of the greatest human endeavours ever undertaken. It involved, among other difficult aspects of the
relentless journey of scientific discovery, moving across the subcontinent with diverse measuring instruments and other paraphernalia, including the massive 36-inch theodolite, which weighed over half a tonne and had to be carried by as many as 12 men and placed at vantage points - from the tops of temple gopurams to 30-metre-high bamboo structures - from where the crucial measurements of angles were made. The Great Arc became the longest measurement of the earth's surface ever to have been attempted.
William Lambton's genius had conceived the idea of the Great Trigonometrical Survey (GTS) of the country, with the Great Arc providing the skeletal framework for it. But he died in the course of this work at the age of 70, midway through his task, while surveying at a place called Hinganghat in Maharashtra, where his uncared-for grave lies, today no more than a flat, weathered and battered piece of stone. The Great Arc was completed by George Everest, after whom the highest point on the earth is named.
The measurement of the Great Arc resulted in new values for the curvature of the earth, throwing fresh light on the then-longstanding debate on the exact shape of the planet. It was actually these unsettled fundamental scientific issues that drove
Lambton to undertake this mammoth task rather than the more direct applications - especially from the perspective of the expanding British empire - of the countrywide survey in cartography and production of accurate topographic and revenue maps.
The GTS was not the first survey of the country. Lord Robert Clive of the East India Company had raised a mapping agency, which later became the Survey of India, and commissioned the Bengal Surveys as early as 1767 under Major James Rennel. This is
regarded as the beginning of systematic topographical mapping in India and the founding of one of the oldest survey and mapping agencies in the world. In 1799, after Mysore was brought under the control of the British, the Third Mysore Survey was ordered under Col. Lambton, to define and defend the newly acquired territory.
Lambton was part engineer, part mathematician and astronomer, but his passion was geodesy, the study of the earth's shape. He had read advanced books on mathematics and astronomy while held prisoner during the American War of Independence.
After his release he was deployed to assist in the survey of the New Brunswick area, which was carried out to delineate the boundary between British Canada and the United States. He later joined the British Army and was posted to India in 1796, when he
took part in the Fourth Anglo-Mysore War.
Sir George Everest. (Photo: Survey of India)
At that time, detailed knowledge of the earth's size and shape did not exist. Although the fact that the earth was not a true sphere but more curved at the Equator and flatter at the poles had been established about 70 years earlier, a precise
determination of exactly 'how much flatter' or more curved remained an outstanding issue for the scientific community. The length of the arc of a degree of latitude is a measure of the curvature of the earth at that point. In the 1730s, two expeditions
had been sent out from France, one to the Equator in what is now Ecuador and the other to the Arctic Circle in Lapland, to measure the length of a degree of latitude by 'triangulating' north and south from a carefully measured baseline so as to cover a
short arc of about 300 km. By determining the exact positions (in terms of latitude and longitude) of the arc's extremities by astronomical observations of fixed stars, the value of one degree of latitude was obtained. The length of a degree in Ecuador
had turned out to be a kilometre shorter than that in Lapland, indicating that the parallels at the Equator were bunched closer together than at the poles; the earth was thus determined to be an 'oblate' spheroid.
THERE still remained the question of how oblate the spheroid was and whether this flattening was of a regular or consistent nature. Exercises in triangulations were being undertaken in France and in Britain and attempts were made to link the two series
of triangulations across the English Channel. Lambton now had the idea of doing the same thing in the tropical latitudes, roughly midway between the Equator and northern Europe. In the East India Company's need to define, defend and exploit its newly
conquered areas, Lambton saw expedient means of answering the outstanding scientific questions in geodesy.
By playing down the elements of scientific research and stressing the practical value of "ascertaining the correct positions of the principal geographical points (within Mysore) upon correct mathematical principles", Lambton proposed in November 1799 a
scheme for a 'Mathematical and Geographical Survey' based on a north-south series of triangulations which could later be 'continued to an almost unlimited extent in every other direction' and could yield such information as the precise width of the
peninsula. He essentially proposed a framework - the primary triangulation in the north-south direction - the sides of whose triangles could serve as the bases for different local surveys and networks of secondary triangulations that would cover the
entire country, instead of local surveys having to measure their own baselines. The principal geographical points of his series could also correct the often doubtful orientations of existing local surveys.
The technique of 'triangulation' involves identifying three mutually visible reference points, usually prominent hills or buildings, as the corners of a triangle. Knowing the exact distance between two of these points, and then measuring the angles made
at each by the respective lines of sight to the third reference point, the distance and position of the third point can be deduced by simple trigonometry. One of the newly determined sides of this triangle then serves as the baseline for a second triangle that includes a new reference point, whose position is, in turn, established by the same procedure. Another triangle is thus completed, which serves as the basis for yet another new triangle, and so on. A chain of triangles results.
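The trigonometry involved is simple enough to sketch. The short Python fragment below is purely illustrative - the baseline and angles are invented numbers, not survey values - and applies the law of sines to recover the two unknown sides of such a triangle.

import math

def triangulate(baseline_km, angle_a_deg, angle_b_deg):
    # Angles observed at the two ends of the known baseline A-B,
    # each measured between the baseline and the line of sight to the new point C.
    alpha = math.radians(angle_a_deg)
    beta = math.radians(angle_b_deg)
    gamma = math.pi - alpha - beta                          # third angle of the plane triangle
    ac = baseline_km * math.sin(beta) / math.sin(gamma)     # law of sines
    bc = baseline_km * math.sin(alpha) / math.sin(gamma)
    return ac, bc

# Invented values: a 12 km baseline and two observed angles.
ac, bc = triangulate(12.0, 62.0, 55.0)
print(f"A-C = {ac:.2f} km, B-C = {bc:.2f} km")
# Either newly determined side can now serve as the baseline of the next triangle in the chain.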
In early 1800, Lambton's proposal for the Third Mysore Survey was approved. For what was described as the 'trigonometrical survey of the peninsula', Lambton had first to determine a working value for the length of a degree of latitude in mid-peninsula
for which he planned a short arc in the vicinity of Madras. But he could begin only in April 1802: it took two years for a suitable 'theodolite', required for the crucial measurements of the angles of his primary triangles, to arrive from Britain.
One of the Great Arc's towers, minus its topmost railings, at Begarazpur, north of Delhi. (Photo: Survey of India)
A theodolite, then a key instrument of a surveyor, is essentially a telescope mounted on an elaborate structure so that it pivots both vertically about an upright ring or 'circle' and horizontally about a larger horizontal circle. This enables the angle
of elevation as well as angles in a plane to be read off the instrument's calibrated circles. Plummets, spirit levels and adjustment screws in the instrument allow its alignment and levelling, and micrometers and microscopes enable the calibrations to be read. Further, the whole thing has to be rock steady, and its engineering, optics and calibration have to be of the highest precision. When Lambton undertook his mission, there were perhaps two or three instruments in the world of sufficient accuracy and sophistication. Lambton could locate one in Britain, built by the famed manufacturer William Cary, who had built a similar instrument for an elaborate British survey undertaken by William Roy.
For baseline measurement, a steel measurement chain of high quality was located in Calcutta (now Kolkata). Along with a large Zenith Sector (for astronomical observation) and other instruments, the chain was apparently intended for the Emperor of China
but for political and other reasons its journey was cut short in Calcutta. Lambton bought the instruments. The chain consisted of 40 bars of blistered steel, each two and a half feet long and linked to the next with a finely wrought brass hinge. The
whole contraption could be folded up to fit neatly into a teakwood chest.
So, on April 10, 1802, Lambton laid out the first baseline which would also serve as the sheet anchor for the Great Trigonometrical Survey of India. To measure the full 7.5 miles (12 km) of the baseline with his 100-foot chain, Lambton had to make 400
individual measurements with the chain. At each measurement, the fully extended chain was supported and kept taut inside five wooden coffers, each 20 ft long. These were propped up with tripods fitted with elevating screws for levelling. Each coffer was
fitted with a thermometer to record the temperature at the time of measurement so that the expansion of the bars due to temperature variations could be accounted for. This was done by comparing with a similar chain obtained from Britain but kept in a
cool vault. The measurement of the first baseline took 57 days.
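For a sense of how the temperature correction worked in practice, here is a minimal sketch in Python. The expansion coefficient, calibration temperature and field temperature are assumed, illustrative values (a typical modern figure for steel, not Lambton's own calibration data).

# Assumed values for illustration only; these are not Lambton's figures.
ALPHA_STEEL_PER_F = 6.5e-6     # fractional expansion of steel per degree Fahrenheit
CHAIN_LENGTH_FT = 100.0        # length of the chain at the calibration temperature
CALIBRATION_TEMP_F = 62.0

def span_of_one_laying(field_temp_f):
    # A heated chain is slightly longer, so one laying covers slightly
    # more ground than its nominal 100 ft.
    return CHAIN_LENGTH_FT * (1.0 + ALPHA_STEEL_PER_F * (field_temp_f - CALIBRATION_TEMP_F))

# 400 layings of the chain, all at an illustrative field temperature of 95 F:
total_ft = sum(span_of_one_laying(95.0) for _ in range(400))
print(f"corrected baseline: {total_ft:.2f} ft")
# Ignoring the correction at this temperature would misstate the baseline by
# about 8.6 ft over 400 layings - far coarser than the inch-level agreement
# Lambton later achieved between his baselines.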
In late September, Lambton began his triangulation. He took angles from his baseline to pre-selected points to the south and the west with his 36-inch Great Theodolite. He completed the short meridional arc from Madras to Cuddalore, observing latitudes
at both ends, to determine the length of a degree that was essential for his work. This short southern series of triangles down the coast took about a year. In October 1804, Lambton headed westward and inland and carried his triangles towards Bangalore.
The original plan of using the small hillocks which dotted the landscape (known as doorg in the local language) as reference points for the westward chain of triangles posed unforeseen problems - many of them were guarded by local chieftains who refused access to their tops. This forced Lambton to realign the chain's northern edge, but once further west of Bangalore he apparently did not face this problem and the progression, in fact, yielded the largest triangle yet measured. Lambton's first
longitudinal arc right across the peninsula (as against the north-south latitudinal arc) yielded a surprising discovery. The measurement revealed that the peninsula had apparently shrunk. Against the value of about 400 miles of the then current maps,
which were based on coastal surveys and astronomical references, Lambton's triangles showed that it was only 360 miles from Madras to Mangalore.
With the onset of the monsoon in 1805, Lambton returned to Bangalore to embark on his original mission - the latitudinal measurement or the Great Arc of the Meridian. With the Bangalore base as its starting point, the triangles of Lambton's Great Arc
extended north about 100 miles up to where the independent territory of the Nizam of Hyderabad lay, and then south towards Cape Comorin (Kanyakumari), the tip of the subcontinent.
Lambton's Great Theodolite weighed half a tonne. Severely damaged on various occasions, it was rebuilt and is now in the Survey of India's headquarters in Dehra Dun. (Photo: Survey of India)
The next baseline was measured in Coimbatore in 1806. To get an idea of the accuracy of Lambton's measurements, it may be mentioned here that, in its length of over six miles, the difference between the triangulated measurement carried from Bangalore
along the Great Arc and the actual measurement at Coimbatore was only 7.6 inches. From Coimbatore, the big stride along the Great Arc to Cape Comorin - with an equally accurate baseline check at Tirunelveli - was completed successfully in 1809.
By 1815, Lambton had covered the whole peninsula south of the river Kistna (Krishna), which resulted in the measurement of the longest geodetic arc closest to the Equator, from Cape Comorin to the 18th Parallel.
But this journey was marred by an accident at Thanjavur in late 1808, when Lambton moved away from the Great Arc to carry out a parallel triangulation further down the east coast. While trying to hoist the half-tonne theodolite atop the 217-ft gopuram of the Brihadeeshwarar temple for his angle measurements, the guy rope that was lifting the theodolite snapped and the instrument crashed down into a mangled mass of steel and iron. Lambton was not one who would give up so easily, nor would he wait for a couple
of years for a new instrument to arrive - though he, accepting full responsibility for the mishap, had ordered a new one at his own expense. According to an account of the Great Arc, Lambton shut himself up in a survey tent for nearly six weeks and
repaired it himself with help from the military workshop at Tiruchi. Lambton, more than anyone else, was aware of the possible errors in the repaired instrument, but checks on measurements revealed that discrepancies were marginal.
Repeating and reviewing measurements was an integral part of the Great Arc measurement. For example, the length of a degree of latitude as calculated from the short arc from Madras in 1802 was soon revised when the Great Arc produced a more refined
value. Correspondingly, the earliest triangles based on the Madras measurement had also to be revised. As the arc got longer, the parameters associated with the curvature of the earth were recalculated. Isaac Newton's value for the compression of the
earth at the poles was 1/230. It was revised down to 1/304 in 1812 and by Lambton himself to 1/310 in 1818.
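To see what these competing figures imply, the compression (flattening) f relates the equatorial radius a and the polar radius b by f = (a - b)/a. The sketch below is a rough illustration only, using the modern equatorial radius of about 6,378 km rather than any value available to Lambton.

EQUATORIAL_RADIUS_KM = 6378.1   # modern value, assumed here purely for scale

def polar_radius_km(flattening):
    # f = (a - b) / a  =>  b = a * (1 - f)
    return EQUATORIAL_RADIUS_KM * (1.0 - flattening)

for label, f in [("Newton", 1 / 230), ("1812 revision", 1 / 304), ("Lambton, 1818", 1 / 310)]:
    b = polar_radius_km(f)
    print(f"{label}: polar radius ~ {b:.1f} km, "
          f"equatorial bulge ~ {EQUATORIAL_RADIUS_KM - b:.1f} km")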
In 1818, Lt. George Everest joined Lambton, and with his help the Nizam was convinced to permit Lambton's passage further north. As the Great Arc progressed beyond the territories under the control of the Madras government, Lambton's survey was
transferred to the supreme government in Calcutta, and the survey, until then known by various names, was officially named the Great Trigonometrical Survey (GTS) of India. In 1818 it was hoped that the survey would continue north, east and west, at least until
lateral triangulations could link Calcutta and Bombay.
Lambton planned to continue on the same 78° meridian from Hyderabad to Nagpur in central India. But he died at the age of 70 in 1823, before he could fulfil his dream. Of all his contributions, the greatest is the measurement of the meridional arcs, the results of which were employed well into the 20th century in all investigations of the figure of the earth.
Lambton's mantle fell on Everest's shoulders. Everest, however, wanted to improve upon Lambton's work by basing the surveys on a rigid reference framework. This raised the problem of finding a suitable reference spheroid to fit the shape of the earth's
'gravitational equipotential surface' for India and the adjacent countries. From Kalianpur in Madhya Pradesh, more or less the centre of India, Everest conceived covering the length and breadth of the country by a 'gridiron' of triangular chains, as
against a network of triangulations as conceived by Lambton. He redesigned the 36-inch Great Theodolite. He replaced the steel chain with 10-ft compensation bars. He completed the Great Arc up to Banog, in the first Himalayan range, near Mussoorie, a
length of 2,400 km.
Everest made the government agree to a revision of Lambton's measurements based on more accurate instruments and procedures laid down by him. Later, in 1830, he was appointed the Surveyor General of India but, much against the wishes of the government,
which wanted the Survey to focus on the expanding Empire's infrastructural needs, Everest continued to devote much time to the Great Arc. This task was completed by him (together with Lambton's associates) in 1841, and he took two
more years for computations and adjustments. The work and norms laid down by Everest have stood the test of time. The Everest Spheroid, the geodetic datum of reference for the Indian region, evolved by him in 1830, is still used in India (albeit with
revisions), Pakistan, Nepal, Myanmar, Sri Lanka, Bhutan and other South Asian countries. Based on his concept, the gridiron network now covers all of India and forms a solid foundation for accurate surveys and mapping for defence and other developmental purposes.
It was with the help of the gridiron network that the highest peak of the world was observed and identified in 1852 and its height declared as 29,002 ft (8,840 m). After fresh observations and computations, the Survey of India declared its height in
1954 as 8,848 m. In 1975, China put a metallic beacon on Mt. Everest and observed it from nine stations, and the height was declared to be 8,848.13 plus or minus 0.35 m.
The significance of Lambton's and Everest's work can be gauged from the fact that they worked at a time when there was no modern communication network. The country was marked by jungles, wild animals, hostile weather conditions, mighty rivers, swamps and floods, and infectious diseases, particularly malaria. The average length of a side of a triangle was 31 miles, the maximum being 62 miles. One can hardly imagine how such long-distance measurements were planned and laid out on the ground, and how the lines of sight were cleared of trees and sometimes even houses and other structures. If Lambton went on relentlessly with his journey despite intermittent fever and other ailments like dysentery, Everest did all this despite poor health.
TODAY, the Survey of India in its post-Independence phase has consolidated the work of Lambton-Everest, built upon it and mapped the entire country on 1:50,000 scale and about 40 per cent of it on 1:25,000 scale. With advances in instrumentation,
measurement and the advent of digital technology, the Survey of India is at present in the process of digitising the existing topographic data on 1:50,000 scale.
Alongside, an effort is being made to enable free public access to topographic data - analogue as well as digital - without the prevalent security and defence-related restrictions on access. For this a mammoth exercise has been undertaken to map the
geographic control points on a different geodetic datum called WGS-84 instead of the Everest Spheroid, using satellite-based Global Positioning System (GPS) surveying techniques. While some digital topographic maps of southern India on 1:250,000 scale
on WGS-84 should become available in the next three to four months, the mapping of the WGS-84 control points is expected to take three to four years and digitised topographic maps of the entire country are expected to be available for public access in
seven to eight years.
This is the mandate of the newly conceptualised National Spatial Data Infrastructure (NSDI), a joint effort of the Department of Science and Technology (DST) under which the Survey of India functions, and the Departments of Space and Defence. The NSDI
is expected to be launched formally on August 15, 2002. Since all this was made possible by the pioneering efforts of Lambton and Everest in their gigantic work on the Great Arc, the Survey of India has rightly launched a year-long celebration
of 200 years of the Great Arc, which got under way in Delhi with a map-based treasure hunt for schoolchildren. This is to be followed by year-long bicentennial celebrations, which will include international conferences and workshops. | http://www.hindu.com/fline/fl1909/19090660.htm | 13 |
60 | Chapter 2: The locomotor system
The skeleton consists of bones and cartilages. A bone is composed of several tissues, predominantly a specialized connective tissue that is, itself, called “bone.” Bones provide a framework of levers, they protect organs such as the brain and heart, their marrow forms certain blood cells, and they store and exchange calcium and phosphate ions.
The term osteology, meaning the study of bones, is derived from the Greek word osteon, meaning "bone." The Latin term os is used in names of specific bones, e.g., os coxae, or hip bone; the adjective is osseous.
Cartilage is a tough, resilient connective tissue composed of cells and fibers embedded in a firm, gel-like, intercellular matrix. Cartilage is an integral part of many bones, and some skeletal elements are entirely cartilaginous.
The skeleton includes the axial skeleton (bones of the head, neck, and trunk) and the appendicular skeleton (bones of the limbs). Bone may be present in locations other than in the bony skeleton. It often replaces the hyaline cartilage in parts of the laryngeal cartilages. Furthermore, it is sometimes formed in soft tissues, such as scars. Bone that forms where it is not normally present is called heterotopic bone.
Bones may be classified according to shape: long, short, flat, and irregular.
Long bones (fig. 2-1).
Long bones are those in which the length exceeds the breadth and thickness. They include the clavicle, humerus, radius, ulna, femur, tibia, and fibula, and also the metacarpals, metatarsals, and phalanges.
Each long bone has a shaft and two ends or extremities, which are usually articular. The shaft is also known as the diaphysis. The ends of a long bone are usually wider than the shaft, and are known as epiphyses. The epiphyses of a growing bone are either entirely cartilaginous or, if epiphysial ossification has begun, are separated from the shaft by cartilaginous epiphysial plates (discs). Clinically, the term epiphysis usually means bony epiphysis. The part of the shaft adjacent to an epiphysial disc contains the growth zone and newly formed bone and is called the metaphysis. The bony tissue of the metaphysis and of the epiphysis is continuous in the adult, with disappearance of the cartilaginous epiphysial plate.
The shaft of a long bone (diaphysis) is a tube of compact bone ("compacta"), the cavity of which is known as a medullary (marrow) cavity. The cavity contains either red (blood-forming) or yellow (fatty) marrow, or combinations of both. The cavity of the epiphysis and metaphysis contains irregular, anastomosing bars or trabeculae, which form what is known as spongy or cancellous bone. The spaces between the trabeculae are filled with marrow. The bone on the articular surfaces of the ends is covered by cartilage, which is usually hyaline.
The shaft of a long bone is surrounded by a connective tissue sheath, the periosteum. Periosteum is composed of a tough, outer fibrous layer, which acts as a limiting membrane, and an inner, more cellular osteogenic layer. The inner surface of compact bone is lined by a thin, cellular layer, the endosteum. At the ends of the bone the periosteum is continuous with the joint capsule, but it does not cover the articular cartilage. Periosteum serves for the attachment of muscles and tendons to bone.
Short bones occur in the hands and feet and consist of spongy bone and marrow enclosed by a thin layer of compact bone. They are surrounded by periosteum, except on their articular surfaces.
Sesamoid bones are a type of short bone embedded within tendons or joint capsules. They occur mainly in the hands and feet, although the patella represents a particularly large example of a sesamoid bone. They vary in size and number. Some clearly serve to alter the angle of pull of a tendon. Others, however, are so small that they are of scant functional importance.
Accessory, or supernumerary, bones are bones that are not regularly present. They occur chiefly in the hands and feet. They include some sesamoid bones and certain ununited epiphyses in the adult. They are of forensic importance in that, when seen in radiograms, they may be mistaken for fractures. Callus, however, is absent, the bones are smooth, and they are often present bilaterally.
Flat bones include the ribs, sternum, scapulae, lateral part of the clavicle, and many bones of the skull. They consist of two layers of compact bone with intervening spongy bone and marrow. The intervening spongy layer in the bones of the vault of the skull is termed diploe: it contains many venous channels. Some bones, such as the lacrimal and parts of the scapula, are so thin that they consist of only a thin layer of compact bone.
Irregular bones are those that do not readily fit into other groups. They include many of the skull bones, the vertebrae, and the hip bones.
Contours and markings
The shafts of long bones usually have three surfaces, separated from one another by three borders. The articular surfaces are smooth, even after articular cartilage is removed, as in a dried bone. A projecting articular process is often referred to as a head, its narrowed attachment to the rest of the bone as the neck. The remainder is the body or, in a long bone, the shaft. A condyle (knuckle) is a protruding mass that carries an articular surface. A ramus is a broad arm or process that projects from the main part or body of the bone.
Other prominences are called processes, trochanters, tuberosities, protuberances, tubercles, and spines. These are frequently the site of attachment of muscles and tendons. Linear prominences are ridges, crests, or lines, and linear depressions are grooves. Other depressions are fossae or foveae (pits). A large cavity in a bone is termed a sinus, a cell, or an antrum. A hole or opening in a bone is a foramen. If it has length, it is a canal, a hiatus, or an aqueduct. Many of these terms (e.g., canal, fossa, foramen, and aqueduct) are not, however, limited to bones and may be descriptors for other anatomical features.
The ends of bones, except for the articular surfaces, contain many foramina for blood vessels. Vascular foramina on the shaft of a long bone are much smaller, except for one or sometimes two large nutrient foramina that lead into oblique canals, which contain vessels that supply the bone marrow. The nutrient canals usually point away from the growing end of the bone and toward the epiphysis that unites first with the shaft. The directions of the vessels are indicated by the following mnemonic: To the elbow I go; from the knee I flee.
The surfaces of bones are commonly roughened and elevated where there are powerful fibrous attachments but smooth where muscle fibers are attached directly.
Blood and nerve supply (fig. 2-1).
The nutrient artery is shown in fig. 2-1, along with periosteal, metaphysial, and epiphysial vessels. In a growing bone, the metaphysial and epiphysial vessels are separated by the cartilaginous epiphysial plate. Both groups of vessels are important for the nutrition of the growth zone, and disturbances of blood supply may result in disturbances in growth. When growth stops and the epiphysial plate disappears, the metaphysial and epiphysial vessels anastomose within the bone.
Many nerve fibers accompany the blood vessels of bone. Most such fibers are vasomotor, but some are sensory, ending in periosteum and in the adventitia of blood vessels. Some of the sensory fibers are pain fibers. Periosteum is especially sensitive to tearing or tension. Fractures are painful, and an anesthetic injected between the broken ends of the bone may give relief. A tumor or infection that enlarges within a bone may be quite painful. Pain arising in a bone may be felt locally, or it may spread or be referred. For example, pain arising in the shaft of the femur may be felt diffusely in the thigh or knee.
Before birth, the medullary cavities of bones, as well as the spaces between trabeculae, are filled with red marrow, which produces red blood corpuscles and certain white blood cells (granulocytes). From infancy onward there is both a progressive diminution in the amount of blood cell-forming marrow and a progressive increase in the amount of fat (yellow marrow).
In the adult, red marrow is usually present in the ribs, vertebrae, sternum, and hip bones. This latter site is the common location of bone marrow biopsy, in which a small amount of red marrow is removed for cytopathologic analysis. The radius, ulna, tibia, and fibula contain fatty marrow in their shafts and epiphyses. The femur and humerus usually contain a small amount of red marrow in the upper parts of the shafts, and small patches may be present in their proximal epiphyses. The tarsal and carpal bones generally contain only fatty marrow. Loss of blood may be followed by an increase in the amount of red marrow as more blood cells are formed.
All bones begin as mesenchymal proliferations that appear early in the embryonic period. In membrane bones (comprising the clavicle, mandible, and certain skull bones), the cells differentiate into osteoblasts that lay down an organic matrix called osteoid. Bone salts are then deposited in this matrix. Some osteoblasts are trapped in the matrix and become osteocytes. Others continue to divide and form more osteoblasts on the surface of the bone. Bone grows only by apposition, that is, by the laying down of new bone on free surfaces.
Most bones, however, develop as cartilage bones. The mesenchymal proliferations become chondrified as the cells lay down cartilage matrix and form hyaline cartilages that have the shapes of the future bones. These cartilages are then replaced by bone, as illustrated in figure 2-2. There is usually more than one ossification center in each bone, and a cartilaginous plate (epiphyseal plate) is the site of lengthening of the bone. The centers of ossification at the lengthening portions of the long bone are termed the epiphyses, while the centers on various other processes of the bone are termed apophyses.
Skeletal development involves three inter-related but dissociable components: increase in size (growth), increase in maturity, and aging. Skeletal maturation is "the metamorphosis of the cartilaginous and membranous skeleton of the foetus to the fully ossified bones of the adult". (Acheson). Skeletal status, however, does not necessarily correspond with height, weight, or age. In fact, the maturational changes in the skeleton are intimately related to those of the reproductive system. These in turn are directly responsible for most of the externally discernible changes on which the estimation of general bodily maturity is usually based. The skeleton of a healthy child develops as a unit, and the various bones tend to keep pace with one another. Hence, radiographic examination of a limited portion of the body is believed by some workers to suffice for an estimation of the entire skeleton. The hand is the portion most frequently examined: "As the hand grows, so grows the entire skeleton," it is sometimes stated.
The assessment of skeletal maturity is important in determining whether an individual child is advanced or retarded skeletally and, therefore, in diagnosing endocrine and nutritional disorders. Skeletal status is frequently expressed in terms of skeletal age. This involves the comparison of radiograms of certain areas with standards for those areas; the skeletal age assigned is that of the standard that corresponds most closely. Detailed standards have been published for the normal postnatal development of the hand, knee, and foot. Tables showing the times of appearance of the postnatal ossific centers in the limbs are provided for the upper limb and the lower limb.
Skeletal Maturation Periods.
The following arbitrary periods are convenient:
- Embryonic period proper. This comprises the first eight postovulatory weeks of development. The clavicle, mandible, maxilla, humerus, radius, ulna, femur, and tibia commence to ossify during the last two weeks of this period.
- Fetal period. This begins at eight postovulatory weeks, when the crown-rump length has reached about 30 mm. The following elements commence to ossify early in the fetal period or sometimes late in the embryonic period: scapula, ilium, fibula, distal phalanges of the hand, and certain cranial bones (e.g., the frontal). The following begin to ossify during the first half of intra-uterine life: most cranial bones and most diaphyses (ribs, metacarpals, metatarsals, phalanges), calcaneus sometimes, ischium, pubis, some segments of the sternum, neural arches, and vertebral bodies. The following commence to ossify shortly before birth: calcaneus, talus, and cuboid; usually the distal end of the femur and the proximal end of the tibia; sometimes the coracoid process, the head of the humerus, and the capitate and hamate; rarely the head of the femur and the lateral cuneiform.
- Childhood. The period from birth to puberty includes infancy (i.e., the first one or two postnatal years). Most epiphyses in the limbs, together with the carpals, tarsals, and sesamoids, begin to ossify during childhood. Ossification centers generally appear one or two years earlier in girls than in boys. Furthermore, those epiphyses that appear first in a skeletal element usually are the last to unite with the diaphysis. They are located at the so-called growing ends (e.g., shoulder, wrist, knee).
- Adolescence. This includes puberty and the period from puberty to adulthood. Puberty usually occurs at 13 ± 2 years of age in girls, and two years later in boys. Most of the secondary centers for the vertebrae, ribs, clavicle, scapula, and hip bone begin to ossify during adolescence. The fusions between epiphysial centers and diaphyses occur usually during the second and third decades. These fusions usually take place one or two years earlier in girls than in boys. The closure of epiphysial lines is under hormonal control.
- Adulthood. The humerus serves as a skeletal criterion for the transitions into adolescence and into adulthood, in that its distal epiphysis is the first of those of the long bones to unite, and its proximal epiphysis is the last (at age 19 or later). The center for the iliac crest fuses in early adulthood (age 21 to 23). The sutures of the vault of the skull commence to close at about the same time (from age 22 onward).
Cartilage is a tough, resilient connective tissue composed of cells and fibers embedded in a firm, gel-like intercellular matrix.
A skeletal element that is mainly or entirely cartilaginous is surrounded by a connective tissue membrane, the perichondrium, the structure of which is similar to periosteum. Cartilage grows by apposition, that is, by the laying down of new cartilage on the surface of the old. The new cartilage is formed by chondroblasts derived from the deeper cells of the perichondrium. Cartilage also grows interstitially, that is, by an increase in the size and number of its existing cells and by an increase in the amount of intercellular matrix. Adult cartilage grows slowly, and repair or regeneration after a severe injury is inadequate. Adult cartilage lacks nerves, and it usually lacks blood vessels.
Cartilage is classified into three types: hyaline, fibrous, and elastic.
- Hyaline Cartilage. This is so named because it has a glassy, translucent appearance resulting from the character of its matrix. The cartilaginous models of bones in the embryo consist of hyaline cartilage, as do the epiphysial plates. Most articular cartilages, the costal cartilages, the cartilages of the trachea and bronchi, and most of the cartilages of the nose and larynx are formed of hyaline cartilage. Nonarticular hyaline cartilage has a tendency to calcify and to be replaced by bone.
- Fibrocartilage. Bundles of collagenous fibers are the prominent constituent of the matrix of fibrocartilage. Fibrocartilage is present in certain cartilaginous joints, and it forms articular cartilage in a few joints, for example, the temporomandibular.
- Elastic Cartilage. In this tissue, the fibers in the matrix are elastic, and such cartilage rarely if ever calcifies with advancing age. Elastic cartilage is present in the auricle and the auditory tube, and it forms some of the cartilages of the larynx.
Enlow, D. H., Principles of Bone Remodeling, Thomas, Springfield, Illinois, 1963. An excellent review with original observations.
Frazer's Anatomy of the Human Skeleton, 6th ed., rev. by A. S. Breathnach, Churchill, London, 1965. A detailed synthesis of skeletal and muscular anatomy arranged regionally.
Vaughan, J. M., The Physiology of Bone, Clarendon Press, Oxford, 1970. An excellent account of bone as a tissue and of its role in mineral homeostasis.
A joint or articulation is "the connexion subsisting in the skeleton between any of its rigid component parts, whether bones or cartilages" (Bryce). Arthrology means the study of joints, and arthritis refers to their inflammation.
Joints may be classified into three main types: fibrous, cartilaginous, and synovial.
The bones of a fibrous joint (synarthrosis) are united by fibrous tissue. There are two types: sutures and syndesmoses. With few exceptions, little if any movement occurs at either type. The joint between a tooth and the bone of its socket is termed a gomphosis and is sometimes classed as a third type of fibrous joint.
Sutures. In the sutures of the skull, the bones are connected by several fibrous layers. The mechanisms of growth at these joints (still in dispute) are important in accommodating the growth of the brain.
Syndesmoses. A syndesmosis is a fibrous joint in which the intervening connective tissue is considerably greater in amount than in a suture. Examples are the tibiofibular syndesmosis and the tympanostapedial syndesmosis.
The bones of cartilaginous joints are united either by hyaline cartilage or by fibrocartilage.
Hyaline Cartilage Joints. This type (synchondrosis) is a temporary union. The hyaline cartilage that joins the bones is a persistent part of the embryonic cartilaginous skeleton and as such serves as a growth zone for one or both of the bones that it joins. Most hyaline cartilage joints are obliterated, that is, replaced by bone, when growth ceases. Examples include epiphysial plates and the sphenooccipital synchondrosis.
Fibrocartilaginous Joints. In this type (amphiarthrosis), the skeletal elements are united by fibrocartilage during some phase of their existence. The fibrocartilage is usually separated from the bones by thin plates of hyaline cartilage. Fibrocartilaginous joints include the pubic symphysis and the intervertebral discs between the bodies of the vertebrae.
Synovia is the fluid present in certain joints, which are consequently termed synovial. Similar fluid is present in bursae and in synovial tendon sheaths.
Synovial (diarthrodial) joints possess a cavity and are specialized to permit more or less free movement. Their chief characteristics (fig. 2-3) are as follows:
The articular surfaces of the bones are covered with cartilage, which is usually hyaline in type. The bones are united by a joint capsule and ligaments. The joint capsule consists of an outer, fibrous layer, with a vascular, connective tissue lining its inner surface. This is termed the synovial membrane, which produces the synovial fluid (synovia) that fills the joint cavity and lubricates the joint. The joint cavity is sometimes partially or completely subdivided by fibrous or fibrocartilaginous discs or menisci.
Synovial joints may be classified according to axes of movement, assuming the existence of three mutually perpendicular axes. A joint that has but one axis of rotation, such as a hinge joint or pivot joint, is said to have one degree of freedom. Ellipsoidal and saddle joints have two degrees of freedom. Each can be flexed or extended, abducted or adducted, but not rotated (at least not independently). A ball-and-socket joint has three degrees of freedom.
Synovial joints may also be classified according to the shapes of the articular surfaces of the constituent bones. The types of synovial joints are plane, hinge and pivot (uniaxial), ellipsoidal and saddle (biaxial), condylar (modified biaxial), and ball-and-socket (triaxial). These shapes determine the type of movement and are partly responsible for determining the range of movement.
- Plane Joint. The articular surfaces of a plane joint permit gliding or slipping in any direction, or the twisting of one bone on the other.
- Hinge Joint, or Ginglymus. A hinge joint is uniaxial and permits movement in but one plane, e.g., flexion and extension at an interphalangeal joint.
- Pivot, or Trochoid, Joint. This type, of which the proximal radio-ulnar joint is an example, is uniaxial, but the axis is vertical, and one bone pivots within a bony or an osseoligamentous ring.
- Ellipsoidal Joint. In this type, which resembles a ball-and-socket joint, the articulating surfaces are much longer in one direction than in the direction at right angles. The circumference of the joint thus resembles an ellipse. It is biaxial, and the radiocarpal joint is an example.
- Saddle, or Sellar, Joint. This type is shaped like a saddle; an example is the carpometacarpal joint of the thumb. It is biaxial.
- Condylar Joint. Each of the two articular surfaces is called a condyle. Although resembling a hinge joint, a condylar joint (e.g., the knee) permits several kinds of movements.
- Ball-and-Socket, or Spheroidal, Joint. A spheroidal surface of one bone moves within a "socket" of the other bone about three axes, e.g., as in the shoulder and hip joints. Flexion, extension, adduction, abduction, and rotation can occur, as well as a combination of these movements termed circumduction. In circumduction, the limb is swung so that it describes the side of a cone, the apex of which is the center of the "ball."
Active Movements. Usually one speaks of movement of a part or of movement at a joint; thus, flexion of the forearm or flexion at the elbow. Three types of active movements occur at synovial joints: (1) gliding or slipping movements, (2) angular movements about a horizontal or side-to-side axis (flexion and extension) or about an anteroposterior axis (abduction and adduction), and (3) rotary movements about a longitudinal axis (medial and lateral rotation). Whether one, several, or all types of movement occur at a particular joint depends upon the shape and ligamentous arrangement of that joint.
The range of movement at joints is limited by (1) the muscles, (2) the ligaments and capsule, (3) the shapes of the articular surfaces, and (4) the opposition of soft parts, such as the meeting of the front of the forearm and arm during full flexion at the elbow. The range of motion varies greatly in different individuals. In trained acrobats or gymnasts, the range of joint movement may be extraordinary.
Passive and Accessory Movements. Passive movements are produced by an external force, such as gravity or an examiner. For example, the examiner holds the subject's wrist so as to immobilize it and can then flex, extend, adduct, and abduct the subject's hand at the wrist, movements that the subject can normally carry out actively.
By careful manipulation, the examiner can also produce a slight degree of gliding and rotation at the wrist, movements that the subject cannot actively generate. These are called accessory movements (often classified with passive movements), and are defined as movements for which the muscular arrangements are not suitable, but which can be brought about by manipulation.
The production of passive and accessory movements is of value in testing and in diagnosing muscle and joint disorders.
Structure and function.
The mechanical analysis of joints is very complicated, and articular movements involve spherical as well as plane geometry.
The lubricating mechanisms of synovial joints are such that the effects of friction on articular cartilage are minimized. This is brought about by the nature of the lubricating fluid (viscous synovial fluid), by the nature of the cartilaginous bearing surfaces that adsorb and absorb synovial fluid, and by a variety of mechanisms that permit a replaceable fluid rather than an irreplaceable bearing to reduce friction.
- Synovial Membrane. Synovial membrane is a vascular connective tissue that lines the inner surface of the capsule but does not cover articular cartilage. Synovial membrane differs from other connective tissues in that it produces a ground substance that is a fluid rather than a gel. The most characteristic structural feature of synovial membrane is a capillary network adjacent to the joint cavity. A variable number of villi, folds, and fat pads project into the joint cavity from the synovial membrane.
- Synovial fluid is formed by the synovial membrane. It is a sticky, viscous fluid, somewhat similar to egg-white in consistency. The main function of synovial fluid is lubrication, but it also nourishes articular cartilage. Synovial fluid has one of the lowest known coefficients of friction.
- Articular Cartilage. Adult articular cartilage is an avascular, nerveless, and relatively acellular tissue. The part immediately adjacent to bone is usually calcified. Cartilage is elastic in the sense that, when it is compressed, it becomes thinner but, on release of the pressure, slowly regains its original thickness. Articular cartilage is not visible in ordinary radiograms. Hence the so-called radiological joint space is wider than the true joint space.
- Joint Capsule and Ligaments. The capsule is composed of bundles of collagenous fibers, which are arranged somewhat irregularly. Ligaments are classified as capsular, extracapsular, and intra-articular. Most ligaments serve as sense organs in that nerve endings in them are important in reflex mechanisms and in the detection of movement and position. Ligaments also have mechanical functions.
The relationship of the epiphysial plate to the line of capsular attachment is important (see fig. 2-1). For example, the epiphysial plate is a barrier to the spread of infection between the metaphysis and the epiphysis. If the epiphysial plate is intra-articular, then part of the metaphysis is also intra-articular, and a metaphysial infection may involve the joint. In such instances, a metaphysial fracture becomes intra-articular, always serious because of possible damage to articular surfaces. If the capsule is attached directly to the periphery of the epiphysial plate, damage to the joint may involve the plate and thereby interfere with growth.
Menisci, intra-articular discs, fat pads, and synovial folds (fig. 2-3) aid in spreading synovial fluid throughout the joint, and thereby assist in lubrication.
Intra-articular discs and menisci, which are composed mostly of fibrous tissue but may contain some fibrocartilage, are attached at their periphery to the joint capsule. They are usually present in joints at which flexion and extension are coupled with gliding (e.g., in the knee), and that require a rounded surface combined with a relatively flattened one.
The fascial investments around the joint blend with capsule and ligaments, with musculotendinous expansions, and with the looser connective tissue that invests the vessels and nerves approaching the joint.
Joints are often injured, and they are subject to many disorders, some of which involve the periarticular tissues as well as the joints themselves. Increased fibrosis (adhesions) of the periarticular tissues may limit movement almost as much as does fibrosis within a joint.
Absorption from joint cavity.
A capillary network and a lymphatic plexus lie in the synovial membrane, adjacent to the joint cavity. Diffusion takes place readily between these vessels and the cavity. Hence, traumatic infection of a joint may be followed by septicemia. Most substances in the blood stream, normal or pathological, easily enter the joint cavity.
Blood and nerve supply.
The pattern is illustrated in figure 2-4. Articular and epiphysial vessels arise more or less in common and form networks around the joint and in the synovial membrane, respectively.
The principles of distribution of nerves to joints were best expressed by Hilton in 1863: "The same trunks of nerves, whose branches supply the groups of muscles moving a joint, furnish also a distribution of nerves to the skin over the insertions of the same muscles; and what at this moment more especially merits our attention - the interior of the joint receives its nerves from the same source." Articular nerves contain sensory and autonomic fibers, the distribution of which is summarized in figure 2-4.
Some of the sensory fibers form proprioceptive endings in the capsule and ligaments. These endings are very sensitive to position and movement. Their central connections are such that they are concerned with the reflex control of posture and locomotion and the detection of position and movement.
Other sensory fibers form pain endings, which are most numerous in joint capsules and ligaments. Twisting or stretching of these structures is very painful. The fibrous capsule is highly sensitive; synovial membrane is relatively insensitive.
Use-Destruction (Wear and Tear, Attrition). With time, articular cartilage wears away, sometimes to the extent of exposing, eroding, and polishing or eburnating the underlying bone. Use-destruction may be hastened or exaggerated by trauma, disease, and biochemical changes in articular cartilage. The bone adjacent to such damaged joints may expand as "osteophytes."
Barnett, C. H., Davies, D. V., and MacConaill, M. A., Synovial Joints. Longmans, London, 1961. A good account of the biology, mechanics, and functions of joints.
Freeman, M. A. R. (ed.), Adult Articular Cartilage, Pitman, London, 1973. A good account of lubrication and synovial fluid.
Gardner, E., The Structure and Function of Joints, in Arthritis, 8th ed., ed. by J. L. Hollander and D. J. McCarty, Lea & Febiger, Philadelphia, 1972.
Movement is carried out by specialized cells called muscle fibers, the latent energy of which can be controlled by the nervous system. Muscle fibers are classified as skeletal (or striated), cardiac, and smooth.
Skeletal muscle fibers are long, multinucleated cells having a characteristic cross-striated appearance under the microscope. These cells are supplied by motor fibers from cells in the central nervous system. The muscle of the heart is also composed of cross-striated fibers, but its activity is regulated by the autonomic nervous system. The walls of most organs and many blood vessels contain fusiform (spindle-shaped) muscle fibers that are arranged in sheets, layers, or bundles. These cells lack cross-striations and are therefore called smooth muscle fibers. Their activity is regulated by the autonomic nervous system and certain circulating hormones, and they often react to local mechanical factors as well. They supply the motive power for various aspects of digestion, circulation, secretion, and excretion.
Skeletal muscles are sometimes called voluntary muscles, because they can usually be controlled voluntarily. However, many of the actions of skeletal muscles are automatic, and the actions of some of them are reflex and only to a limited extent under voluntary control. Smooth muscle and cardiac muscle are sometimes spoken of as involuntary muscle.
Most muscles are discrete structures that cross one or more joints and, by contracting, can cause movements at these joints. Exceptions are certain subcutaneous muscles (e.g., facial muscles) that move or wrinkle the skin or close orifices, the muscles that move the eyes, and other muscles associated with the respiratory and digestive systems.
Each muscle fiber is surrounded by a delicate connective tissue sheath, the endomysium. Muscle fibers are grouped into fasciculi, each of which is enclosed by a connective tissue sheath termed perimysium. A muscle as a whole is composed of many fasciculi and is surrounded by epimysium, which is closely associated with fascia and is sometimes fused with it.
The fibers of a muscle of rectangular or quadrate shape run parallel to the long axis of the muscle (fig. 2-5). The fibers of a muscle of pennate shape are parallel to one another, but lie at an angle with respect to the tendon. The fibers of a triangular or fusiform muscle converge upon a tendon.
The names of muscles usually indicate some structural or functional feature. A name may indicate shape, e.g., trapezius, rhomboid, or gracilis. A name may refer to location, e.g., tibialis posterior. The number of heads of origin is indicated by the terms biceps, triceps, and quadriceps. Action is reflected in terms such as levator scapulae and extensor digitorum.
Muscles are variable in their attachments: they may be absent, and many supernumerary muscles have been described. Variations of muscles are so numerous that detailed accounts of them are available only in special works.
Individual muscles are described according to their origin, insertion, nerve supply, and action. Certain features of blood supply are also important.
Origin and insertion.
Most muscles are attached either directly or by means of their tendons or aponeuroses to bones, cartilages, ligaments, or fasciae, or to some combination of these. Other muscles are attached to organs, such as the eyeball, and still others are attached to skin. When a muscle contracts and shortens, one of its attachments usually remains fixed and the other moves. The fixed attachment is called the origin, the movable one the insertion. In the limbs, the more distal parts are generally more mobile. Therefore the distal attachment is usually called the insertion. However, the anatomical insertion may remain fixed and the origin may move. Sometimes both ends remain fixed: the muscle then stabilizes a joint. The belly of a muscle is the part between the origin and the insertion.
Blood and nerve supply.
Muscles are supplied by adjacent vessels, but the pattern varies. Some muscles receive vessels that arise from a single stem, which enters either the belly or one of the ends, whereas others are supplied by a succession of anastomosing vessels.
Each muscle is supplied by one or more nerves, containing motor and sensory fibers that are usually derived from several spinal nerves. Some groups of muscles, however, are supplied mainly if not entirely by one segment of the spinal cord. For example, the motor fibers that supply the intrinsic muscles of the hand arise from the first thoracic segment of the spinal cord. Not infrequently, muscles having similar functions are supplied by the same peripheral nerve.
Nerves usually enter the deep surface of a muscle. The point of entrance is known as the "motor point" of a muscle, because electrical stimulation here is more effective in producing muscular contraction than it is elsewhere on the muscle, nerve fibers being more sensitive to electrical stimulation than are muscle fibers.
Each motor nerve fiber that enters a muscle supplies many muscle fibers. The parent nerve cell and its motor fiber, together with all of the muscle fibers that it supplies, make up a motor unit. Motor units range from a few muscle fibers in muscles of fine control (such as eye muscles) to several thousand fibers (such as in large, postural muscles like the gluteus maximus).
Denervation of muscle.
Skeletal muscle cannot function without a nerve supply. A denervated muscle becomes flabby and atrophic. The process of atrophy consists of a decrease in size of individual muscle fibers. Each fiber shows occasional spontaneous contractions termed fibrillations. In spite of the atrophy, the muscle fibers retain their histological characteristics for a year or more, eventually being replaced by fat and connective tissue. Provided that nerve regeneration occurs, human muscles may regain fairly normal function up to a year after denervation.
Actions and Functions.
In a muscle as a whole, gradation of activity is made possible by the number of motor units recruited to a movement. Force is built up by first increasing the frequency of activation of a single motor nerve fiber and then by adding activity of more motor nerve fibers that will then increase their frequency of activation. If all the motor units are activated simultaneously, the muscle will contract once. But if motor units are activated out of phase or asynchronously (nerve impulses reaching motor units at different times), tension is maintained in the muscle.
Long and rectangular muscles produce a greater range of movement, whereas pennate muscles exert more force. Power is greatest when the insertion is far removed from the axis of movement, whereas speed is usually greatest when the insertion is near the axis.
The actions of muscles that cross two or more joints are particularly complicated. For instance, the hamstring muscles that cross the hip and knee joints cannot shorten enough to extend the hip and flex the knee completely at the same time. If the hips are flexed fully, as in bending forward to touch the floor, the hamstrings may not be able to lengthen enough to allow one to touch the floor without bending the knees. This is also known as the ligamentous action of muscles: it restricts movement at a joint. It is due in part to relative inextensibility of connective tissue and tendons, and can be modified greatly by training. The term contracture means a more or less permanent shortening of the connective tissue components of a muscle.
The pattern of muscular activity is controlled by the central nervous system. Most movements, even so-called simple ones, are complex and in many respects automatic. The overall pattern of movement may be voluntary, but the functions of individual muscles are complex, variable, and often not under voluntary control. For example, if one reaches out and picks something off a table, the use of the fingers is the chief movement. But in order to get the fingers to the object, the forearm is extended (the elbow flexors relaxing), other muscles stabilize the shoulder, and still others stabilize the trunk and lower limbs so as to ensure maintenance of posture.
Muscles may be classified according to the functions they serve in such patterns, namely as prime movers, antagonists, fixation muscles, and synergists. A special category includes those that have a paradoxical or eccentric action, in which muscles lengthen while contracting (fig. 2-6). In so doing, they perform negative work. A muscle may be a prime mover in one pattern, an antagonist in another, or a synergist in a third.
- Prime Movers. A prime mover (fig. 2-6) is a muscle or a group of muscles that directly brings about a desired movement (e.g., flexion of the fingers). Gravity may also act as a prime mover. For example, if one holds an object and lowers it to the table, gravity brings about the lowering (fig. 2-6). The only muscular action involved is in controlling the rate of descent, an example of paradoxical action.
- Antagonists. Antagonists are muscles that directly oppose the movement under consideration. Thus, the triceps brachii, which is the extensor at the elbow when acting as a prime mover, is the antagonist to the flexors of the elbow. Depending on the rate and force of movement, antagonists may be relaxed, or, by lengthening while contracting, they may control movement and make it smooth, free from jerkiness, and precise. The term antagonist is a poor one, because such muscles cooperate rather than oppose. Gravity may also act as an antagonist, as when the forearm is flexed at the elbow from the anatomical position.
- Fixation Muscles. Fixation muscles generally stabilize joints or parts and thereby maintain posture or position while the prime movers act.
- Synergists. Synergists are a special class of fixation muscles. When a prime mover crosses two or more joints, synergists prevent undesired actions at intermediate joints. Thus, the long muscles that flex the fingers would at the same time flex the wrist if the wrist were not stabilized by the extensors of the wrist, these being synergists in this particular movement. The term synergist is sometimes also used for muscles that contribute to a movement, while not being the prime mover, although this usage is not as appropriate.
Testing of muscles.
Five chief methods are available to determine the action of a muscle. These are the anatomical method, palpation, electrical stimulation, electromyography, and the clinical method. No one of these methods alone is sufficient to provide full and accurate information.
- Anatomical Method. Actions are deduced from the origin and insertion and are verified by pulling upon the muscle, for example, during an operation or in a cadaver specimen. The anatomical method may be the only way of determining the actions of muscles too deep to be examined during life. This method shows what a muscle can do, but not necessarily what it actually does.
- Palpation. The subject is asked to perform a certain movement, and the examiner inspects and palpates the participating muscles. The movement may be carried out without loading or extra weight and with gravity minimized so far as possible by support or by the recumbent position. Alternatively, the movement may be carried out against gravity, as when flexing the forearm from the anatomical position, with or without extra load. Finally it may be tested with a heavy load, most simply by fixing the limb by an opposing force. For example, the examiner requests the subject to flex the forearm and at the same time holds the forearm so as to prevent flexion. Palpation of muscles that are contracting against resistance provides the best and simplest way of learning the locations and actions of muscles in the living body. Palpation is also the simplest and most direct method of testing weak or paralyzed muscles, and it is widely used clinically. However, when several muscles take part, it may not be possible to determine the functions of each muscle by palpation alone.
- Electrical Stimulation. The electrical stimulation of a muscle over its motor point causes the muscle to contract and to remain contracted if repetitive stimulation is used. Like the anatomical method, electrical stimulation shows what a muscle can do, but not necessarily what its functions are.
- Electromyography. The mechanical twitch of a muscle fiber is preceded by a conducted electrical impulse that can be detected and recorded with appropriate instruments. When an entire muscle is active, the electrical activity of its fibers can be detected by electrodes placed within the muscle or on the overlying skin. The recorded response constitutes an electromyogram (EMG). Records can be obtained from several muscles simultaneously and during physiological activities. This makes electromyography valuable for studying patterns of activity. Finally, the electromyographic pattern may be altered by nervous or muscular disease. Electromyography, therefore, can be used in diagnosis. The disadvantage, as in palpation, is the difficulty in assessing the precise function of a muscle that is taking part in a movement pattern.
- Clinical Method. A study of patients who have paralyzed muscles or muscle groups provides valuable information about muscle function, primarily by determining which functions are lost. But great caution must be exercised. In some central nervous system disorders, a muscle may be paralyzed in one movement yet take part in another. Even in the presence of peripheral nerve injuries or direct muscle involvement, patients may learn trick movements with other muscles that compensate for or mask the weakness or paralysis.
Reflexes and muscle tone
Many muscular actions are reflex in nature, that is, they are brought about by sensory impulses that reach the spinal cord and activate motor cells. The quick withdrawal of a burned finger and the blinking of the eyelids when something touches the cornea are examples of reflexes. It is generally held that muscles that support the body against gravity possess tone, owing to the operation of stretch reflexes initiated by the action of gravity in stretching the muscles. Whether this is strictly or always true in the human is open to question. There is evidence that when a subject is in an easy standing position, little if any muscular contraction or tone can be detected in human antigravity muscles.
The available evidence indicates that the only "tone" possessed by a completely relaxed muscle is that provided by its passive elastic tension. No impulses reach a completely relaxed muscle, and no conducted electrical activity can be detected.
Structure and function
Each skeletal muscle fiber is a long, multinucleated cell that consists of a mass of myofibrils. Most muscle fibers are less than 10 to 15 cm long, but some may be more than 30 cm long.
Resting muscle is soft, freely extensible, and elastic. Active muscle is hard, develops tension, resists stretching, and lifts loads. Muscles may thus be compared with machines for converting chemically stored energy into mechanical work. Muscles are also important in the maintenance of body temperature. Resting muscle under constant conditions liberates heat, which forms a considerable fraction of the basic metabolic rate.
One of the most characteristic changes after death is the stiffening of muscles, known as rigor mortis. Its time of onset and its duration are variable. It is due chiefly to the loss of adenosine triphosphate (ATP) from the muscles.
The attachment of muscle to bone (or other tissue) is usually by a long, cord-like tendon or sinew or by a broad, relatively thin aponeurosis. Tendons and aponeuroses are both composed of more or less parallel bundles of collagenous fibers. Tendons and aponeuroses are surrounded by a thin sheath of looser connective tissue. Where tendons are attached to bone, the bundles of collagenous fibers fan out in the periosteum.
Tendons are supplied by sensory fibers that reach them from nerves to muscles. They also receive sensory fibers from nearby superficial or deep nerves.
Synovial Tendon Sheaths. Where tendons run in osseofibrous tunnels, for example, in the hand and foot, they are covered by double-layered synovial sheaths (fig. 2-7). The mesotendineum, which is the tissue that forms the continuity between the synovial layers, carries blood vessels to the tendon. The fluid in the cavity of the sheath is similar to synovial fluid and facilitates movement by minimizing friction.
The lining of the sheath, like synovial membrane, is extremely cellular and vascular. It reacts to infection or to trauma by forming more fluid and by cellular proliferation. Such reactions may result in adhesions between the two layers and a consequent restriction of movement of the tendon.
Bursae (from L. bursa, a purse), like synovial tendon sheaths, are connective tissue sacs with a slippery inner surface and are filled with synovial fluid. Bursae are present where tendons rub against bone, ligaments, or other tendons, or where skin moves over a bony prominence. They may develop in response to friction. Bursae facilitate movement by minimizing friction.
Bursae are of clinical importance. Some communicate with joint cavities, and to open such a bursa is to enter the joint cavity, always a potentially dangerous procedure from the standpoint of infection. Some bursae are prone to fill with fluid when injured, for example, the bursae in front of or below the patella (housemaid's knee).
Fascia is a packing material, a connective tissue that remains between areas of more specialized tissue, such as muscle. The superficial fascia is the majority of the subcutaneous tissue immediately deep to the dermis, with which it blends. The superficial fascia may appear in layers in some parts of the body, with the more superficial portions containing a lot of fat and the deeper layers being more fibrous. It transmits the cutaneous nerves and blood vessels.
Fascia forms fibrous membranes that separate muscles from one another and invest them, and as such it is often called deep fascia. Its functions include providing origins and insertions for muscles, serving as an elastic sheath for muscles, and forming specialized retaining bands (retinacula) and fibrous sheaths for tendons. It provides pathways for the passage of vessels and nerves and surrounds these structures as neurovascular sheaths. It permits the gliding of one structure on another. The mobility, elasticity, and slipperiness of living fascia can never be appreciated by dissecting embalmed material.
The main fascial investment of some muscles is indistinguishable from epimysium. Other muscles are more clearly separated from fascia, and are freer to move against adjacent muscles. In either instance, muscles or groups of muscles are generally separated by intermuscular septa, which are deep prolongations of fascia.
In the lower limb, the return of blood to the heart is impeded by gravity and aided by muscular action. However, muscles would swell with blood were it not for the tough fascial investment of these muscles, which serves as an elastic stocking. The investment also prevents bulging during contraction and thus makes muscular contraction more efficient in pumping blood upward.
Fascia is more or less continuous over the entire body, but it is commonly named according to region, for example, pectoral fascia. It is attached to the superficial bony prominences that it covers, blending with periosteum, and, by way of intermuscular septa, is more deeply attached to bone.
Fascia may limit or control the spread of pus. When shortened because of injury or disease, fascia may limit movement. Strips of fascia are sometimes used for the repair of tendinous or aponeurotic defects.
Proprioceptive endings in aponeuroses and retinacula probably have a kinesthetic as well as a mechanical function.
2-1 Is there a difference between membrane bones and cartilage bones in the adult?
2-2 Where is red marrow found in the adult?
2-3 Which portion of the body is examined most frequently in the assessment of skeletal maturation?
2-4 Are epiphysial centers visible radiographically in the knee at birth?
2-5 Which parts of the limb bones are cartilaginous in the adult?
2-6 What result would be expected from premature closure of epiphysial plates?
2-7 Provide examples of (a) plane, (b) hinge, (c) pivot, (d) ellipsoidal, (e) saddle, (f) condylar, and (g) ball-and-socket joints.
2-8 What are (a) the origin and (b) the functions of synovial fluid?
2-9 What is the importance of the relationship between the epiphysial plate and the line of capsular attachment?
2-10 What are the advantages of pennate muscles?
2-11 What is the total number of (a) bones and (b) muscles in the body?
Figure 2-1 Diagram of a long bone and its blood supply. The inset shows the lamellae of the compacta arranged in osteons, i.e., vascular canals surrounded by concentric layers of bone.
Figure 2-2 Diagrams of the development of a long bone. A, Cartilaginous model. B, Bone collar. C, Vascular invasion of bone collar and cartilage. D, Endochondral ossification begins. E, Cartilaginous epiphyses begin to be vascularized (arrows). F, An epiphysial center of ossification appears. G, An epiphysial center begins in the other end. H, Two epiphysial plates are evident. I, The last (second) epiphysial center to appear fuses first with the shaft. J, The first epiphysial center to appear (where most growth in length occurs) fuses last with the shaft.
Figure 2-3 Synovial joints. The joint cavity is exaggerated. Articular cartilage, menisci, and intra-articular discs are not covered by synovial membrane, but intraarticular ligaments are.
Figure 2-4 The blood and nerve supply of a synovial joint. An artery is shown supplying the epiphysis, joint capsule, and synovial membrane. The nerve contains (1) sensory (mostly pain) fibers from the capsule and synovial membrane, (2) autonomic (postganglionic sympathetic) fibers to blood vessels, (3) sensory (pain) fibers from the adventitia of blood vessels, and (4) proprioceptive fibers. Arrowheads indicate direction of conduction.
Figure 2-5 The arrangement of fibers in muscles. The fibers are basically parallel (upper row) or pennate (or penniform), i.e., arranged as in a feather (lower row). A, Quadrilateral, e.g., pronator quadratus. B, Straplike, e.g., sartorius. C, Fusiform, e.g., flexor carpi radialis. D, Unipennate, e.g., flexor pollicis longus. E, Bipennate, e.g., rectus femoris. F, Multipennate, e.g., deltoid. Pennate muscles usually contain a larger number of fibers and hence provide greater power.
Figure 2-6 Muscular actions. When the arm is abducted against the examiner's resistance, the deltoid becomes tense. On adduction against resistance, the deltoid relaxes and the weight sinks into it. On adduction produced by lowering a pail from a horizontal position, the pectoralis major is relaxed. The contracted deltoid controls the descent by lengthening. The deltoid is now an antagonist to gravity, which is the prime mover, and is doing negative work (paradoxical action).
Figure 2-7 Synovial and fibrous sheaths of a tendon, and a section of the synovial sheath.
* Additionally, certain conditions, such as scoliosis (lateral curvature of the spine) progress up to the point of skeletal maturation. In this case, the fusion of the apophysis at the iliac crest (“Risser’s sign”) is used as an index of maturation. | http://www.dartmouth.edu/~humananatomy/part_1/chapter_2.html | 13 |
115 | Douglas H. Clements
Try this problem: Write an equation using the variables S and P to represent this statement: "There are six times as many students as professors at this university." Use S for the number of students and P for the number of professors.
Most university students make this mistake: 6S = 1P. They may assume that they can directly order the symbols in the algebraic equation as they appear in the English sentence. Or they may confuse the semantics of the equation. To them, the reversed equation, 6S = P, may mean that a large group of students is associated with a small group of professors.
In contrast, the correct equation, S = 6P, expresses an active operation being performed on one number, the number of professors, to yield another number, the number of students. It does not describe the sizes of groups literally. It describes an equivalence relation that would occur if one were to make the group of professors six times larger. In one student's words, "If you want to even out the number of students and the number of professors, you'd have to have six times as many professors" (Soloway, Lochhead, and Clement 1982; 175). This student viewed the equation in a procedural manner -- as an instruction to act. In brief, the incorrect answer is a description of a situation, whereas the correct answer represents a prescription for action.
So would students make fewer errors in an environment that helped them take a more active view of equations? Since a program is a prescription for action, a computer-programming environment was the choice of researchers Soloway, Lochhead, and Clement (1982). They asked students to write either an equation or a computer program to represent such statements. Students got significantly more problems correct writing a program. Similarly, they were more likely to read a program than an equation correctly.
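For readers who know a modern language, the contrast is easy to see in a sketch (my own illustration, not code from the study): the correct equation behaves like a function that acts on the number of professors.

def students(professors):
    return 6 * professors    # S = 6P read as a prescription: make the professor group six times larger

print(students(10))          # 10 professors -> 60 students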
Can computers provide a similar environment for elementary school students? Might they serve to develop algebraic thinking in unique ways?
Variables in Computer Programming
Several current computer environments may help students learn algebraic thinking. Computer programming has been the most widely used and extensively studied. We focus on learning variables in Logo programming, but other tools, such as spreadsheets, may be used in similar ways (Clements 1989).
Why should programming help?
Programming is a prescription for action. Logo, in particular, was designed to help students mathematize familiar processes. Mathematizing makes something more mathematical -- more general, exact, certain, and concise. For example, students might walk and draw shapes and describe their actions. In Logo, they can translate these actions into mathematical code; for example, fd 40 commands the turtle to move forward forty steps, leaving a trail. Such activities help develop rich mathematical ideas.
In the domain of algebra, two important ideas are variables and functions (Noss 1986). Variables are sometimes introduced in arithmetic tasks, such as __ + 5 = 12. Used in this way, they are placeholders, but they do not lend much mathematical power to students who do not use them often. In contrast, we use variables constantly in Logo. For example, we might write commands that include variables. Instead of entering FD 40, we could enter FD :length, which commands the turtle to move forward the value of the variable named "length." Similarly, repeat 4 [FD :length rt 90] (rt 90 means "turn right 90 degrees" and repeat 4 means to repeat the action inside the brackets four times) commands the turtle to draw a square with sides of that "length." We cannot run these commands, however, until we give the variable length a value, as shown here.
make "length 40
repeat 4 [FD :length rt 90]
Note that in Logo, unlike other programming languages, there is a clear distinction between the name of a variable, "length, and the value assigned to it, :length. We could also write a procedure to take length as an input (see fig. 1).
Figure 1: A square defined in a version of Logo Turtle Math
Figure 1 Caption: (© Douglas H. Clements, Julie Sarama Meredith, and LCSI.) The commands in the command window on the left assign the variable "length" the value of 80 (Clements and Meredith 1994).
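A rough analogue of this square procedure in Python's standard turtle module may help readers who do not know Logo (a sketch of my own, not from the article); here length is an ordinary function parameter playing the role of :length.

import turtle
t = turtle.Turtle()

def square(length):
    for _ in range(4):        # like Logo's repeat 4 [FD :length rt 90]
        t.forward(length)     # FD :length
        t.right(90)           # rt 90

square(40)                    # one concrete value for the variable
square(80)                    # the same rule run again with a different value
turtle.done()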
In Logo, you also can build and combine functions in a way that models mathematics. For example, the following is a simple function that divides its input, number, in half.
to half :number
output :number / 2
end
If you enter square half 80, half would divide 80 by 2 and "hand off" the result to the square procedure, which would then draw a square with side lengths of 40. What would square half half 80 draw?
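The same composition can be sketched in Python (again my own illustration, not from the article): half is a function whose output is handed off to whatever uses it.

def half(number):
    return number / 2

print(half(80))          # 40.0
print(half(half(80)))    # 20.0 -- so square half half 80 would draw a square with sides of 20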
So Logo can provide an environment in which using variables and functions is natural. It is part of the ongoing activity and authentic play.
In addition to this formal and logical side, variables -- and programming -- involve a symbolic side. Much confusion can arise because in mathematics a letter can stand for a parameter, variable, or specific unknown. Logo's use of "length for the name of the variable and :length for the value of the variable can help students keep these uses straight.
Programming does help...
Noss (1986) investigated whether learning Logo gave ten-year-olds a "conceptual framework" for learning algebra. For example, he asked them to make up rules for situations. One was, "This is a square [a figure was shown]. What could you write for the distance all around it?" Another was, "Peter has some marbles. Jane has some marbles. What could you write for the number of marbles Peter and Jane have altogether?"
Six of the eight students who had studied Logo were able to suggest names for the unknowns and to employ them in a rule that related the unknowns as variables. The two exceptions had not used variables to any extent in their Logo work. Nicola was one who had. She was solving the marble problem.
Nicola: You could use inputs again.
Interviewer: All right, show me how.
N: [Writes] :Peter + :Jane = all the marbles.
I: Can you read it out?
N: Peter plus Jane equals all the marbles. You use those two as the inputs, with as many marbles as you want to...
I: But this isn't a Logo program is it?
N: I know, but if it was...just to say that it's an input.
I: So what does the input actually mean there then?
N: That you can type in however size you want it or how many you want it... How many they want Peter to have and how many they want Jane to have.
These students also built names that stood for a range of numbers, which is counter to the natural tendency of students to interpret letters as specific numbers. Discussing the problem about the distance around a square, Julie suggested you make a "word for the length" of a side, such as LEN, and proposed a rule, LEN X 4. Julie understood that LEN could have many values, although she was probably not aware of a specific range.
In this way, Logo helped students to formalize. The metaphor of typing in a value at the keyboard can be viewed as a means of thinking about a range of numbers while only dealing with one at a time. Logo variables are assigned a specific value at the time the procedure is run, although the name of the input may stand for a large range of possible values. So as students run the procedure repeatedly and enter different numbers, they construct an initial understanding of a range of values for variables.
In normal mathematical usage, such as in y = x, the relationship between x and y is the crucial factor, not the specific examples of the relationship. This gives algebra its power and is also what students find so hard. Experience using Logo provides them a way to think about this abstract idea by linking the assignment of specific values to the variables. For example, students might write a square procedure. As they enter different values, they see that each square is one concrete example of a range of possibilities.
to square :length
repeat 4 [FD :length rt 90]
end
In summary, learning Logo can help students form intuitive notions about algebraic concepts (Clements and Meredith 1993). However, sometimes limitations are evident. For example, students may not fully generalize the variable idea as used in Logo to other situations (Lehrer and Smith 1986). Logo experience can enhance students' understanding of algebraic ideas, but the links that they make between Logo and algebra depend on the nature and extent of their Logo experience. What types of experience should we provide?
Exposing versus laying a foundation
Some types of experience are probably inadequate. Just exposing students to variables may not help them gain a foundation. We need to help students mathematize familiar processes. Writing commands to draw a square mathematizes the actions of walking or drawing a square. For an example dealing with variables, a teacher might ask her students to produce buildings in which the number of green blocks is always four greater than the number of red blocks. Then she might challenge them to create a mathematical rule for these buildings. Papert (1980) suggests that when this is fluent, natural, and enjoyable, teachers can lead students to understand the mathematical structures. Both parts are important -- sufficient experience with using variables followed by tasks and guidance to extend that experience.
Noss (1986) asked his student, Stephen, to make a rule for the green-and-red-block buildings. Stephen used Logo-influenced formulations.
IF :REDS = 10 [MAKE "GREENS 14]
This was only a partial solution. The teacher asked about the 10, and Stephen said it was "just a random number." The need to write a number in the Logo code allowed him to start concretely, but his experience running Logo programs also convinced him that he could change that number. Then on his own he changed his formulation thus:
MAKE "GREENS FOUR MORE THAN "REDS
The teacher asked if he could formalize this rule further. He wrote:
G. = R. + 4
So he used knowledge of Logo as a catalyst in moving from a descriptive, specific-number statement to a generalized algebraic equation. Remembering the students-and-professors problem, note that the "+ 4" is on the correct side of the equation! His teacher's use of Logo experiences and guidance had laid the foundation for his algebraic thinking.
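Stephen's final rule reads directly as a prescription for action; a minimal Python rendering (my own, purely illustrative):

def greens(reds):
    return reds + 4      # G = R + 4: four more green blocks than red blocks

print(greens(10))        # 14, matching Stephen's earlier specific-number version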
Suggestions for Teaching
Students benefit when teachers guide their experiences with variables. Introduce variables by having students use procedures with inputs. At first, recognize that students might interpret only global changes (Hillel & Samurçay 1985). For example, they might say, "It makes a bigger square." This interpretation is exacerbated by the use of such names as square :n or even square :size, which do not help students figure out exactly what the variable represents. It is better to provide and encourage students to use such names as :length.side, which are descriptive enough to support students' specific thinking about the variable's meaning. Used in this way and through such discussions, names can help students analyze the procedure and describe what is varying.
When students begin defining procedures with variables, or inputs, they should learn to do the following:
1. Identify what is varying. Students have to find all the commands within the procedure to which they need to assign a variable input, which is not always obvious.
2. Name the variable with a specific identifying name, such as :length.side.
3. Operate on the variable. This rule involves using variables appropriately as inputs to commands. It also includes passing variables to commands and subprocedures with modification (e.g., changing :length.side to :length.side + 5).
Model these steps as you introduce students to procedures with inputs.
Use tasks that illustrate the role of variables using these steps. For example, have students write a procedure to draw the letter L. Discuss with them what is varying and write a procedure that draws an L of any size. Use only one variable to determine the length of both segments.
to L :length
FD :length
BK :length
rt 90
FD :length * .75
end
This procedure, for example, commands the turtle to draw a vertical line segment, return to the bottom, turn right, and then draw a horizontal segment for the "foot" of the L that is 75/100 the length of the vertical segment.
Ask what happens when decimal or negative numbers are given for these inputs. Have students create the largest and smallest L's that they can. Then have them write other variable letter procedures and create designs that combine these procedures.
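As a point of comparison (my own sketch, not part of the article), the same task in Python's turtle module makes the three steps visible: the varying quantity is identified, given a descriptive name, and operated on with * 0.75.

import turtle

def letter_L(length):
    t = turtle.Turtle()
    t.left(90)                   # face "up" for the vertical stroke
    t.forward(length)            # vertical segment
    t.backward(length)           # return to the bottom
    t.right(90)                  # turn toward the foot
    t.forward(length * 0.75)     # horizontal foot, 75/100 of the vertical length

letter_L(100)
letter_L(-60)                    # decimal and negative inputs can be explored as suggested above
turtle.done()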
Sutherland (1989) used these tasks successfully to teach students about variables. She also developed the idea of function to forge links between Logo and traditional algebra. Students made "mystery" function machines. Their partners had to guess the function and write a similar procedure themselves, which helped them see that changing a literal symbol does not change what the symbol refers to. Also, letting them choose any variable and function name appears to have been quite motivating for them.
Students can develop their intuitive understanding of pattern and structure to the point where they generalize and formalize. Giving students this type of experience in traditional algebra is difficult. We might best use Logo as a context for generalizing and formalizing, rather than attempt to contrive problems in beginning algebra. Then we can help students build links between the use of variable in the two contexts.
Working with computers can help students develop algebraic thinking. They can build on their informal methods, learning to formalize so that they can "talk to" the computer. Interaction with the computer can play a crucial role in their developing an understanding of a general method -- the heart of algebraic thinking. Computers do not work in a vacuum, however. We teachers must select tasks and guide students' experience. We can provide such guidance better if we know specifically how the computer contributes.
Clear, unambiguous syntax
Symbols are open to a variety of interpretations in mathematics. With a computer, however, only one interpretation is possible for each symbol, which is why computers require explicitness. For example, the equation's 6S can be thought of as "six students." In contrast, the computer's "6 * :S" means the operation of multiplication.
Equations on the computer are active. They represent the process of acting on an input and yielding an output. Equations on paper are rarely "run." Running equations on a computer allows students to work with the process -- testing, debugging, and exploring. For example, if students run their L procedure, and the result does not look like they intended, they can reflect on their use of variables and change the relative lengths of the two line segments.
When students try out their ideas, they receive feedback from the computer that mirrors their thinking. For example, if the L appears on the screen with the wrong proportions, they know that this design is what their Logo program specified. They have a clear direction for changing their variables. They may have reversed the two variables (i.e., used :length * .75 instead of :length) or inverted the scale factor (i.e., used 4/3 instead of 3/4 or 0.75).
Formalizing informal ideas
Computers help students explore, express, and formalize their informal ideas. One ten-year-old stated, "I think that it helps you because you put what you think in and then you can check to see if you are right..." (Sutherland and Rojano 1993; 380). For example, students beginning to write an L procedure might not use variables at first but instead try specific numbers for each segment.
Once students informally decide on a number relationship, such as 0.75, they can go ahead and try out this relationship by
changing the procedure to use variables.
Scaffold problem solving
Early ideas and strategies may be precursors to more sophisticated mathematics when computers provide a thinking tool. One boy wrote a procedure to draw a rectangle. He created a different variable for the length of each of four sides. He gradually saw that he only needed two variables, as the lengths of the opposite sides are equal. He recognized that the variables could represent values rather than specific sides of the rectangle. No teacher intervened during this time; Logo provided the scaffolding by requiring a symbolic representation and by allowing the boy to link the symbols to the figure. The symbols become an aid to generalization.
Linking symbols to pictures
Computers can promote the connection of formal representations to dynamic visual representations. That is, the boy's rectangle procedure was coded in the formal mathematical symbols of Logo, and it commanded the turtle to move dynamically to draw a rectangle.
In some new versions of Logo, students can change their symbols and see the picture change automatically, or actually "pull" the lines of a picture and see the symbols change (fig. 2). The integration of ideas from algebra and geometry is particularly important, and computer tools play a critical role in that integration (NCTM 1989, 125).
Figure 2: Using the Change Shape tool in Turtle Math
In Turtle Math, the Change Shape tool allows the user to change the geometric figure directly and see the effect on the commands reflected immediately. In this way, Turtle Math provides a two-way street -- the user can make or change the symbolic Logo commands and see the geometric shape change automatically and can make or change the geometric shape and see the commands change automatically. This flexibility helps build solid ideas that connect symbolic and graphic representations of geometric ideas.
For example, these commands created this geometric shape. When the user clicks on the Change Shape tool, the turtle disappears. To move the vertical line segment on the right, click anywhere on the segment and drag it to its new location. (Dragging involves holding the mouse button down while the mouse is moved.) As one drags the line segment, the corresponding commands in the Command Center change dynamically. Release the mouse button when done. One can then change another line segment or change a corner.
To change a corner, click on the corner and drag it to the new location. The commands change automatically. Release the mouse button when done.
Note: Because the turtle starts at the home position in the center of the Drawing window, one cannot move that corner. Similarly, the user cannot drag the first line segment that the turtle draws. If the last line segment it draws connects to the first one at the home position, one cannot drag that segment either, so that a closed shape stays closed.
Teaching and Learning Algebraic Thinking with Computers
In summary, some evidence shows that computer tools can provide an "entry" to algebraic thinking. Students perceive the use of formalizations, such as variables, as being natural and useful. Students' ability to generalize their computer-based ideas may depend on the depth of their experience and the instructional support given them in making the abstraction and generalization. Time spent learning these tools is an even better investment if we think of such activity as a medium for expressing mathematics rather than as a tool for learning it.
In this view, technology is less a pedagogical tool and more a mathematical tool (Fey 1991). Graphing tools and spreadsheets should encourage us to reconsider the algebra curriculum just as calculators and computers have made us reconsider the arithmetic curriculum. If we see algebra as primarily the study of functions and their representations, we might use function plotters, curve fitters, and symbolic manipulators. Our students, too, would see algebra as being a source of mathematical models and ask what-if questions, such as "What if problem conditions change?" "What if the goal changes?"
The potential of such a view is revealed in a story of eight-year-old Robby, who had been introduced to variables through extensive work with Logo procedures (Lawler 1985). He learned to focus on systematic changes of a single variable as a useful way of understanding the complex interactions of several variables. At a later date, Robby worked on a paper-cutting puzzle that involves joining two loops perpendicularly, taping them, and cutting around their middles (fig. 3). Surprisingly, this action produces a square. Robby did not stop there, however. He joined and cut three circles. He got two rectangles. Four circles yielded four squares. Five circles produced two identical nonplanar shapes, and so on. Robby said, "Hey! I've got a new theory: the odd-numbered circles make two and the evens all stay together." When asked how he had gotten the idea for his exploration, he explained, "It's just like what we did at Logo with the shape families. I changed one thing, a little at a time" (p. 78). Robby saw the earlier Logo activities as the embodiment of the powerful idea of systematically incrementing a variable. When asked to explain his theory, he constructed a seven circle puzzle, expecting -- and demonstrating -- that this configuration did indeed produce two figures. Hypothesis testing had emerged from a nontheoretical but orderly investigation of interesting effects. Incrementing variables had become a method of determining what is what. Robby demonstrated true algebraic thinking.
Figure 3: Robby's paper-cutting puzzle. The two loops are taped together perpendicularly, then both are cut through their middles.
Clements, Douglas H. Computers in elementary mathematics education. Englewood Cliffs, N.J.: Prentice-Hall, 1989.
Clements, Douglas H., and Julie Sarama Meredith. "Research on Logo: Effects and Efficacy." Journal of Computing in Childhood Education 4 (1993):263-90.
-----. Turtle Math. Montreal, Quebec: Logo Computer Systems, 1994.
Fey, James. "Calculators, Computers, and Algebra in Secondary School Mathematics." In Proceedings of the U.S.-Japan Seminar on Computer Use in School Mathematics, edited by Jerry P. Becker & Tatsuro Miwa, 103-19. Honolulu, Hawaii: The East-West Center, 1991.
Hillel, Joel, and R. Samurçay. "Analysis of a Logo Environment for Learning the Concept of Procedures with Variables." Unpublished manuscript, Concordia University, Montreal, 1985.
Lawler, Robert. Computer Experience and Cognitive Development: A Child's Learning in a Computer Culture. New York: John Wiley & Sons, 1985.
Lehrer, Richard, and Paul Smith. Logo Learning: Is More Better? San Francisco: American Educational Research Association, 1986.
National Council of Teachers of Mathematics. Curriculum and Evaluation Standards for School Mathematics. Reston, VA: The Council, 1989.
Noss, Richard. "Constructing a Conceptual Framework for Elementary Algebra through Logo Programming." Educational Studies in Mathematics 17 (1986):335-357.
Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. New York: Basic Books, 1980.
Soloway, Elliot, Jack Lochhead, and John Clement. "Does Computer Programming Enhance Problem Solving Ability? Some Positive Evidence on Algebra Word Problems." In Computer Literacy, edited by Robert J. Seidel, Ronald E. Anderson, and Beverly Hunter, 171-185. New York: Academic Press, 1982.
Sutherland, Rosamund. "Providing a Computer Based Framework for Algebraic Thinking." Educational Studies in Mathematics 20 (1989):317-44.
Sutherland, Rosamund, and Teresa Rojano. "A Spreadsheet Approach to Solving Algebra Problems." Journal of Mathematical Behavior 12 (1993):353-383.
Time to prepare this material was partially provided by "An Investigation of the Development of Elementary Children's Geometric Thinking in Computer and Noncomputer Environments," National Science Foundation Research grant number ESI - 8954664. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the author and do not necessarily reflect the views of the National Science Foundation.
Douglas Clements, Clements@acsu.buffalo.edu, conducts research at the State University at Buffalo, Buffalo, NY 14260, in the areas of computer applications in education, early development of mathematical ideas, and the learning and teaching of geometry. Julie Sarama, email@example.com, teaches at Wayne State University, Detroit, MI 48202. She is interested in children's conceptions of geometry and issues involving teachers' use of technology.
| http://investigations.terc.edu/library/bookpapers/computers_support.cfm | 13
51 | sums will set you free
how to teach your child numbers arithmetic mathematics
fractions, decimals, percentages and ratios 2
Fractions, decimals and percentages are particular types of division sum, and are all effectively the ‘same’ thing. It is important to internalise this realisation right from the start. That is why here, as with the first page on fractions, decimals and percentages, all three topics are discussed on the same page. Once you understand one of these topics, you are in a position to understand them all. They are just different ways of communicating and writing the same manner of information.
And when you divide integers by fractions less than one, the inverse occurs - the value becomes greater.
Ordinary counting, addition and subtraction are one-dimensional.
Multiplication and division with fractions are two-dimensional.
Note that the method for multiplying fractions that follows can be used for any fraction multiplication.
The illustration on the left shows forty-nine blocks
in a seven by seven square.
Thus, three sevenths by (or times) four sevenths is the same as (equals) twelve forty-ninths: 3/7 x 4/7 = (3 x 4)/(7 x 7) = 12/49.
Five tenths by five tenths are twenty-five hundredths: 5/10 x 5/10 = 25/100.
Of course, twenty-five hundredths is an equivalent fraction to one quarter, illustrated in the photo above. Twenty-five hundredths can be reduced to one quarter by cancelling, that is by dividing both the top and bottom parts of the fraction 25/100 by 25 (or by 5, and then by 5 again): 25/100 = 5/20 = 1/4.
And now an example of multiplying fractions when the bottom numbers are different: 2/3 x 4/5 = (2 x 4)/(3 x 5) = 8/15.
The examples of multiplication with fractions so far have illustrated multiplying fractions by fractions. When multiplying a fraction by a whole number (an integer), such as the sum 1/3 x 3, convert the whole number into a fraction - a whole number is a fraction whose bottom number is one, since a number divided by one remains the same number.
So 1/3 x 3 can be multiplied like this: 1/3 x 3/1 = (1 x 3)/(3 x 1) = 3/3 = 1.
Here’s an example: nine tenths divided by three tenths, 9/10 ÷ 3/10 = (9 ÷ 3)/(10 ÷ 10) = 3/1 = 3; three tenths fits into nine tenths exactly three times.
Now, there are many permutations of dividing whole numbers by a fraction, a fraction by a whole number, a fraction by another fraction, and then there’s numbers that are combined whole numbers and fractions as well. So how to not be completely confused as what to do?
In the following example, 2 ÷ 2/5, the whole number 2 is first converted to ten fifths (2 x 5 = 10 fifths, so 2 = 10/5). Next the division is done to the top part and to the bottom part: 10/5 ÷ 2/5 = (10 ÷ 2)/(5 ÷ 5) = 5/1 = 5. The photo of blocks illustrates how 2/5 divides into 2, or 10/5, exactly five times.
Often, an easier way to do divisions involving fractions is to invert the fraction doing the dividing, and then multiply the first fraction by the inverted fraction. Explaining this in words alone is not easy to follow, so after giving a short explanation of why this process works, we will give a couple of worked examples.
And why does multiplying with the inverted dividing fraction work? Well, multiplying is the inverse of dividing, just as subtraction is the inverse (the opposite) of addition. So when a fraction is inverted (or turned upside down), the action being done with that fraction (dividing) is also inverted (to become multiplying).
Examples of a fraction divided by a fraction and a mixed fraction divided by a mixed fraction:
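Two worked calculations in this spirit (the particular numbers are illustrative choices, not the page's own examples):

3/4 ÷ 2/5 = 3/4 x 5/2 = 15/8 = 1 and 7/8
2 1/2 ÷ 1 1/4 = 5/2 ÷ 5/4 = 5/2 x 4/5 = 20/10 = 2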
Until we reach putting in details in this section, 1/10 is written as .1, that is a point (or dot) before the one, and one quarter is written as .25, that is 25/100, and so on.
Originally, the decimal was called the decimal fraction (the Latin word for ten being decem). Decimals are a convenient way of writing and using tenths, or other fractions divisible by ten, such as hundredths, thousandths and so on.
.1 (or more commonly, if less accurately, 0.1) is the same as one tenth, 1/10.
Notice that with the fractional and decimal parts of numbers, all the exciting action takes place between zero and one.
Fractions are, in part, called rational numbers, not because they are particularly sane (although they are) but because they consist of ratios - ratio-nal. The integers also come within the class of rational numbers, for any integer can be expressed as a fraction or ratio. For example, 2 can be expressed as 2/1, or as 6/3, or even as 50/25, while 1734 can be written as 1734/1.
Any fraction can be converted into a decimal form (13/19 = 13 ÷ 19 = .6842...) and any decimal can be converted into a fraction ( .731 = 731/1000), but there comes a time when this starts to become a bit more difficult, and even mathematicians have problems keeping their heads straight. If you do want to go deep-diving, see comparing predicates, relational strengths and irrational numbers.
The diagram above gives help for converting a decimal into a fraction. As you can see, the decimal number goes on the top of the fraction, and the bottom part is dictated by how many places there are after the decimal point (how many numbers to the right of the dot). So .3 = 3/10 and .03 = 3/100, while .38 = 38/100.
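For anyone who wants to check such conversions mechanically, Python's standard fractions module can be used as a quick sketch (an aside, not part of this page):

from fractions import Fraction

print(Fraction(3, 10))      # .3 as a fraction: 3/10
print(Fraction(3, 100))     # .03 -> 3/100
print(Fraction(38, 100))    # .38 -> 38/100, which cancels down to 19/50
print(Fraction("0.38"))     # the same value, built directly from the decimal form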
Multiplications and divisions involving decimals are like doing those sums with integers, but you must make sure that the decimal point is in the right position. Such sums are easy to check using a calculator, or the educational counter below.
For the full version with more detailed instructions, go to the introduction page.
Here is how to practise sums with decimals - for example, .25 x 4: set the counter to count up in steps of .25, then step it four times.
The counter counts up: 0, .25, .5, .75, 1. Thus .25 x 4 = 1.
Now help the learner to try other multiplication sums. Each time, click the red Reset button to return Manual Steps (the red number) to zero.
Below is a concise version of the abelard.org educational maths counter. For an expanded version with more detailed instructions, go to how to teach your child number arithmetic mathematics - introduction.
Until we reach putting in details in this section, 100% (one hundred percent) is the whole cake. 1% is 1/100th (one hundredth) of the cake.
1/10th (one tenth) is written as 10%, that is a percent sign (%) after the ten, and one quarter is written as 25% and so on.
Multiplications and divisions involving percentages are like doing those sums with integers and fractions.
the web address for this page is http://www.abelard.org/sums/teaching_number_arithmetic_mathematics_fractions_decimals_percentages2.php | http://www.abelard.org/sums/teaching_number_arithmetic_mathematics_fractions_decimals_percentages2.php | 13 |
50 | - Ability to sketch a picture of a graph of a function showing the typical rectangles or trapezoids used in the Left, Right, and Trapezoid Rules.
- Ability to sketch a picture of a graph of a function showing how the Trapezoid Rule and Midpoint rule give over estimates or under estimates when applied to functions whose graphs are concave up or concave down.
Many functions that need to be integrated do not have antiderivatives that can be written in terms of well known functions (for example, sin(x)/x has no elementary antiderivative). In addition, data collected experimentally may represent a function whose symbolic formula is not known. Such integrations are performed numerically. The "standard" numerical techniques for estimating integrals are: Left, Right, and Midpoint Riemann Sums, the Trapezoid Rule, and Simpson's Rule.
- To see how Maple can be used to implement these numerical methods.
- To compare the efficiency of the methods.
- To be amazed at how much better some of the methods are than others and discover which are the "best".
- Read the explanation of what each block of Maple code below does.
- Copy and paste the Maple code, one block at a time, as you read it, into Maple and execute it. You should understand what each line of Maple code accomplishes.
- Answer the questions using paper and pencil on a separate sheet of paper. Be sure to explain your answers.
We begin by setting the interval [a,b], the function f(x), and choosing the number n of equal subintervals.
a:=0; b:=1; n:=10;
Next we compute the width of each rectangle in the Riemann sum.
DeltaX:=(b-a)/n;
When we calculate the height of the rectangles, we have the choice of using the left hand endpoint, midpoint, or right hand endpoint of each of the i subintervals, where i goes from 1 to n. On the ith subinterval, the right hand endpoint corresponds to a+i*DeltaX (why?), the midpoint corresponds to (the right hand endpoint) - (DeltaX/2).
- What expression corresponds to the left hand endpoint of the ith subinterval? Explain your answer. Also, substitute your answer for the blank in the Maple code below before executing the block.
r:=i->a+i*DeltaX;
m:=i->r(i)-DeltaX/2;
l:=i-> ______ ;   # replace the blank with your expression for the left hand endpoint
The Riemann Sums are then just the following:
Right:= Sum( f( r(i) )*DeltaX,i=1..n);
Mid:=Sum( f( m(i) )*DeltaX,i=1..n);
Left:= Sum( f( l(i) )*DeltaX,i=1..n);
We are interested in getting a decimal approximation instead of an exact sum, so insert the Maple line RightValue:=evalf(Right); and similar lines for Left and Mid.
Instead of thin rectangles, it is possible to estimate area using thin trapezoids. Then the height of the trapezoid on the left is f(l(i)), that is, f at the left hand endpoint of the subinterval, and the height of the trapezoid on the right is f(r(i)), that is, f at the right hand endpoint of the subinterval. The area of the trapezoid is then [f(l(i))+f(r(i))]/2 * DeltaX, so:
Trap:= Sum( ( f( l(i) ) + f( r(i) ) ) / 2 *DeltaX,i=1..n);
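For readers without Maple to hand, the same four estimates can be cross-checked with a short Python sketch (not part of the worksheet; since the worksheet's original f(x) is not shown above, the 1/(1+x) from the later questions is used here as a stand-in):

def f(x):
    return 1 / (1 + x)                          # stand-in test function; its exact integral on [0,1] is ln(2)

a, b, n = 0.0, 1.0, 10
dx = (b - a) / n
x = [a + i * dx for i in range(n + 1)]          # partition points x_0 .. x_n

left  = dx * sum(f(x[i - 1]) for i in range(1, n + 1))
right = dx * sum(f(x[i]) for i in range(1, n + 1))
mid   = dx * sum(f((x[i - 1] + x[i]) / 2) for i in range(1, n + 1))
trap  = dx * sum((f(x[i - 1]) + f(x[i])) / 2 for i in range(1, n + 1))

print(left, right, mid, trap)                   # compare each against the exact value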
Now consider the integral itself. Notice that we can evaluate it exactly, so we can compare our Right, Mid, Left, and Trap answers to the actual integral.
- Integrate f(x) over the interval from a to b by hand. Express the value you get both as a fraction and as a decimal.
Two numbers agree to n decimal places if they are equal when each is rounded to n places. For example, if you have followed the above directions, then the Left Riemann Sum is not even accurate to one decimal place, the Right Riemann Sum agrees with the true value to one decimal place.
- To how many decimal places do the true value and the Midpoint Riemann Sum agree?
- To how many decimal places do the true value and the Trapezoid Rule agree?
- Find the Left and Right Riemann sums for n=59, 60, 199, 200. Based on what you find, what is the smallest number of subdivisions necessary to make the Right Riemann Sum agree with the Exact Value to 2 decimal places; the Left Riemann Sum agree with the Exact Value to 2 decimal places? Explain how you know.
- Use values of n less than 10 and find the smallest value for which the Trapezoid Rule gives a value that agrees with the True Value to two decimal places.
- Use values of n less than 10 and find the smallest value for which the Midpoint Riemann Sum gives a value that agrees with the True Value to two decimal places.
- Based on the previous questions:
- Rank the four estimation techniques from most accurate to least accurate.
- Is Left consistently too high or is it too low? Explain why referring to a graph.
- Is Right consistently too high or is it too low? Explain why referring to a graph.
- Is Mid consistently too high or is it too low? Explain why referring to a graph.
- Is Trap consistently too high or is it too low? Explain why referring to a graph.
- Go back to the line defining f, and redefine it to be 1/(1+x).
- Repeat question 2 for this new function.
- Experiment with different values of n and explain which answers to the preceding question (8a-8d) will or will not change and why.
- (Optional) Simpson's Rule can be defined as the weighted average, Simpson=(2Mid+Trap)/3, of the Midpoint and Trapezoid Rules. Use Maple to compute the approximation given by Simpson's Rule for some of the same functions and values of n you found above. Does Simpson seem more accurate or is it less accurate? Try Simpson's rule on a variety of quadratic functions, even with low values of n. What do you notice? | http://www.plu.edu/math/math-teaching-tools/152/labs/NumericInt.html | 13 |
72 | In this section we are going to look at computing the arc
length of a function. Because it’s easy
enough to derive the formulas that we’ll use in this section we will derive one
of them and leave the other to you to derive.
We want to determine the length of the continuous function on the interval . Initially we’ll need to estimate the length
of the curve. We’ll do this by dividing
the interval up into n equal
subintervals each of width and we’ll denote the point on the curve
at each point by Pi. We can then approximate the curve by a series
of straight lines connecting the points.
Here is a sketch of this situation for a small value of n.
Now denote the length of each of these line segments by |P_{i-1} P_i| and the length of the curve will then be approximately,
L ≈ Σ (i = 1 to n) |P_{i-1} P_i|
and we can get the exact length by taking n larger and larger. In other words, the exact length will be,
L = lim (n → ∞) Σ (i = 1 to n) |P_{i-1} P_i|
Now, let’s get a better grasp on the length of each of these
line segments. First, on each segment
let’s define Δy_i = y_i - y_{i-1} = f(x_i) - f(x_{i-1}). We can then compute directly the length of the line segments as follows.
|P_{i-1} P_i| = sqrt( (x_i - x_{i-1})^2 + (y_i - y_{i-1})^2 ) = sqrt( Δx^2 + Δy_i^2 )
By the Mean Value Theorem we know that on the interval [x_{i-1}, x_i] there is a point x_i* so that,
f(x_i) - f(x_{i-1}) = f'(x_i*) (x_i - x_{i-1}) = f'(x_i*) Δx
Therefore, the length can now be written as,
|P_{i-1} P_i| = sqrt( Δx^2 + [f'(x_i*)]^2 Δx^2 ) = sqrt( 1 + [f'(x_i*)]^2 ) Δx
The exact length of the curve is then,
L = lim (n → ∞) Σ (i = 1 to n) sqrt( 1 + [f'(x_i*)]^2 ) Δx
However, using the definition of the definite integral, this is nothing more than,
L = ∫ (from a to b) sqrt( 1 + [f'(x)]^2 ) dx
A slightly more convenient notation (in my opinion anyway) is the following.
L = ∫ ds,   where ds = sqrt( 1 + (dy/dx)^2 ) dx
In a similar fashion we can also derive a formula for x = h(y) on [c, d]. This formula is,
L = ∫ (from c to d) sqrt( 1 + [h'(y)]^2 ) dy = ∫ ds,   where ds = sqrt( 1 + (dx/dy)^2 ) dy
Again, the second form is probably a little more convenient.
Note the difference in the derivative under the square
root! Don’t get too confused. With one we differentiate with respect to x and with the other we differentiate
with respect to y. One way to keep the two straight is to notice
that the differential in the “denominator” of the derivative will match up with
the differential in the integral. This
is one of the reasons why the second form is a little more convenient.
Before we work any examples we need to make a small change
in notation. Instead of having two
formulas for the arc length of a function we are going to reduce it, in part,
to a single formula.
From this point on we are going to use the following formula
for the length of the curve.
Arc Length Formula(s)
L = ∫ ds
where
ds = sqrt( 1 + (dy/dx)^2 ) dx   if y = f(x), a ≤ x ≤ b
ds = sqrt( 1 + (dx/dy)^2 ) dy   if x = h(y), c ≤ y ≤ d
Note that no limits were put on the integral as the limits
will depend upon the ds that we’re
using. Using the first ds will require x limits of integration and using the second ds will require y limits of integration.
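As a sanity check on the formula, the length can also be approximated directly by adding up short chords, which is exactly the limiting process used in the derivation above. This is only an illustrative Python sketch; the function and interval are my own choices, not the ones used in the examples below.

from math import hypot

def arc_length(f, a, b, n=100000):
    # approximate the length of y = f(x) on [a, b] by summing n short line segments
    dx = (b - a) / n
    total = 0.0
    x_prev, y_prev = a, f(a)
    for i in range(1, n + 1):
        x = a + i * dx
        y = f(x)
        total += hypot(x - x_prev, y - y_prev)
        x_prev, y_prev = x, y
    return total

# for f(x) = x**1.5 on [0, 1] the exact value is (8/27)*((13/4)**1.5 - 1)
print(arc_length(lambda x: x ** 1.5, 0.0, 1.0))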
Thinking of the arc length formula as a single integral with
different ways to define ds will be
convenient when we run across arc lengths in future sections. Also, this ds notation will be a nice notation for the next section as well.
Now that we’ve derived the arc length formula let’s work some examples.
Example 1 Determine
the length of between .
In this case we’ll need to use the first ds since the function is in the form y = f(x). So, let’s get the derivative out of the way.
Let’s also get the root out of the way since there is
often simplification that can be done and there’s no reason to do that inside the integral.
Note that we could drop the absolute value bars here since
secant is positive in the range given.
The arc length is then,
As noted in the last example we really do have a choice as
to which ds we use. Provided we can get the function in the form
required for a particular ds we can
use it. However, as also noted above,
there will often be a significant difference in difficulty in the resulting
integrals. Let’s take a quick look at
what would happen in the previous example if we did put the function into the form x = h(y).
Example 3 Redo
the previous example using the function in the form x = h(y) instead.
In this case the function and its derivative would be,
The root in the arc length formula would then be.
All the simplification work above was just to put the root
into a form that will allow us to do the integral.
Now, before we write down the integral we’ll also need to
determine the limits. This particular ds requires x limits of integration and we’ve got y limits. They are easy
enough to get however. Since we know x as a function of y all we need to do is plug in the
original y limits of integration
and get the x limits of
integration. Doing this gives,
Not easy limits to deal with, but there they are.
Let’s now write down the integral that will give the
That’s a really unpleasant looking integral. It can be evaluated however using the following substitution.
Using this substitution the integral becomes,
So, we got the same answer as in the previous
example. Although that shouldn’t
really be all that surprising since we were dealing with the same curve.
From a technical standpoint the integral in the previous
example was not that difficult. It was
just a Calculus I substitution. However,
from a practical standpoint the integral was significantly more difficult than
the integral we evaluated in Example 2.
So, the moral of the story here is that we can use either formula
(provided we can get the function in the correct form of course) however one
will often be significantly easier to actually evaluate.
Okay, let’s work one more example.
Example 4 Determine
the length of for . Assume that y is positive.
We’ll use the second ds
for this one as the function is already in the correct form for that
one. Also, the other ds would again lead to a particularly
difficult integral. The derivative and
root will then be,
Before writing down the length notice that we were given x limits and we will need y limits for this ds. With the assumption
that y is positive these are easy
enough to get. All we need to do is
plug x into our equation and solve
for y. Doing this gives,
The integral for the arc length is then,
This integral will require the following trig substitution.
The length is then,
The first couple of examples ended up being fairly simple
Calculus I substitutions. However, as
this last example has shown we can end up with trig substitutions as well for these problems.
The Pythagorean theorem states that in a right triangle the sum of its squared legs equals the square of its hypotenuse. The Pythagorean theorem is one of the most well-known theorems in mathematics and is frequently used in Geometry proofs. There are many examples of Pythagorean theorem proofs in your Geometry book and on the Internet.
The Pythagorean theorem only applies to right triangles. And what it says is if you have two legs and a hypotenuse where the hypotenuse is the side that's opposite your right angle then a special relationship exists. And that is the square of one of your legs plus the square of the other leg has to equal the square of the hypotenuse. Now there are lots of proofs. There are probably hundreds of proofs out there. And there are even books that are nothing but proofs.
I'm going to show you just one here. And it's going to start with a triangle where we have two legs A and B and we have a hypotenuse C. What I'm going to do is I'm going to draw in four more triangles.
So what I'm going to do is I'm going to draw a Side A, and then I'm going to draw in my hypotenuse C it actually looks like I need to make this a little bit longer and then I'm going to have Side B. And that is going to be a right angle.
So these triangles are congruent, even though they might not look congruent. And then once we have that, then we need to draw in our other triangle.
So this is going to be a right angle. This is going to be B, this is going to be A, and that's going to be C. And then, here is our last triangle. So this is going to be C, that's A, and that is B.
So the key to this proof - because right now it's a little confusing - is what you know about the area of squares and rectangles.
Well, let's start with this large square. So I'm talking about this square right here. How can I calculate the area of that square?
Well first, I need to know what is one of my side lengths. And it looks like one is A+B, A+B, A+B, and A+B. So to calculate the area of this whole square, we can say we're going to take one of our side lengths - which is A+B - and square it. And that has to equal the sum of its parts.
So let's start off with this smaller square here in the middle. I see that I have Side C for all of these sides on the square, so that's going to be C^2. And I'm going to have one, two, three, four congruent triangles. So I'm going to add in four times my triangles.
But how do I calculate the area of a triangle? Well that's going to be base times height. So one is B and one is A, and then we have to divide that by two. This is A times B, divided by two.
So if I took a step up here what I'm saying is the area of the big square has to equal the area of the small square, which is the one with Side C, plus the area of your four triangles.
So that's the general theory behind what we're doing here. We're saying that the big square is equal to the sum of its parts. So if we clean this up, we should end up with our Pythagorean theorem, which says A^2+B^2=C^2.
Let's go back to algebra. If I square this binomial, I'm going to have A^2, my first term. Plus, I'm going to multiply these two together, times two - so that's going to be 2(AB) and then, I'm going to have my second term squared.
So all we did was expand this binomial being squared. We've got A^2+2(AB)+B^2. I'm just going to bring down the C^2. Nothing is going to happen with that.
Four divided by two is two, so we're going to have plus 2(AB).
So if I look at this equation right now, we are not quite at our Pythagorean theorem. So I need to manipulate this equation somehow.
So I'm going to grab a different colored marker. And I see over here that I have a 2(AB) and a 2(AB) on both sides.
So I'm going to subtract 2(AB), subtract 2(AB), and then, now, I can say that we have A^2.
2(AB) minus itself is zero, plus B^2.
Over here we have 2(AB) minus itself, so that's zero. And we end up with our Pythagorean theorem.
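If you want to double-check the bookkeeping symbolically, a few lines of Python with the sympy library (not part of the original lesson) reproduce the same cancellation.

import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)

big_square   = (a + b) ** 2        # area of the outer square, side a + b
inner_square = c ** 2              # area of the tilted square with side c
triangles    = 4 * (a * b / 2)     # four congruent right triangles

# expanding and cancelling 2ab from both sides leaves a^2 + b^2 - c^2
print(sp.expand(big_square - inner_square - triangles))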
So this is one of the more common proofs of the Pythagorean theorem. It's the one that I do in my class.
And the key to this one was writing this first equation, which says that the area of the big square has to equal the area of the small square, plus the area of the four triangles inside that big square.
Here are the objectives for today's lesson.
Before you begin to study the lesson, take a few minutes to read the objectives and the study questions for this lesson.
Look for key words and ideas as you read. Use this study guide and follow it as you watch the program.
Some students find it helpful to make a note in the margin which pertains to a particular objective or a study question.
Be sure to read these objectives and refer to them as you study the lesson.
Focusing on the learning objectives will help you to study and understand the important concepts.
Compare the objectives with the study questions for the lesson to be sure that you have the concepts under control.
Before we're done with this lesson we will have learned of the important contributions of Gilbert and Bacon to the river of knowledge. We will learn Kepler's three laws of Planetary motion and use them to describe the orbits of the planets. We will learn the anatomy of the ellipse, and we will learn about the significance and implications of the laws.
Kepler's laws of planetary motion mark an important turning point in the transition from geocentrism to heliocentrism. They provide the first quantitative connection between the planets, including earth. But even more they mark a time when the important questions of the times were changing. By this time there were many intellectuals who favored the simplicity of a heliocentric system, but were unwilling to throw out the comfortable geocentrism of Ptolemy without good evidence. There remained the circular paradigm, lingering from the days of Plato. It was assumed that the heavenly motions were circular because that's the way it had always been.
The focus of the fundamental questions shifted from which system it "really" was, to what kinds of suppositions would required to justify a heliocentric reality.
What we mean is that it mattered less to keep the old theories just for the sake of keeping them. The evidence for and against both systems was being reworked and most thinkers of the time would have easily accepted heliocentrism if it was feasible.
The nagging thing that remained was the problem of motion: its cause and its effects. Even if you accepted heliocentrism, there was still the problem of what makes the planets go in curved paths, regardless of the shapes of the orbits and the changing speeds of the planets as they move along their orbits.
Although the Copernican system made astronomical calculations easier, it was seen by most mathematicians as nothing more than another mathematical device. Nonscientists in Kepler's time, like most nonscientists today, could not have cared less about the mathematical simplicity; they found no serious objections to the geocentric theory and were unwilling to change.
There were two other individuals whose ideas added to the growing river of knowledge. William Gilbert was of Tycho's generation, and Francis Bacon was a contemporary of Kepler.
Gilbert was the Royal physician to Elizabeth I who was managing England's rise to world power status. He published a book titled De Magnete, which summarized everything known to date on the properties of magnetism and electricity. The concept which most intrigued Gilbert was the ability of the magnet to attract other magnets across empty space.
Among other things, Gilbert's work was a treatise on lodestones and their use in navigation. He carved a piece of lodestone into a spherical shape, then used it as a model for earth to predict where the compass needle would point and how it would behave at different locations on a spherical earth.
Recall that, although the geocentric theory was still in favor, the flat earth idea had been lost forever when Magellan's fleet sailed around it earlier in the century.
Exactly what magnetism has to do with the planetary motions, we will get to in a little while.
Can you see a connection?
Regardless of that connection, Gilbert also made an important statement which went far in defining the new scientific paradigm that was about to bloom. In the preface to De Magnete he wrote:
Francis Bacon (not to be confused with Roger Bacon, see lesson 9) became Lord Chancellor under James I but in the same year pled guilty to accepting bribes and retired. Although his career as a statesman was tainted, his philosophical musings helped spur the growth of experimental science in England.
His contributions to the river of knowledge were in his inductive approach to experimental science, later refined by Galileo, and in an essay titled The New Atlantis. This was a utopian society based on scientific principles. That such a society might exist was a completely new idea and helped to define what would become our modern scientific principles.
Induction means to go from the specific to the general. We will study this a couple of lessons down the road, but it's good to think ahead a little. You might want to look up "induction" in the dictionary. When you've done that, look up "deduction". Can you write a short essay comparing and contrasting the two terms, and using an example?
The modern method of trial and error in problem solving was stimulated by Bacon's statement, "Truth comes out of error more easily than out of confusion."
The laws are most simply stated in their modern form, largely because Kepler did not state them clearly. In fact, it is difficult to locate them in his writings, which ramble on and on about harmony, justification for abandoning the Ptolemaic system, and defense of Copernicus. To illustrate the style of writing and also the obscurity of the principles, I want to read you Kepler's laws in his own words.
Here's what Kepler wrote:
So everybody all together: What did Kepler say? I can't hear you. OK, seriously, this excerpt contains Kepler's first and second law. You do not have to repeat it, but you should look at this quote again after you have learned the laws and their meaning.
Can't you see why this work did not cause much of a stir? This is from De Harmonice Mundi, published in 1619, and is actually a restatement of the laws in a more concise form than in The New Astronomy, published ten years earlier.
It is a tribute to Newton's genius, as if he needed another, that he was able to see in this statement one piece of the solution to the gravitational puzzle.
One of the things you will note about this passage is that Kepler says it twice, very explicitly that the Sun is the source of planetary motion.
Well, I'll spare you Kepler's wording of third law, for now. But it was his favorite, and we'll come back to it after we examine the laws in more detail.
The planets, including Earth, revolve around the sun in elliptical orbits. The sun is at one focus of the ellipse; the other is empty.
This is such a simple statement, it is amazing that it was so difficult to produce.
In this picture we see that the planet is sometimes closer and sometimes further away from the sun. The second focus of the ellipse is a geometrical point of symmetry, but has no physical reality.
It would be useful at this point to take a closer look at the anatomy and properties of the ellipse.
The ellipse is a conic section, like the circle. We saw in the last lesson how the circle and the ellipse are related in terms of slicing or sectioning a right circular cone. Now we want to consider the properties from a different perspective.
The circle can be defined as the set of all points equidistant from a single point. In other words, all the points on the circle are the same distance from the center, which is a single point. That distance is called the radius of the circle.
What about the ellipse? The ellipse can be defined as the set of points for which the sum of the distances to two fixed points is constant. Each of the two points is called a focus. Each focus plays the role for the ellipse that the center plays for the circle. We might make an analogy like this: The ellipse is to the circle as the rectangle is to the square. What does this mean?
It is not necessary for us to completely dissect the ellipse, as a mathematician might. But it is helpful to see how the ellipse is described and characterized as well as how it is constructed, and some of its properties.
There are two focuses, or foci, of the ellipse. The further apart the two foci, the more squashed the ellipse. The two foci are highly symmetrical; they are mirror images of each other.
Numerically, the focus is the distance from the center of the ellipse to one focus.
5.1.2. semi major axis
Unlike a circle which has a single radius, each ellipse has a long axis and a short axis. An axis is the length of a line that cuts the ellipse in half. Any axis will pass through the center point of the ellipse.
The semi major axis is one half of the length of the ellipse, or the distance from the center to the furthest point on the ellipse.
5.1.3. semi minor axis
The semi minor axis is one half the width of the ellipse, or the distance from the center to the closest point on the ellipse.
Now that we have seen how the ellipse is described, let's look at the construction of an ellipse . . .
Watch the video program to see how to construct an ellipse.
But before we do that, let me remind you that you do not have to memorize and reiterate all the facts about an ellipse. The point is that the ellipse is a very Pythagorean figure. It has many interesting numerical and geometric properties. In the time of Plato the conic sections had not yet been described, and it was not known how similar the ellipse and circle really are. It wasn't until Euclid that we see the in depth study of curving plane figures such as the ellipse. Except for the circle, the classical Greek mathematicians considered mostly Polygon shapes.
It was not too much of a stretch for Kepler to consider substituting one geometric figure for another when the two were closely related. Seeing that it has these properties should help you to visualize the planetary motions and understand them better. In the same way that understanding how your car's engine works might make you a better driver, even if you can't take the engine apart and put it back together again.
You might want to try this at home. It's easy to do and it really helps to understand the ellipse.
Now that we have seen the construction of the ellipse we can look at some of its other properties. Hopefully you are beginning to understand why these shapes held so much fascination for the early mathematicians. Hopefully you are also beginning to ask questions like: Why do these shapes have mathematical properties? Were the Pythagoreans correct, is there magic in the number?
In constructing the ellipse I used the property that the ellipse is the set of all points for which the sum of the distances to the two foci is constant. Since the string is a fixed length and the distance between the foci is constant, the ellipse is all of the points whose combined distance from the two foci is the same as the remainder of the string.
Why is this so? I don't know, and I don't think anyone else does either. It just is.
5.3.2. The Pythagorean Ellipse
The ellipse is a Pythagorean figure in more ways than one. It is not just a squashed circle, it is squashed in a very Pythagorean way.
If we call the semimajor axis a, the semiminor axis b, and the focus c, then the three numbers comprise a Pythagorean triplet, for all ellipses. You remember those? Three numbers which fit the Pythagorean relationship.
A line drawn from the focus to the point where the semiminor axis intersects the ellipse is exactly the same length as the semimajor axis. The three lengths form a Pythagorean triplet.
The measure of the degree of flattening of the ellipse is called the eccentricity. It is a number between zero and one which is the focus divided by the semimajor axis.
For example, if the focus is zero then the eccentricity is zero, both foci occur at the center and the figure is a circle. So we can say a circle is really a special case of an ellipse with an eccentricity of zero.
On the other hand, if the focus is the same length as the semimajor axis, then the eccentricity is one, the semiminor axis is zero, and the figure collapses to a straight line segment whose half-length is the semimajor axis (and the focus).
We can say that the straight line segment is a special case of an ellipse with eccentricity equal to one.
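A few lines of Python (my own illustration, not part of the lesson) show how the semiminor axis and the eccentricity follow from the semimajor axis and the focus; the sample numbers are arbitrary.

from math import sqrt

def ellipse_numbers(a, c):
    # semimajor axis a and focus (focal distance) c, with 0 <= c <= a
    b = sqrt(a ** 2 - c ** 2)   # semiminor axis, from the Pythagorean relation b^2 + c^2 = a^2
    e = c / a                   # eccentricity
    return b, e

print(ellipse_numbers(5, 3))    # the 3-4-5 triplet: b = 4.0, e = 0.6
print(ellipse_numbers(5, 0))    # circle: b = a, e = 0
print(ellipse_numbers(5, 5))    # degenerate straight line segment: b = 0, e = 1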
5.3.4. whispering gallery
A whispering gallery is an elliptical room with the two foci marked on the floor. The ellipse also has the property that any ray, like light or sound waves, which passes through one focus will pass through the other.
It's the equivalent of saying that if you had an elliptical pool table, then any time a ball passed over one spot (a focus) it would rebound off the bank (the ellipse) at such an angle so that it would roll over the other spot (focus).
So in the whispering gallery, a sound made by a person standing at one focus is reflected off the curved walls and focused at the other focus like sunlight through a magnifier.
As you might suspect, this is related to the constant distance of all points from the two foci.
In its motion around the sun, the line joining the planet and the sun sweeps out equal areas in equal time in all portions of its orbit.
The areas of the triangles A and B are equal everywhere in the planet's orbit.
Although A has a longer arc, its other legs are shorter. The two effects exactly counteract to give equal areas.
The area of a triangle is one half its base times its height. Although the planets move in curving arcs, for any short period of time the line is very nearly straight. So the area swept by the planet is one half the distance traveled in its arc times the distance from the planet to the sun.
Because the planet moves faster when it is closer to the sun, it moves through a larger arc in a given time. The larger arc is just offset by the shorter distance.
That the planets move at different speeds at different times is directly contrary to Plato's assertion that the speed of each planet must be constant as well as circular.
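The equal-area statement can also be checked numerically. The short Python sketch below (not part of the original lesson) integrates an orbit under an inverse-square attraction toward the sun and adds up the small triangles swept out in several equal time intervals; the starting conditions and units are arbitrary choices.

GM = 1.0                                  # arbitrary units
x, y = 1.0, 0.0                           # start at the point closest to the sun
vx, vy = 0.0, 1.2                         # below escape speed sqrt(2), so the orbit is an ellipse
dt, steps_per_interval, intervals = 1e-4, 20000, 5

def accel(px, py):
    r3 = (px * px + py * py) ** 1.5
    return -GM * px / r3, -GM * py / r3

areas = []
for _ in range(intervals):
    swept = 0.0
    for _ in range(steps_per_interval):
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        xn, yn = x + dt * vx, y + dt * vy
        ax, ay = accel(xn, yn)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        swept += 0.5 * abs(x * yn - y * xn)   # triangle: sun, old position, new position
        x, y = xn, yn
    areas.append(swept)

print(areas)   # the five swept areas agree to several decimal places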
The third law is often called the harmonic law, for it is the most Pythagorean. The third law states that the planet's period and its average distance to the sun are related by the two-thirds power.
The period is the length of time for one revolution (the planet's year) and the average distance to the sun is the average of the planet's closest and farthest distances from the sun, which is the semimajor axis of the orbit.
There are several equivalent ways to state the third law.
a. The square of the time is proportional to the cube of the distance, where the time is the time for one period, and the distance is the average distance of the planet from the sun.
b. The ratio of the time squared to the distance cubed is the same for all the planets, but the ratio for the moon is a different number.
c. The period and the average distance are related by the two-thirds power.
An easy memory device is to think of Times Square to associate the word "time" with the word "squared".
The table below contains the orbital numbers for the planets of the solar system, all of which revolve around the sun.
In this table T is the period in earth years, D is the distance from the sun in astronomical units (A.U.). The A.U. is a distance unit based on the earth's average distance from the sun. It is approximately equal to 93 million miles or 150 million kilometers.
Note that the numbers "T squared" and "D cubed" are almost identical for each of the planets. That is to say that the ratio of "T squared" to "D cubed" is very nearly one for all the planets.
The numbers used are modern determinations and are somewhat more accurate than those available in Kepler's time.
Is the fact that they are not "exactly" the same, i.e., the ratio is not "exactly" one, a significant contradiction to the law?
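The check is easy to carry out yourself. The Python snippet below uses modern approximate values for the periods and distances (my own numbers, not the table from this lesson) and prints the ratio for each planet.

planets = {
    # name: (T in Earth years, D in astronomical units)
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
    "Saturn":  (29.46, 9.537),
    "Uranus":  (84.01, 19.19),
    "Neptune": (164.8, 30.07),
}

for name, (T, D) in planets.items():
    print(f"{name:8s}  T^2 = {T ** 2:10.2f}  D^3 = {D ** 3:10.2f}  ratio = {T ** 2 / D ** 3:.4f}")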
As noted above, the laws are significant not only for their overthrow of the circular paradigm, although that is the sense in which we usually think of them. In terms of the continuity of ideas, just how radical was it really to break the circular paradigm? Was it really broken, or just bent?
Kepler spent much of his writing effort arguing that he was not really breaking anything. His writings contain logical arguments, similar to those of Ptolemy, but reaching the opposite conclusions in many cases.
It is interesting how, with good data to back one up, it is easy to argue away the objections to the moving earth which had been around since Aristotle's time.
The laws certainly supported the heliocentric theory, but that is the only way in which they were really Copernican. Kepler's orbits had no epicycles, no spheres within spheres, no other devices, moving or stationary. It took a total of one slightly flattened sphere per planet. Overall the elliptical orbits made for a much simpler understanding of the planetary motions, and more importantly, allowed a much easier and much more precise method of calculating their future whereabouts.
8.1.1. needed "reason" to stay in orbit
From the Scholastic perspective, the planets now needed a reason to keep moving, a way to keep them in orbit, and a way to explain how the elliptical orbits could remain stable.
8.1.2. central force concept
It is with Kepler that we see the beginning of the concept of central force, that is a force which acts continuously on the planets to keep them moving in closed, stable paths from one orbit to the next. It was apparent to Kepler that the force was directed towards the focus of the ellipse, but he could not describe the nature of the force.
8.1.3. Kepler guessed magnetism
He guessed it might be magnetism, largely because of Gilbert's De Magnete, published in 1600. Knowing that magnets can exert forces through empty space there was no reason to suppose that planets could not do likewise.
The sacred geometry of the universe is not violated. The planetary motions are describable in geometric terms, even if they were different terms that the ancients thought.
It is Pythagorean because it is harmonious.
It is Platonic because the ellipse is almost a circle so the circular paradigm is only bent, not broken.
It is Euclidian because it is a conic section, a family of shapes of which the circle is not only a member, but the exemplary member.
8.2.1. is Pythagorean
8.2.2. is Platonic
8.2.2.1. gave up circles but ellipse is "almost" a circle
8.2.2.2. doesn't violate circular paradigm, only bends it
8.2.3. is Euclidian
The mathematical relationships Kepler discovered made one further statement concerning the heliocentric/geocentric controversy. Kepler's critics could have argued that the ellipses were just one more device rather than a cosmology.
But from Kepler's third law, when we compare the dees squared and the tees cubed we find all of the planets have the same number which represents that ratio.
ALL of the planets, including earth.
Here, in the Pythagorean numbers was the proof that earth was a planet just like all the rest. You might say that the proof was in the Pythagorean pudding.
We might also note that the number representing the third law ratio is different for only one heavenly object.
Can you guess which one?
8.3.1. all planets have same constant, including Earth
8.3.2. moon is different from other planets
The last significant feature of Kepler's laws is that is was the first mathematical law which linked the motions of the planets together.
Aristotle's cosmology had claimed that the motions were linked, but it was a qualitative model, not a quantitative one. You recall that Ptolemy had given up on the concept of linking the motions because he found it unnecessary in order to calculate the motions. Well, the third law links them right back up again, but in a heliocentric framework, not a geocentric one.
8.4.1. previously math was for calculations only
Prior to Kepler's formulation of the laws, mathematics was used for calculations only, not for recognizing relationships. These laws were, in fact, the first general numerical relationships in physical science.
8.4.2. a quantitative connection requires an explanation
As far as Kepler's influences on Newton half a century later, it was the necessity for an explanation of some kind for the relationships he discovered, which stimulated Newton's curiosity and helped him to consider the motion of the planets in terms of the motion of the apple.
In this program we have summarized the influences of Gilbert on Kepler's work and we looked briefly at Francis Bacon whose preference for experiments would drive Galileo's investigation.
We saw that Kepler's laws of motion advanced a heliocentric view, but with the planets moving in elliptical rather than circular orbits, with the sun at a focus rather than at the center, sweeping out equal areas in equal times, and having all the same relationship between period and distance.
We also learned the properties of the ellipse in order to reinforce the Pythagorean nature of this conic section.
The study of rockets is an excellent way for students
to learn the basics of forces and
the response of an object to external forces.
The easiest rocket to build and fly is the compressed air rocket, which is often called a stomp rocket.
The system uses an air pump to launch the rocket and the rocket coasts throughout the rest of the flight.
Stomp rockets have no engine to produce thrust, so the resulting flight is similar to the flight of a shell from a cannon, or a bullet from a gun. This type of flight is called ballistic flight and assumes that weight is the only force acting on the rocket.
Stomp rockets generate a small amount of aerodynamic drag and are not strictly ballistic. On this page we develop the equations which describe the motion of a stomp rocket including the effects of drag.
To simplify our analysis, we assume a perfectly vertical
launch. If the launch is inclined at some angle, we can resolve the initial velocity into a vertical and horizontal component.
Unlike the ballistic flight equations, the horizontal
equation includes the action of aerodynamic drag on
the rocket. On this page, we assume that the horizontal force
is much less than the vertical.
For an object subject to only the forces of weight and drag,
there is a characteristic velocity which appears in many of the equations.
The characteristic velocity is called the terminal velocity
because it is the constant velocity that the object sustains during
the coasting descent.
Terminal velocity is noted by the symbol Vt.
During coasting descent,
the weight and drag of an object are equal and opposite.
There is no net force acting on the rocket and the vertical acceleration
a = 0
W = D
where a is the acceleration,
W is the weight, and D is the drag.
The weight of any object is given by the weight equation:
W = m * g
where m is the mass of the object and g is the
gravitational acceleration equal to 32.2 ft/sec^2 or 9.8 m/sec^2
on the surface of the Earth.
The gravitational acceleration has different values on the Moon and on Mars.
The drag is given by the drag equation:
D = .5 * Cd * r * A * Vt^2
where r is the gas density, Cd is the
drag coefficient which characterizes
the effects of shape of the rocket,
A is the cross-sectional area of the rocket, and Vt is the terminal velocity.
The gas density has different surface values on the Earth and
on Mars and varies with altitude. On the Moon the gas density is zero.
Combining the last three equations, we can determine the terminal
m * g = .5 * Cd * r * A * Vt^2
Vt = sqrt ( (2 * m * g) / (Cd * r * A) )
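A short Python version of this calculation is given below; the numbers plugged in are made-up illustrative values, not inputs taken from the calculator described later on this page.

from math import pi, sqrt

def terminal_velocity(m, Cd, A, rho, g=9.8):
    # Vt = sqrt(2*m*g / (Cd * rho * A)), in metric units
    return sqrt(2.0 * m * g / (Cd * rho * A))

m = 0.05                 # mass in kg (a light paper rocket)
A = pi * 0.02 ** 2       # cross-sectional area of a 2 cm radius body tube, m^2
Cd = 0.75                # assumed drag coefficient
rho = 1.225              # sea-level air density, kg/m^3
print(terminal_velocity(m, Cd, A, rho))   # roughly 29 m/s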
Now, turning to the ascent trajectory, the rocket is traveling
at an initial vertical velocity Vo. For the stomp rocket the
velocity is set by the
launch mechanism and
there is no thrust once the rocket is launched. With the positive
vertical coordinate denoted by y, the net vertical force Fnet
acting on the rocket is given by:
Fnet = -W -D
Because the weight of the object is a constant, we can use the
simple form of Newton's second law to solve for the vertical
Fnet = m a = -W - D
m a = - (m * g) - (.5 * Cd * r * A * v^2)
a = -g - (Cd * r * A * v^2) / (2 * m)
Multiply the last term by g/g and use the definition of the
terminal velocity to obtain:
a = -g * (1 + v^2 / Vt^2)
The acceleration is the time rate of change of velocity :
a = dv/dt = -g * (1 + v^2 / Vt^2)
Integrating this differential equation:
dv / (1 + v^2 / Vt^2) = -g dt
Vt * tan-1(v/Vt) = -g * t
where tan-1 is the inverse tangent function, and t is time.
The limits of integration for velocity v is from Vo to V
and the limits for time t is from 0 to t:
tan-1(V/Vt) - tan-1(Vo/Vt) = - g * t / Vt
tan-1(V/Vt) = tan-1(Vo/Vt) - g * t / Vt
Now take the tangent function of both sides of the equation and use the trig identity
tan(a - b) = (tan(a) - tan(b))/(1 + tan(a)*tan(b))
on the right hand side to obtain:
V/Vt = (Vo/Vt - tan(g * t / Vt)) / (1 + (Vo/Vt) * tan (g * t / Vt))
V/Vt = (Vo - Vt * tan(g * t / Vt)) / (Vt + Vo * tan (g * t / Vt))
This is the equation for the velocity at any time during the coasting ascent.
At the top of the trajectory, the velocity is zero. We can solve the velocity
equation to determine the time when this occurs:
Vo/Vt = tan(g * t(v=o) / Vt)
t(v=o) = (Vt / g) * tan-1(Vo/Vt)
To determine the vertical location during the ascent, we have to use
another identity from differential calculus:
dv/dt = dv/dy * dy/dt
dv/dt = v * dv/dy
We previously determined that
dv/dt = -g * (1 + v^2 / Vt^2)
v * dv/dy = -g * (1 + v^2 / Vt^2)
(v /(1 + v^2 / Vt^2)) * dv = -g dy
Integrating both sides:
(Vt^2 / 2) * (ln (v^2 + Vt^2)) = - g * y
where ln is the natural logarithmic function.
The limits of integration for velocity v is from Vo to V
and the limits for direction y is from 0 to y:
(Vt^2 / 2) * (ln (V^2 + Vt^2) - ln (Vo^2 + Vt^2)) = - g * y
Notice that the location equation is pretty messy! For a given time t,
we would have to find the local velocity V, and then plug that
value into the location equation to get the location y.
At the maximum height ymax, the velocity is equal to zero:
ymax = (Vt^2 / (2 * g)) * ln ((Vo^2 + Vt^2)/Vt^2)
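The two results above are easy to evaluate for particular numbers. The Python sketch below (with made-up inputs, not values from the Fltcalc program) computes the time to reach the top and the maximum height, and compares them with the drag-free ballistic values Vo/g and Vo^2/(2g); drag always lowers both numbers.

from math import atan, log

def coasting_ascent(Vo, Vt, g=9.8):
    t_top = (Vt / g) * atan(Vo / Vt)                                 # time at which V = 0
    y_max = (Vt ** 2 / (2.0 * g)) * log((Vo ** 2 + Vt ** 2) / Vt ** 2)
    return t_top, y_max

Vo, Vt = 40.0, 30.0        # initial and terminal velocity in m/s (illustrative values)
print("with drag:", coasting_ascent(Vo, Vt))
print("no drag  :", (Vo / 9.8, Vo ** 2 / (2 * 9.8)))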
Here's a Java calculator which solves the
equations presented on this page:
To operate the calculator, you first select the planet using the choice button
at the top left.
select the "Ignore Drag" option with the middle choice button.
For flight with drag, select "Include Drag" with the middle choice button.
You can perform the calculations in English (Imperial) or metric units.
Enter the initial velocity.
Since we are performing the calculation with drag, we must specify
the object's weight, cross sectional area, and drag coefficient. The air density is determined by the altitude, or it can be input directly.
Press the red "Compute" button to compute the maximum height and
the time to maximum height.
The program also outputs the terminal velocity, as described above.
We provide an on-line web page that contains only this calculator.
You can also download your own copy of the calculator for use off-line.
The program is provided as Fltcalc.zip. You must save this file on your hard drive
and "Extract" the necessary files from Fltcalc.zip. Click on "Fltcalc.html"
to launch your browser and load the program.
You can also study the flight characteristics of a ballistic
object with drag by using the on-line
Notice: If you toggle the middle choice button between "Ignore Drag"
and "Include Drag", you will notice that the computed height is always
less when including the drag. The amount of the difference indicates the
importance of drag for certain flight conditions. Also consult the
web page for some warnings concerning cases with high terminal velocity.
If you hold the initial velocity constant, and increase only the weight,
you will notice that the maximum height gradually approaches the
ballistic flight value.
Mathematics Grade 5
|Printable Version (pdf)|
(1) Students apply their understanding of fractions and fraction models to represent the addition and subtraction of fractions with unlike denominators as equivalent calculations with like denominators. They develop fluency in calculating sums and differences of fractions, and make reasonable estimates of them. Students also use the meaning of fractions, of multiplication and division, and the relationship between multiplication and division to understand and explain why the procedures for multiplying and dividing fractions make sense. (Note: this is limited to the case of dividing unit fractions by whole numbers and whole numbers by unit fractions.)
(2) Students develop understanding of why division procedures work based on the meaning of base-ten numerals and properties of operations. They finalize fluency with multi-digit addition, subtraction, multiplication, and division. They apply their understandings of models for decimals, decimal notation, and properties of operations to add and subtract decimals to hundredths. They develop fluency in these computations, and make reasonable estimates of their results. Students use the relationship between decimals and fractions, as well as the relationship between finite decimals and whole numbers (i.e., a finite decimal multiplied by an appropriate power of 10 is a whole number), to understand and explain why the procedures for multiplying and dividing finite decimals make sense. They compute products and quotients of decimals to hundredths efficiently and accurately.
(3) Students recognize volume as an attribute of three-dimensional space. They understand that volume can be measured by finding the total number of same-size units of volume required to fill the space without gaps or overlaps. They understand that a 1-unit by 1-unit by 1-unit cube is the standard unit for measuring volume. They select appropriate units, strategies, and tools for solving problems that involve estimating and measuring volume. They decompose three-dimensional shapes and find volumes of right rectangular prisms by viewing them as decomposed into layers of arrays of cubes. They measure necessary attributes of shapes in order to determine volumes to solve real world and mathematical problems.
Grade 5 Overview
Operations and Algebraic Thinking
Number and Operations in Base Ten
Number and Operations - Fractions
Measurement and Data
Core Standards of the Course
2. Write simple expressions that record calculations with numbers, and interpret numerical expressions without evaluating them. For example, express the calculation "add 8 and 7, then multiply by 2" as 2 x (8 + 7). Recognize that 3 x (18932 + 921) is three times as large as 18932 + 921, without having to calculate the indicated sum or product.
3. Generate two numerical patterns using two given rules. Identify apparent relationships between corresponding terms. Form ordered pairs consisting of corresponding terms from the two patterns, and graph the ordered pairs on a coordinate plane. For example, given the rule "Add 3" and the starting number 0, and given the rule "Add 6" and the starting number 0, generate terms in the resulting sequences, and observe that the terms in one sequence are twice the corresponding terms in the other sequence. Explain informally why this is so.
2. Explain patterns in the number of zeros of the product when multiplying a number by powers of 10, and explain patterns in the placement of the decimal point when a decimal is multiplied or divided by a power of 10. Use whole-number exponents to denote powers of 10.
- Read and write decimals to thousandths using base-ten numerals, number names, and expanded form, e.g., 347.392 = 3 × 100 + 4 × 10 + 7 × 1 + 3 × (1/10) + 9 × (1/100) + 2 × (1/1000).
- Compare two decimals to thousandths based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
6. Find whole-number quotients of whole numbers with up to four-digit dividends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
7. Add, subtract, multiply, and divide decimals to hundredths, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.
1. Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators. For example, 2/3 + 5/4 = 8/12 + 15/12 = 23/12. (In general, a/b + c/d = (ad + bc)/bd.)
2. Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers. For example, recognize an incorrect result 2/5 + 1/2 = 3/7, by observing that 3/7 < 1/2.
3. Interpret a fraction as division of the numerator by the denominator (a/b = a ÷ b). Solve word problems involving division of whole numbers leading to answers in the form of fractions or mixed numbers, e.g., by using visual fraction models or equations to represent the problem. For example, interpret 3/4 as the result of dividing 3 by 4, noting that 3/4 multiplied by 4 equals 3, and that when 3 wholes are shared equally among 4 people each person has a share of size 3/4. If 9 people want to share a 50-pound sack of rice equally by weight, how many pounds of rice should each person get? Between what two whole numbers does your answer lie?
- Interpret the product (a/b) × q as a parts of a partition of q into b equal parts; equivalently, as the result of a sequence of operations a × q ÷ b. For example, use a visual fraction model to show (2/3) × 4 = 8/3, and create a story context for this equation. Do the same with (2/3) × (4/5) = 8/15. (In general, (a/b) × (c/d) = ac/bd.)
- Find the area of a rectangle with fractional side lengths by tiling it with unit squares of the appropriate unit fraction side lengths, and show that the area is the same as would be found by multiplying the side lengths. Multiply fractional side lengths to find areas of rectangles, and represent fraction products as rectangular areas.
- Comparing the size of a product to the size of one factor on the basis of the size of the other factor, without performing the indicated multiplication.
- Explaining why multiplying a given number by a fraction greater than 1 results in a product greater than the given number (recognizing multiplication by whole numbers greater than 1 as a familiar case); explaining why multiplying a given number by a fraction less than 1 results in a product smaller than the given number; and relating the principle of fraction equivalence a/b = (n × a)/(n × b) to the effect of multiplying a/b by 1.
- Interpret division of a unit fraction by a non-zero whole number, and compute such quotients. For example, create a story context for (1/3) ÷ 4, and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that (1/3) ÷ 4 = 1/12 because (1/12) × 4 = 1/3.
- Interpret division of a whole number by a unit fraction, and compute such quotients. For example, create a story context for 4 ÷ (1/5), and use a visual fraction model to show the quotient. Use the relationship between multiplication and division to explain that 4 ÷ (1/5) = 20 because 20 × (1/5) = 4.
- Solve real world problems involving division of unit fractions by non-zero whole numbers and division of whole numbers by unit fractions, e.g., by using visual fraction models and equations to represent the problem. For example, how much chocolate will each person get if 3 people share 1/2 lb of chocolate equally? How many 1/3-cup servings are in 2 cups of raisins?
2. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Use operations on fractions for this grade to solve problems involving information presented in line plots. For example, given different measurements of liquid in identical beakers, find the amount of liquid each beaker would contain if the total amount in all the beakers were redistributed equally.
- A cube with side length 1 unit, called a “unit cube,” is said to have “one cubic unit” of volume, and can be used to measure volume.
- A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.
- Find the volume of a right rectangular prism with whole-number side lengths by packing it with unit cubes, and show that the volume is the same as would be found by multiplying the edge lengths, equivalently by multiplying the height by the area of the base. Represent threefold whole-number products as volumes, e.g., to represent the associative property of multiplication.
- Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems.
- Recognize volume as additive. Find volumes of solid figures composed of two non-overlapping right rectangular prisms by adding the volumes of the non-overlapping parts, applying this technique to solve real world problems.
1. Use a pair of perpendicular number lines, called axes, to define a coordinate system, with the intersection of the lines (the origin) arranged to coincide with the 0 on each line and a given point in the plane located by using an ordered pair of numbers, called its coordinates. Understand that the first number indicates how far to travel from the origin in the direction of one axis, and the second number indicates how far to travel in the direction of the second axis, with the convention that the names of the two axes and the coordinates correspond (e.g., x-axis and x-coordinate, y-axis and y-coordinate).
3. Understand that attributes belonging to a category of two-dimensional figures also belong to all subcategories of that category. For example, all rectangles have four right angles and squares are rectangles, so all squares have four right angles.
These materials have been produced by and for the teachers of the State of Utah. Copies of these materials may be freely reproduced for teacher and classroom use. When distributing these materials, credit should be given to Utah State Office of Education. These materials may not be published, in whole or part, or in any other format, without the written permission of the Utah State Office of Education, 250 East 500 South, PO Box 144200, Salt Lake City, Utah 84114-4200.
For more information about this core curriculum, contact the USOE Specialist, DAVID SMITH, or visit the Mathematics - Elementary Home Page. For general questions about Utah's Core Curriculum, contact the USOE Curriculum Director, Sydnee Dickson.
The wave equation is an important second-order linear partial differential equation for the description of waves – as they occur in physics – such as sound waves, light waves and water waves. It arises in fields like acoustics, electromagnetics, and fluid dynamics. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange.
Wave equations are examples of hyperbolic partial differential equations, but there are many variations.
In its simplest form, the wave equation concerns a time variable t, one or more spatial variables x1, x2, …, xn, and a scalar function u = u (x1, x2, …, xn; t), whose values could model the displacement of a wave. The wave equation for u is
∂²u/∂t² = c² ∇²u
where ∇² is the (spatial) Laplacian and c is a fixed constant.
Solutions of this equation that are initially zero outside some restricted region propagate out from the region at a fixed speed in all spatial directions, as do physical waves from a localized disturbance; the constant c is identified with the propagation speed of the wave. This equation is linear, as the sum of any two solutions is again a solution: in physics this property is called the superposition principle.
The equation alone does not specify a solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the value and velocity of the wave. Another important class of problems specifies boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
The elastic wave equation in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:
ρ ∂²u/∂t² = f + (λ + 2μ) ∇(∇ ⋅ u) − μ ∇ × (∇ × u)
- λ and μ are the so-called Lamé parameters describing the elastic properties of the medium,
- ρ is the density,
- f is the source function (driving force),
- and u is the displacement vector.
Note that in this equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation.
Scalar wave equation in one space dimension
Derivation of the wave equation
From Hooke's law
The wave equation in the one dimensional case can be derived from Hooke's law in the following way: Imagine an array of little weights of mass m interconnected with massless springs of length h . The springs have a spring constant of k:
Here u(x) measures the distance from the equilibrium of the mass situated at x. The forces exerted on the mass m at the location x+h are:
F_Newton = m · a(t) = m · ∂²u(x+h, t)/∂t²
F_Hooke = F_{x+2h} + F_x = k [u(x+2h, t) − u(x+h, t)] + k [u(x, t) − u(x+h, t)]
The equation of motion for the weight at the location x+h is given by equating these two forces:
∂²u(x+h, t)/∂t² = (k/m) [u(x+2h, t) − 2u(x+h, t) + u(x, t)]
where the time-dependence of u(x) has been made explicit.
If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array K = k/N, we can write the above equation as:
∂²u(x+h, t)/∂t² = (KL²/M) · [u(x+2h, t) − 2u(x+h, t) + u(x, t)] / h²
Taking the limit N → ∞, h → 0 and assuming smoothness one gets:
∂²u(x, t)/∂t² = (KL²/M) · ∂²u(x, t)/∂x²
KL²/M is the square of the propagation speed in this particular case.
General solution
The change of variables ξ = x − ct, η = x + ct changes the wave equation into
∂²u/(∂ξ ∂η) = 0,
which leads to the general solution
u(x, t) = F(x − ct) + G(x + ct).
In other words, solutions of the 1D wave equation are sums of a right traveling function F and a left traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant, however the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert.
Another way to arrive at this result is to note that the wave equation may be "factored":
[∂/∂t − c ∂/∂x] [∂/∂t + c ∂/∂x] u = 0,
so that u can be built from solutions of the two first-order equations
u_t + c u_x = 0   and   u_t − c u_x = 0.
These last two equations are advection equations, one left traveling and one right, both with constant speed c.
For an initial value problem, the arbitrary functions F and G can be determined to satisfy initial conditions:
u(x, 0) = f(x),   u_t(x, 0) = g(x).
The result is d'Alembert's formula:
u(x, t) = [ f(x − ct) + f(x + ct) ] / 2 + (1/(2c)) ∫ (from x − ct to x + ct) g(s) ds
In the classical sense if f(x) ∈ Ck and g(x) ∈ Ck−1 then u(t, x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta-function. In that case, the solution may be interpreted as an impulse that travels to the right or the left.
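The claim that any function of the form F(x − ct) + G(x + ct) solves the equation can be verified symbolically; the Python/sympy check below is an illustration added here, not part of the original article.

import sympy as sp

x, t, c = sp.symbols('x t c')
F, G = sp.Function('F'), sp.Function('G')

u = F(x - c * t) + G(x + c * t)
residual = sp.diff(u, t, 2) - c ** 2 * sp.diff(u, x, 2)
print(sp.simplify(residual))   # prints 0 for arbitrary twice-differentiable F and G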
The basic wave equation is a linear differential equation and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.
Scalar wave equation in three space dimensions
The solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the solution for a spherical wave. This result can then be used to obtain the solution in two space dimensions.
Spherical waves
The wave equation is unchanged under rotations of the spatial coordinates, because the Laplacian operator is invariant under rotation, and therefore one may expect to find solutions that depend only on the radial distance r from a given point. Such solutions must satisfy
u_tt = c² ( u_rr + (2/r) u_r ).
This equation may be rewritten as
(ru)_tt − c² (ru)_rr = 0;
the quantity ru satisfies the one-dimensional wave equation. Therefore there are solutions in the form
u(t, r) = (1/r) F(r − ct) + (1/r) G(r + ct),
where F and G are arbitrary functions. Each term may be interpreted as a spherical wave that expands or contracts with velocity c. Such waves are generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as r increases (see an illustration of a spherical wave on the top right). Such waves exist only in cases of space with odd dimensions.
Monochromatic spherical wave
A point source is vibrating at a single frequency f with phase = 0 at t = 0 with a peak-to-peak magnitude of 2a. A spherical wave is propagated from the point. The phase of the propagated wave changes as kr where r is the distance travelled from the source. The magnitude falls off as 1/r since the energy falls off as r−2. The amplitude of the spherical wave at r is therefore given by:
Solution of a general initial-value problem
The wave equation is linear in u and it is left unaltered by translations in space and time. Therefore we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ,η,ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta-function: that is, let F be a weak limit of continuous functions whose integral is unity, but whose support (the region where the function is non-zero) shrinks to the origin. Let a family of spherical waves have center at (ξ,η,ζ), and let r be the radial distance from that point. Thus
If u is a superposition of such waves with weighting function φ, then
the denominator 4πc is a convenience.
From the definition of the delta-function, u may also be written as
where α, β, and γ are coordinates on the unit sphere S, and ω is the area element on S. This result has the interpretation that u(t,x) is t times the mean value of φ on a sphere of radius ct centered at x:
It follows that
The mean value is an even function of t, and hence if
These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P, given (t,x,y,z) depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. It is not satisfied in even space dimensions. The phenomenon of lacunas has been extensively investigated in Atiyah, Bott and Gårding (1970, 1973).
Scalar wave equation in two space dimensions
In two space dimensions, the wave equation is
We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If
then the three-dimensional solution formula becomes
where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as an integral over the disc D with center (x,y) and radius ct:
It is apparent that the solution at (t,x,y) depends not only on the data on the light cone where
but also on data that are interior to that cone.
Scalar wave equation in general dimension and Kirchhoff's formulae
We want to find solutions to utt−Δu = 0 for u : Rn × (0, ∞) → R with u(x, 0) = g(x) and ut(x, 0) = h(x). See Evans for more details.
Odd dimensions
Assume n ≥ 3 is an odd integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn) for m = (n+1)/2. Let and let
- u ∈ C2(Rn × [0, ∞))
- utt−Δu = 0 in Rn × (0, ∞)
Even dimensions
Assume n ≥ 2 is an even integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn), for m = (n+2)/2. Let and let
- u ∈ C2(Rn × [0, ∞))
- utt−Δu = 0 in Rn × (0, ∞)
Problems with boundaries
One space dimension
The Sturm-Liouville formulation
A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is
where a and b are non-negative. The case where u is required to vanish at an endpoint is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form
A consequence is that
The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem
This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series.
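As a hedged sketch of the fixed-end limiting case mentioned above (a and b tending to infinity, so v(0) = v(L) = 0), the code below discretizes −v″ = λv with second-order finite differences and compares the smallest eigenvalues with the exact trigonometric values λ_n = (nπ/L)²; the length L and the grid size are arbitrary choices.

```python
import numpy as np

L, n = 1.0, 200                       # string length and number of interior grid points (arbitrary)
h = L / (n + 1)

# Finite-difference matrix for -v'' with v(0) = v(L) = 0.
A = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

eigvals = np.sort(np.linalg.eigvalsh(A))[:4]
exact = [(k * np.pi / L) ** 2 for k in range(1, 5)]

for num, ex in zip(eigvals, exact):
    print(f"numerical {num:10.4f}   exact (n*pi/L)^2 {ex:10.4f}")
```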
Investigation by numerical methods
Approximating the continuous string with a finite number of equidistant mass points one gets the following physical model:
If each mass point has the mass m, the tension of the string is f, the separation between the mass points is Δx, and ui, i = 1, ..., n are the offsets of these n points from their equilibrium positions (i.e. their positions on a straight line between the two attachment points of the string), then the vertical component of the force towards point i+1 is
and the vertical component of the force towards point i−1 is
Taking the sum of these two forces and dividing with the mass m one gets for the vertical motion:
As the mass density is
this can be written
The wave equation is obtained by letting Δx → 0, in which case ui(t) takes the form u(x, t), where u(x, t) is a continuous function of two variables, the second time derivative of ui takes the form ∂²u/∂t², and
where L is the length of the string, takes in the discrete formulation the form that, for the outermost points u1 and un, the equations of motion are
while for 1 < i < n
If the string is approximated with 100 discrete mass points, one gets 100 coupled second-order differential equations (5), (6) and (7) or, equivalently, 200 coupled first-order differential equations.
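A minimal sketch of this semi-discretization, under stated assumptions: fixed ends (u_0 = u_{n+1} = 0), the combination f/ρ absorbed into a single placeholder wave speed c, and an initial shape released from rest. The text mentions an 8th-order multistep integrator; the sketch simply uses SciPy's general-purpose solve_ivp, which is enough for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 100                       # number of mass points (as in the text)
L = 1.0                       # string length (placeholder)
c = 1.0                       # nominal wave speed (placeholder, c^2 = f / rho)
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)

def rhs(t, y):
    """State y = [u_1..u_n, du_1..du_n]; fixed ends u_0 = u_{n+1} = 0."""
    u, v = y[:n], y[n:]
    upad = np.concatenate(([0.0], u, [0.0]))
    accel = c**2 * (upad[2:] - 2.0 * upad[1:-1] + upad[:-2]) / dx**2
    return np.concatenate((v, accel))

u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # initial shape, released from rest (all du/dt = 0)
y0 = np.concatenate((u0, np.zeros(n)))

t_end = 0.25 * L / c                       # time for the wave to cross a quarter of the string
sol = solve_ivp(rhs, (0.0, t_end), y0, t_eval=[0.0, t_end], rtol=1e-8, atol=1e-10)
print("max displacement at t = L/(4c):", sol.y[:n, -1].max())
```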
Propagating these up to the times
using an 8th-order multistep method, the 6 states displayed in figure 2 are found:
The red curve is the initial state at time zero, at which the string is "let free" in a predefined shape with all initial velocities equal to zero. The blue curve is the state at time L/(4c), i.e. after a time that corresponds to the time a wave that is moving with the nominal wave velocity c would need for one fourth of the length of the string.
Figure 3 displays the shape of the string at subsequent times. The wave travels to the right with the speed c without being actively constrained by the boundary conditions at the two ends of the string. The shape of the wave is constant, i.e. the curve is indeed of the form f(x − ct).
Figure 4 displays the shape of the string at later times. The constraint on the right end starts to interfere with the motion, preventing the wave from raising the end of the string.
Figure 5 displays the shape of the string at the times when the direction of motion is reversed. The red, green and blue curves are the earlier of these states, while the 3 black curves correspond to the later states, with the wave starting to move back towards the left.
Figure 6 and figure 7 finally display the shape of the string at still later times. The wave now travels towards the left, and the constraints at the end points are no longer active. When the wave finally reaches the other end of the string, the direction will again be reversed in a way similar to what is displayed in figure 6.
Several space dimensions
The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D and t > 0. On the boundary of D, the solution u shall satisfy
where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are
where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies
in D, and
In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.
Inhomogeneous wave equation in one dimension
The inhomogeneous wave equation in one dimension is the following:
with initial conditions given by
The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them. Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism.
One method to solve the initial value problem (with the initial values as posed above) is to take advantage of the property of the wave equation that its solutions obey causality. That is, for any point (xi, ti), the value of u(xi, ti) depends only on the values of f(xi + cti) and f(xi − cti) and the values of the function g(x) between (xi − cti) and (xi + cti). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that appear in it. Physically, if the maximum propagation speed is c, then no part of the wave that cannot propagate to a given point by a given time can affect the amplitude at the same point and time.
In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point (xi, ti) as RC. Suppose we integrate the inhomogeneous wave equation over this region.
To simplify this greatly, we can use Green's theorem to simplify the left side to get the following:
The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute.
In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0.
For the other two sides of the region, it is worth noting that x ± ct is a constant, namely xi ± cti, where the sign is chosen appropriately. Using this, we can get the relation dx ± c dt = 0, again choosing the right sign:
And similarly for the final boundary segment:
Adding the three results together and putting them back in the original integral:
Solving for u(xi, ti) we arrive at
In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (xi, ti) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.
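A hedged numerical sketch of the structure of this solution: the two d'Alembert terms plus (1/2c) times the integral of the source over the causal triangle. The functions f, g and s are placeholders and the quadrature is intentionally crude; this illustrates the formula rather than implementing a validated solver.

```python
import numpy as np

c = 1.0

def f(x):  return np.exp(-x**2)              # placeholder initial displacement u(x, 0)
def g(x):  return np.zeros_like(x)           # placeholder initial velocity u_t(x, 0)
def s(x, t): return np.sin(x) * np.exp(-t)   # placeholder source term

def solution(xi, ti, n=400):
    # d'Alembert part: average of f on the light cone plus the integral of g between its legs.
    xs = np.linspace(xi - c * ti, xi + c * ti, n)
    dalembert = 0.5 * (f(xi + c * ti) + f(xi - c * ti)) + np.trapz(g(xs), xs) / (2.0 * c)

    # Source contribution: (1 / 2c) * integral of s over the causal triangle R_C.
    ts = np.linspace(0.0, ti, n)
    inner = []
    for t in ts:
        xr = np.linspace(xi - c * (ti - t), xi + c * (ti - t), n)
        inner.append(np.trapz(s(xr, t), xr))
    return dalembert + np.trapz(np.array(inner), ts) / (2.0 * c)

print(solution(0.0, 1.0))
```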
Other coordinate systems
See also
- Acoustic wave equation
- Acoustic attenuation
- Electromagnetic wave equation
- Helmholtz equation
- Inhomogeneous electromagnetic wave equation
- Laplace operator
- Schrödinger equation
- Standing wave
- Vibrations of a circular drum
- Bateman transform
- Maxwell's equations
- Wheeler-Feynman absorber theory
- Cannon, John T.; Dostrovsky, Sigalia (1981). The Evolution of Dynamics: Vibration Theory from 1687 to 1742. Studies in the History of Mathematics and Physical Sciences 6. New York: Springer-Verlag. ix + 184 pp. ISBN 0-387-90626-6. Gray, J. W. (July 1983). "Book Reviews". Bulletin (New Series) of the American Mathematical Society 9 (1). (retrieved 13 Nov 2012).
- Gerard F Wheeler. The Vibrating String Controversy, (retrieved 13 Nov 2012). Am. J. Phys., 1987, v55, n1, p33-37.
- For a special collection of the 9 groundbreaking papers by the three authors, see First Appearance of the wave equation: D'Alembert, Leonhard Euler, Daniel Bernoulli. - the controversy about vibrating strings (retrieved 13 Nov 2012). Herman HJ Lynge and Son.
- For Lagrange's contributions to the acoustic wave equation, one can consult Acoustics: An Introduction to Its Physical Principles and Applications, Allan D. Pierce, Acoustical Soc of America, 1989; page 18. (retrieved 9 Dec 2012)
- Eric W. Weisstein. "d'Alembert's Solution". MathWorld. Retrieved 2009-01-21.
- D'Alembert (1747) "Recherches sur la courbe que forme une corde tenduë mise en vibration" (Researches on the curve that a tense cord forms [when] set into vibration), Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 3, pages 214-219.
- See also: D'Alembert (1747) "Suite des recherches sur la courbe que forme une corde tenduë mise en vibration" (Further researches on the curve that a tense cord forms [when] set into vibration), Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 3, pages 220-249.
- See also: D'Alembert (1750) "Addition au mémoire sur la courbe que forme une corde tenduë mise en vibration," Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 6, pages 355-360.
- RS Longhurst, Geometrical and Physical Optics, 1967, Longmans, Norwich
- The initial state for "Investigation by numerical methods" is set with quadratic splines as follows:
- M. F. Atiyah, R. Bott, L. Garding, "Lacunas for hyperbolic differential operators with constant coefficients I", Acta Math., 124 (1970), 109–189.
- M.F. Atiyah, R. Bott, and L. Garding, "Lacunas for hyperbolic differential operators with constant coefficients II", Acta Math., 131 (1973), 145–206.
- R. Courant, D. Hilbert, Methods of Mathematical Physics, vol II. Interscience (Wiley) New York, 1962.
- L. Evans, "Partial Differential Equations". American Mathematical Society Providence, 1998.
- "Linear Wave Equations", EqWorld: The World of Mathematical Equations.
- "Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations.
- William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET.
- Francis Redfern. "Kinematic Derivation of the Wave Equation". Physics Journal. — a step-by-step derivation suitable for an introductory approach to the subject.
- Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp, and Nonlinear Wave Equation Explorer by Stephen Wolfram, at the Wolfram Demonstrations Project.
- Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki.
- Graham W Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308
Redshift
In physics (especially astrophysics), redshift happens when light or other electromagnetic radiation from an object moving away from the observer is increased in wavelength, or shifted to the red end of the spectrum. In general, whether or not the radiation is within the visible spectrum, "redder" means an increase in wavelength – equivalent to a lower frequency and a lower photon energy, in accordance with, respectively, the wave and quantum theories of light.
Redshifts are an example of the Doppler effect, familiar in the change in the apparent pitches of sirens and frequency of the sound waves emitted by speeding vehicles. A redshift occurs whenever a light source moves away from an observer. Cosmological redshift is seen due to the expansion of the universe, and sufficiently distant light sources (generally more than a few million light years away) show redshift corresponding to the rate of increase in their distance from Earth. Finally, gravitational redshifts are a relativistic effect observed in electromagnetic radiation moving out of gravitational fields. Conversely, a decrease in wavelength is called blueshift and is generally seen when a light-emitting object moves toward an observer or when electromagnetic radiation moves into a gravitational field.
Although observations of redshifts and blueshifts have several terrestrial applications (such as Doppler radar and radar guns), redshifts are most famously seen in the spectroscopic observations of astronomical objects.
A special relativistic redshift formula (and its classical approximation) can be used to calculate the redshift of a nearby object when spacetime is flat. However, many cases such as black holes and Big Bang cosmology require that redshifts be calculated using general relativity. Special relativistic, gravitational, and cosmological redshifts can be understood under the umbrella of frame transformation laws. There exist other physical processes that can lead to a shift in the frequency of electromagnetic radiation, including scattering and optical effects; however, the resulting changes are distinguishable from true redshift and not generally referred to as such (see section on physical optics and radiative transfer).
The history of the subject began with the development in the 19th century of wave mechanics and the exploration of phenomena associated with the Doppler effect. The effect is named after Christian Doppler, who offered the first known physical explanation for the phenomenon in 1842. The hypothesis was tested and confirmed for sound waves by the Dutch scientist Christophorus Buys Ballot in 1845. Doppler correctly predicted that the phenomenon should apply to all waves, and in particular suggested that the varying colors of stars could be attributed to their motion with respect to the Earth. Before this was verified, however, it was found that stellar colors were primarily due to a star's temperature, not motion. Only later was Doppler vindicated by verified redshift observations.
The first Doppler redshift was described by French physicist Hippolyte Fizeau in 1848, who pointed to the shift in spectral lines seen in stars as being due to the Doppler effect. The effect is sometimes called the "Doppler–Fizeau effect". In 1868, British astronomer William Huggins was the first to determine the velocity of a star moving away from the Earth by this method. In 1871, optical redshift was confirmed when the phenomenon was observed in Fraunhofer lines using solar rotation, about 0.1 Å in the red. In 1887, Vogel and Scheiner discovered the annual Doppler effect, the yearly change in the Doppler shift of stars located near the ecliptic due to the orbital velocity of the Earth. In 1901, Aristarkh Belopolsky verified optical redshift in the laboratory using a system of rotating mirrors.
The earliest occurrence of the term "red-shift" in print (in this hyphenated form), appears to be by American astronomer Walter S. Adams in 1908, where he mentions "Two methods of investigating that nature of the nebular red-shift". The word doesn't appear unhyphenated until about 1934 by Willem de Sitter, perhaps indicating that up to that point its German equivalent, Rotverschiebung, was more commonly used.
Beginning with observations in 1912, Vesto Slipher discovered that most spiral nebulae had considerable redshifts. Slipher first reports on his measurement in the inaugural volume of the Lowell Observatory Bulletin. Three years later, he wrote a review in the journal Popular Astronomy. In it he states, "[...] the early discovery that the great Andromeda spiral had the quite exceptional velocity of –300 km(/s) showed the means then available, capable of investigating not only the spectra of the spirals but their velocities as well." Slipher reported the velocities for 15 spiral nebulae spread across the entire celestial sphere, all but three having observable "positive" (that is recessional) velocities. Subsequently, Edwin Hubble discovered an approximate relationship between the redshifts of such "nebulae" (now known to be galaxies in their own right) and the distances to them with the formulation of his eponymous Hubble's law. These observations corroborated Alexander Friedmann's 1922 work, in which he derived the famous Friedmann equations. They are today considered strong evidence for an expanding universe and the Big Bang theory.
Measurement, characterization, and interpretation
The spectrum of light that comes from a single source (see idealized spectrum illustration top-right) can be measured. To determine the redshift, one searches for features in the spectrum such as absorption lines, emission lines, or other variations in light intensity. If found, these features can be compared with known features in the spectrum of various chemical compounds found in experiments where that compound is located on earth. A very common atomic element in space is hydrogen. The spectrum of originally featureless light shone through hydrogen will show a signature spectrum specific to hydrogen that has features at regular intervals. If restricted to absorption lines it would look similar to the illustration (top right). If the same pattern of intervals is seen in an observed spectrum from a distant source but occurring at shifted wavelengths, it can be identified as hydrogen too. If the same spectral line is identified in both spectra but at different wavelengths then the redshift can be calculated using the table below. Determining the redshift of an object in this way requires a frequency- or wavelength-range. In order to calculate the redshift one has to know the wavelength of the emitted light in the rest frame of the source, in other words, the wavelength that would be measured by an observer located adjacent to and comoving with the source. Since in astronomical applications this measurement cannot be done directly, because that would require travelling to the distant star of interest, the method using spectral lines described here is used instead. Redshifts cannot be calculated by looking at unidentified features whose rest-frame frequency is unknown, or with a spectrum that is featureless or white noise (random fluctuations in a spectrum).
Redshift (and blueshift) may be characterized by the relative difference between the observed and emitted wavelengths (or frequency) of an object. In astronomy, it is customary to refer to this change using a dimensionless quantity called z. If λ represents wavelength and f represents frequency (note, λf = c where c is the speed of light), then z is defined by the equations:
|Based on wavelength||Based on frequency|
|z = (λobserved − λemitted)/λemitted||z = (femitted − fobserved)/fobserved|
After z is measured, the distinction between redshift and blueshift is simply a matter of whether z is positive or negative. See the formula section below for some basic interpretations that follow when either a redshift or blueshift is observed. For example, Doppler effect blueshifts (z < 0) are associated with objects approaching (moving closer to) the observer with the light shifting to greater energies. Conversely, Doppler effect redshifts (z > 0) are associated with objects receding (moving away) from the observer with the light shifting to lower energies. Likewise, gravitational blueshifts are associated with light emitted from a source residing within a weaker gravitational field as observed from within a stronger gravitational field, while gravitational redshifting implies the opposite conditions.
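A small sketch of the definitions and sign convention just described (the helper names and the example wavelengths are illustrative choices; 656.28 nm is the standard laboratory rest wavelength of the hydrogen-alpha line):

```python
def z_from_wavelength(lambda_observed, lambda_emitted):
    """z = (lambda_obs - lambda_emit) / lambda_emit."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

def z_from_frequency(f_emitted, f_observed):
    """Equivalent definition in frequency: z = (f_emit - f_obs) / f_obs."""
    return (f_emitted - f_observed) / f_observed

# Hydrogen-alpha line: rest wavelength 656.28 nm, observed (say) at 722 nm.
z = z_from_wavelength(722.0, 656.28)
print(f"z = {z:+.4f} ->", "redshift" if z > 0 else "blueshift")
```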
Redshift formulae
In general relativity one can derive several important special-case formulae for redshift in certain special spacetime geometries, as summarized in the following table. In all cases the magnitude of the shift (the value of z) is independent of the wavelength.
|Relativistic Doppler||Minkowski space (flat spacetime)||1 + z = γ(1 + v∥/c), which reduces to 1 + z = √((1 + v/c)/(1 − v/c))|
for motion completely in the radial direction.
|Cosmological redshift||FLRW spacetime (expanding Big Bang universe)||1 + z = anow/athen|
|Gravitational redshift||any stationary spacetime (e.g. the Schwarzschild geometry)||1 + z = √(gtt(receiver)/gtt(source))|
(for the Schwarzschild geometry, 1 + z = 1/√(1 − 2GM/(rc²)) when the receiver is far from the source).
Doppler effect
If a source of the light is moving away from an observer, then redshift (z > 0) occurs; if the source moves towards the observer, then blueshift (z < 0) occurs. This is true for all electromagnetic waves and is explained by the Doppler effect. Consequently, this type of redshift is called the Doppler redshift. If the source moves away from the observer with velocity v, which is much less than the speed of light (v ≪ c), the redshift is given by
- z ≈ v/c (since v ≪ c)
where c is the speed of light. In the classical Doppler effect, the frequency of the source is not modified, but the recessional motion causes the illusion of a lower frequency.
A more complete treatment of the Doppler redshift requires considering relativistic effects associated with motion of sources close to the speed of light. A complete derivation of the effect can be found in the article on the relativistic Doppler effect. In brief, objects moving close to the speed of light will experience deviations from the above formula due to the time dilation of special relativity which can be corrected for by introducing the Lorentz factor γ into the classical Doppler formula as follows:
Since the Lorentz factor is dependent only on the magnitude of the velocity, this causes the redshift associated with the relativistic correction to be independent of the orientation of the source movement. In contrast, the classical part of the formula is dependent on the projection of the movement of the source into the line-of-sight which yields different results for different orientations. If θ is the angle between the direction of relative motion and the direction of emission in the observer's frame (zero angle is directly away from the observer), the full form for the relativistic Doppler effect becomes:
and for motion solely in the line of sight (θ = 0°), this equation reduces to:
For the special case that the light is approaching at right angles (θ = 90°) to the direction of relative motion in the observer's frame, the relativistic redshift is known as the transverse redshift, and a redshift:
is measured, even though the object is not moving away from the observer. Even when the source is moving towards the observer, if there is a transverse component to the motion then there is some speed at which the dilation just cancels the expected blueshift and at higher speed the approaching source will be redshifted.
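A hedged numerical sketch of this subsection, using the standard relativistic Doppler relation 1 + z = γ(1 + β·cos θ) with θ measured in the observer's frame and θ = 0 meaning motion directly away, as described above; the chosen β is arbitrary.

```python
import math

def relativistic_z(beta, theta_deg):
    """1 + z = gamma * (1 + beta*cos(theta)); theta = 0 means moving directly away (observer frame)."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return gamma * (1.0 + beta * math.cos(math.radians(theta_deg))) - 1.0

beta = 0.5                                   # v / c (arbitrary)
print("radial recession (theta=0):   z =", round(relativistic_z(beta, 0.0), 4))    # sqrt((1+b)/(1-b)) - 1
print("transverse motion (theta=90): z =", round(relativistic_z(beta, 90.0), 4))   # gamma - 1, the transverse redshift
print("radial approach (theta=180):  z =", round(relativistic_z(beta, 180.0), 4))  # blueshift, z < 0
print("classical approximation:      z =", beta)                                   # valid only for beta << 1
```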
Expansion of space
In the early part of the twentieth century, Slipher, Hubble and others made the first measurements of the redshifts and blueshifts of galaxies beyond the Milky Way. They initially interpreted these redshifts and blueshifts as due solely to the Doppler effect, but later Hubble discovered a rough correlation between the increasing redshifts and the increasing distance of galaxies. Theorists almost immediately realized that these observations could be explained by a different mechanism for producing redshifts. Hubble's law of the correlation between redshifts and distances is required by models of cosmology derived from general relativity that have a metric expansion of space. As a result, photons propagating through the expanding space are stretched, creating the cosmological redshift.
There is a distinction between a redshift in cosmological context as compared to that witnessed when nearby objects exhibit a local Doppler-effect redshift. Rather than cosmological redshifts being a consequence of relative velocities, the photons instead increase in wavelength and redshift because of a feature of the spacetime through which they are traveling that causes space to expand. Due to the expansion increasing as distances increase, the distance between two remote galaxies can increase at more than 3×10⁸ m/s, but this does not imply that the galaxies move faster than the speed of light at their present location (which is forbidden by Lorentz covariance).
Mathematical derivation
To derive the redshift effect, use the geodesic equation for a light wave, which is
- ds is the spacetime interval
- dt is the time interval
- dΣ is the spatial interval
- c is the speed of light
- a(t) is the time-dependent cosmic scale factor
- k is the curvature per unit area.
For an observer observing the crest of a light wave at a position and time , the crest of the light wave was emitted at a time in the past and a distant position . Integrating over the path in both space and time that the light wave travels yields:
In general, the wavelength of light is not the same for the two positions and times considered due to the changing properties of the metric. When the wave was emitted, it had a wavelength . The next crest of the light wave was emitted at a time
The observer sees the next crest of the observed light wave with a wavelength to arrive at a time
Since the subsequent crest is again emitted from and is observed at , the following equation can be written:
The right-hand side of the two integral equations above are identical which means
For very small variations in time (over the period of one cycle of a light wave) the scale factor is essentially constant, both today and at the earlier time of emission. This yields
which can be rewritten as
Using the definition of redshift provided above, the equation
is obtained. In an expanding universe such as the one we inhabit, the scale factor is monotonically increasing as time passes, thus, z is positive and distant galaxies appear redshifted.
Using a model of the expansion of the universe, redshift can be related to the age of an observed object, the so-called cosmic time–redshift relation. Denote a density ratio as Ω0:
with ρcrit the critical density demarcating a universe that eventually crunches from one that simply expands. This density is about three hydrogen atoms per thousand liters of space. At large redshifts one finds:
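The closed-form time–redshift relation is not reproduced above, so the sketch below is only an illustration under an assumed matter-dominated (Einstein–de Sitter) model, a(t) ∝ t^(2/3), which gives 1 + z = (t0/t)^(2/3) and t = t0/(1 + z)^(3/2); the age t0 is a placeholder and the real concordance model (with dark energy) gives somewhat different numbers.

```python
def redshift_from_scale_factors(a_obs, a_emit):
    """1 + z = a(t_obs) / a(t_emit)."""
    return a_obs / a_emit - 1.0

# Einstein-de Sitter toy model: a(t) proportional to t^(2/3), so 1 + z = (t0 / t)^(2/3)
# and the emission time is t = t0 / (1 + z)^(3/2).
t0 = 13.8e9        # present age of the universe in years (placeholder value)

def emission_time(z):
    return t0 / (1.0 + z) ** 1.5

for z in (0.5, 1.0, 6.0, 1089.0):
    print(f"z = {z:7.1f}: light emitted ~{emission_time(z):.3g} yr after the Big Bang (toy model)")

print("scale-factor check:", redshift_from_scale_factors(1.0, 0.5))   # a halved since emission -> z = 1
```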
Distinguishing between cosmological and local effects
For cosmological redshifts of z < 0.01 additional Doppler redshifts and blueshifts due to the peculiar motions of the galaxies relative to one another cause a wide scatter from the standard Hubble Law. The resulting situation can be illustrated by the Expanding Rubber Sheet Universe, a common cosmological analogy used to describe the expansion of space. If two objects are represented by ball bearings and spacetime by a stretching rubber sheet, the Doppler effect is caused by rolling the balls across the sheet to create peculiar motion. The cosmological redshift occurs when the ball bearings are stuck to the sheet and the sheet is stretched.
The redshifts of galaxies include both a component related to recessional velocity from expansion of the universe, and a component related to peculiar motion (Doppler shift). The redshift due to expansion of the universe depends upon the recessional velocity in a fashion determined by the cosmological model chosen to describe the expansion of the universe, which is very different from how Doppler redshift depends upon local velocity. Describing the cosmological expansion origin of redshift, cosmologist Edward Robert Harrison said, "Light leaves a galaxy, which is stationary in its local region of space, and is eventually received by observers who are stationary in their own local region of space. Between the galaxy and the observer, light travels through vast regions of expanding space. As a result, all wavelengths of the light are stretched by the expansion of space. It is as simple as that." Steven Weinberg clarified, "The increase of wavelength from emission to absorption of light does not depend on the rate of change of a(t) [here a(t) is the Robertson-Walker scale factor] at the times of emission or absorption, but on the increase of a(t) in the whole period from emission to absorption."
Popular literature often uses the expression "Doppler redshift" instead of "cosmological redshift" to describe the redshift of galaxies dominated by the expansion of spacetime, but the cosmological redshift is not found using the relativistic Doppler equation, which is instead governed by special relativity; thus v > c is impossible for a Doppler redshift, while, in contrast, v > c is possible for cosmological redshifts because the space which separates the objects (for example, a quasar from the Earth) can expand faster than the speed of light. More mathematically, the viewpoint that "distant galaxies are receding" and the viewpoint that "the space between galaxies is expanding" are related by changing coordinate systems. Expressing this precisely requires working with the mathematics of the Friedmann-Robertson-Walker metric.
If the universe were contracting instead of expanding, we would see distant galaxies blueshifted by an amount proportional to their distance instead of redshifted.
Gravitational redshift
In the theory of general relativity, there is time dilation within a gravitational well. This is known as the gravitational redshift or Einstein Shift. The theoretical derivation of this effect follows from the Schwarzschild solution of the Einstein equations which yields the following formula for redshift associated with a photon traveling in the gravitational field of an uncharged, nonrotating, spherically symmetric mass:
- G is the gravitational constant,
- M is the mass of the object creating the gravitational field,
- r is the radial coordinate of the source (which is analogous to the classical distance from the center of the object, but is actually a Schwarzschild coordinate), and
- c is the speed of light.
The effect is very small but measurable on Earth using the Mössbauer effect and was first observed in the Pound-Rebka experiment. However, it is significant near a black hole, and as an object approaches the event horizon the redshift becomes infinite. It is also the dominant cause of large angular-scale temperature fluctuations in the cosmic microwave background radiation (see Sachs-Wolfe effect).
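The formula itself is not reproduced above, so the code below is a reconstruction of the standard Schwarzschild result rather than a quotation: for light emitted at radial coordinate r and received far from the mass, 1 + z = 1/√(1 − 2GM/(rc²)), together with the weak-field estimate z ≈ gΔh/c² relevant to the Pound-Rebka tower. The solar and tower numbers are standard reference values.

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m s^-1

def schwarzschild_z(M, r):
    """Redshift of light emitted at radial coordinate r and received far away."""
    return 1.0 / math.sqrt(1.0 - 2.0 * G * M / (r * c**2)) - 1.0

M_sun, R_sun = 1.989e30, 6.957e8
print("surface of the Sun:        z =", f"{schwarzschild_z(M_sun, R_sun):.2e}")            # ~2e-6

r_s = 2.0 * G * (10.0 * M_sun) / c**2          # Schwarzschild radius of a 10-solar-mass black hole
print("2 Schwarzschild radii out: z =", f"{schwarzschild_z(10.0 * M_sun, 2.0 * r_s):.3f}")  # grows toward the horizon

# Weak-field Earth estimate for the Pound-Rebka tower (height ~22.5 m): z ~ g*h/c^2.
print("Pound-Rebka tower:         z =", f"{9.81 * 22.5 / c**2:.2e}")                        # ~2.5e-15
```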
Observations in astronomy
The redshift observed in astronomy can be measured because the emission and absorption spectra for atoms are distinctive and well known, calibrated from spectroscopic experiments in laboratories on Earth. When the redshift of various absorption and emission lines from a single astronomical object is measured, z is found to be remarkably constant. Although distant objects may be slightly blurred and lines broadened, it is by no more than can be explained by thermal or mechanical motion of the source. For these reasons and others, the consensus among astronomers is that the redshifts they observe are due to some combination of the three established forms of Doppler-like redshifts. Alternative hypotheses and explanations for redshift such as tired light are not generally considered plausible.
Spectroscopy, as a measurement, is considerably more difficult than simple photometry, which measures the brightness of astronomical objects through certain filters. When photometric data is all that is available (for example, the Hubble Deep Field and the Hubble Ultra Deep Field), astronomers rely on a technique for measuring photometric redshifts. Due to the broad wavelength ranges in photometric filters and the necessary assumptions about the nature of the spectrum at the light-source, errors for these sorts of measurements can range up to δz = 0.5, and are much less reliable than spectroscopic determinations. However, photometry does at least allow a qualitative characterization of a redshift. For example, if a sun-like spectrum had a redshift of z = 1, it would be brightest in the infrared rather than at the yellow-green color associated with the peak of its blackbody spectrum, and the light intensity will be reduced in the filter by a factor of (1 + z)² = 4. Both the photon count rate and the photon energy are redshifted. (See K correction for more details on the photometric consequences of redshift.)
Local observations
In nearby objects (within our Milky Way galaxy) observed redshifts are almost always related to the line-of-sight velocities associated with the objects being observed. Observations of such redshifts and blueshifts have enabled astronomers to measure velocities and parametrize the masses of the orbiting stars in spectroscopic binaries, a method first employed in 1868 by British astronomer William Huggins. Similarly, small redshifts and blueshifts detected in the spectroscopic measurements of individual stars are one way astronomers have been able to diagnose and measure the presence and characteristics of planetary systems around other stars and have even made very detailed differential measurements of redshifts during planetary transits to determine precise orbital parameters. Finely detailed measurements of redshifts are used in helioseismology to determine the precise movements of the photosphere of the Sun. Redshifts have also been used to make the first measurements of the rotation rates of planets, velocities of interstellar clouds, the rotation of galaxies, and the dynamics of accretion onto neutron stars and black holes which exhibit both Doppler and gravitational redshifts. Additionally, the temperatures of various emitting and absorbing objects can be obtained by measuring Doppler broadening – effectively redshifts and blueshifts over a single emission or absorption line. By measuring the broadening and shifts of the 21-centimeter hydrogen line in different directions, astronomers have been able to measure the recessional velocities of interstellar gas, which in turn reveals the rotation curve of our Milky Way. Similar measurements have been performed on other galaxies, such as Andromeda. As a diagnostic tool, redshift measurements are one of the most important spectroscopic measurements made in astronomy.
Extragalactic observations
The most distant objects exhibit larger redshifts corresponding to the Hubble flow of the universe. The largest observed redshift, corresponding to the greatest distance and furthest back in time, is that of the cosmic microwave background radiation; the numerical value of its redshift is about z = 1089 (z = 0 corresponds to present time), and it shows the state of the Universe about 13.8 billion years ago, and 379,000 years after the initial moments of the Big Bang.
The luminous point-like cores of quasars were the first "high-redshift" (z > 0.1) objects discovered before the improvement of telescopes allowed for the discovery of other high-redshift galaxies.
For galaxies more distant than the Local Group and the nearby Virgo Cluster, but within a thousand megaparsecs or so, the redshift is approximately proportional to the galaxy's distance. This correlation was first observed by Edwin Hubble and has come to be known as Hubble's law. Vesto Slipher was the first to discover galactic redshifts, in about the year 1912, while Hubble correlated Slipher's measurements with distances he measured by other means to formulate his Law. In the widely accepted cosmological model based on general relativity, redshift is mainly a result of the expansion of space: this means that the farther away a galaxy is from us, the more the space has expanded in the time since the light left that galaxy, so the more the light has been stretched, the more redshifted the light is, and so the faster it appears to be moving away from us. Hubble's law follows in part from the Copernican principle. Because it is usually not known how luminous objects are, measuring the redshift is easier than more direct distance measurements, so redshift is sometimes in practice converted to a crude distance measurement using Hubble's law.
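A hedged sketch of that crude conversion: for z ≪ 1, the recession velocity is approximately cz and Hubble's law gives d ≈ cz/H0. The value H0 = 70 km/s/Mpc used here is an assumed round number, and, as the following paragraphs explain, the linear relation fails at larger redshifts.

```python
C_KM_S = 299_792.458        # speed of light in km/s
H0 = 70.0                   # assumed Hubble constant in km/s/Mpc (round placeholder value)

def crude_distance_mpc(z):
    """Low-redshift estimate: recession velocity v ~ c*z, distance d ~ v / H0 (valid only for z << 1)."""
    return C_KM_S * z / H0

for z in (0.01, 0.05, 0.1):
    print(f"z = {z:4.2f}: v = {C_KM_S * z:8.0f} km/s, d = {crude_distance_mpc(z):7.0f} Mpc")
```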
Gravitational interactions of galaxies with each other and clusters cause a significant scatter in the normal plot of the Hubble diagram. The peculiar velocities associated with galaxies superimpose a rough trace of the mass of virialized objects in the universe. This effect leads to such phenomena as nearby galaxies (such as the Andromeda Galaxy) exhibiting blueshifts as we fall towards a common barycenter, and redshift maps of clusters showing a Fingers of God effect due to the scatter of peculiar velocities in a roughly spherical distribution. This added component gives cosmologists a chance to measure the masses of objects independent of the mass to light ratio (the ratio of a galaxy's mass in solar masses to its brightness in solar luminosities), an important tool for measuring dark matter.
The Hubble law's linear relationship between distance and redshift assumes that the rate of expansion of the universe is constant. However, when the universe was much younger, the expansion rate, and thus the Hubble "constant", was larger than it is today. For more distant galaxies, then, whose light has been travelling to us for much longer times, the approximation of constant expansion rate fails, and the Hubble law becomes a non-linear integral relationship and dependent on the history of the expansion rate since the emission of the light from the galaxy in question. Observations of the redshift-distance relationship can be used, then, to determine the expansion history of the universe and thus the matter and energy content.
While it was long believed that the expansion rate has been continuously decreasing since the Big Bang, recent observations of the redshift-distance relationship using Type Ia supernovae have suggested that in comparatively recent times the expansion rate of the universe has begun to accelerate.
Highest redshifts
Currently, the objects with the highest known redshifts are galaxies and the objects producing gamma ray bursts. The most reliable redshifts are from spectroscopic data, and the highest confirmed spectroscopic redshift of a galaxy is that of UDFy-38135539 at a redshift of z = 8.6, corresponding to just 600 million years after the Big Bang. The previous record was held by IOK-1, at a redshift z = 6.96, corresponding to just 750 million years after the Big Bang. Slightly less reliable are Lyman-break redshifts, the highest of which is the lensed galaxy A1689-zD1 at a redshift z ≈ 7.6, and the next highest being z ≈ 7. The most distant observed gamma ray burst was GRB 090423, which had a redshift of z ≈ 8.2. The most distant known quasar, ULAS J1120+0641, is at z = 7.085. The highest known redshift radio galaxy (TN J0924-2201) is at a redshift z = 5.2, and the highest known redshift molecular material is the detection of emission from the CO molecule from the quasar SDSS J1148+5251 at z = 6.42.
Extremely red objects (EROs) are astronomical sources of radiation that radiate energy in the red and near infrared part of the electromagnetic spectrum. These may be starburst galaxies that have a high redshift accompanied by reddening from intervening dust, or they could be highly redshifted elliptical galaxies with an older (and therefore redder) stellar population. Objects that are even redder than EROs are termed hyper extremely red objects (HEROs).
The cosmic microwave background has a redshift of z = 1089, corresponding to an age of approximately 379,000 years after the Big Bang and a comoving distance of more than 46 billion light years. The yet-to-be-observed first light from the oldest Population III stars, not long after atoms first formed and the CMB ceased to be absorbed almost completely, may have redshifts in the range of . Other high-redshift events predicted by physics but not presently observable are the cosmic neutrino background from about two seconds after the Big Bang (and a redshift in excess of ) and the cosmic gravitational wave background emitted directly from inflation at a redshift in excess of .
Redshift surveys
With the advent of automated telescopes and improvements in spectroscopes, a number of collaborations have been formed to map the universe in redshift space. By combining redshift with angular position data, a redshift survey maps the 3D distribution of matter within a field of the sky. These observations are used to measure properties of the large-scale structure of the universe. The Great Wall, a vast supercluster of galaxies over 500 million light-years wide, provides a dramatic example of a large-scale structure that redshift surveys can detect.
The first redshift survey was the CfA Redshift Survey, started in 1977 with the initial data collection completed in 1982. More recently, the 2dF Galaxy Redshift Survey determined the large-scale structure of one section of the Universe, measuring z-values for over 220,000 galaxies; data collection was completed in 2002, and the final data set was released 30 June 2003. (In addition to mapping large-scale patterns of galaxies, 2dF established an upper limit on neutrino mass.) Another notable investigation, the Sloan Digital Sky Survey (SDSS), is ongoing as of 2005 and aims to obtain measurements on around 100 million objects. SDSS has recorded redshifts for galaxies as high as 0.4, and has been involved in the detection of quasars beyond z = 6. The DEEP2 Redshift Survey uses the Keck telescopes with the new "DEIMOS" spectrograph; a follow-up to the pilot program DEEP1, DEEP2 is designed to measure faint galaxies with redshifts 0.7 and above, and it is therefore planned to provide a complement to SDSS and 2dF.
Effects due to physical optics or radiative transfer
The interactions and phenomena summarized in the subjects of radiative transfer and physical optics can result in shifts in the wavelength and frequency of electromagnetic radiation. In such cases the shifts correspond to a physical energy transfer to matter or other photons rather than being due to a transformation between reference frames. These shifts can be due to such physical phenomena as coherence effects or the scattering of electromagnetic radiation whether from charged elementary particles, from particulates, or from fluctuations of the index of refraction in a dielectric medium as occurs in the radio phenomenon of radio whistlers. While such phenomena are sometimes referred to as "redshifts" and "blueshifts", in astrophysics light-matter interactions that result in energy shifts in the radiation field are generally referred to as "reddening" rather than "redshifting" which, as a term, is normally reserved for the effects discussed above.
In many circumstances scattering causes radiation to redden because entropy results in the predominance of many low-energy photons over few high-energy ones (while conserving total energy). Except possibly under carefully controlled conditions, scattering does not produce the same relative change in wavelength across the whole spectrum; that is, any calculated z is generally a function of wavelength. Furthermore, scattering from random media generally occurs at many angles, and z is a function of the scattering angle. If multiple scattering occurs, or the scattering particles have relative motion, then there is generally distortion of spectral lines as well.
In interstellar astronomy, visible spectra can appear redder due to scattering processes in a phenomenon referred to as interstellar reddening – similarly Rayleigh scattering causes the atmospheric reddening of the Sun seen in the sunrise or sunset and causes the rest of the sky to have a blue color. This phenomenon is distinct from redshifting because the spectroscopic lines are not shifted to other wavelengths in reddened objects and there is an additional dimming and distortion associated with the phenomenon due to photons being scattered in and out of the line-of-sight.
For a list of scattering processes, see Scattering.
- See Feynman, Leighton and Sands (1989) or any introductory undergraduate (and many high school) physics textbooks. See Taylor (1992) for a relativistic discussion.
- See Binney and Merrifeld (1998), Carroll and Ostlie (1996), Kutner (2003) for applications in astronomy.
- See Misner, Thorne and Wheeler (1973) and Weinberg (1971) or any of the physical cosmology textbooks
- Doppler, Christian (1846). "Beiträge zur fixsternenkunde". Prag (Prag, Druck von G. Haase sohne) 69. Bibcode:1846QB815.D69......
- Maulik, Dev (2005). "Doppler Sonography: A Brief History". In Maulik, Dev; Zalud, Ivica. Doppler Ultrasound in Obstetrics And Gynecology. ISBN 978-3-540-23088-5.
- O'Connor, John J.; Robertson, Edmund F. (1998). "Christian Andreas Doppler". MacTutor History of Mathematics archive. University of St Andrews.
- Huggins, William (1868). "Further Observations on the Spectra of Some of the Stars and Nebulae, with an Attempt to Determine Therefrom Whether These Bodies are Moving towards or from the Earth, Also Observations on the Spectra of the Sun and of Comet II". Philosophical Transactions of the Royal Society of London 158: 529–564. Bibcode:1868RSPT..158..529H. doi:10.1098/rstl.1868.0022.
- Reber, G. (1995). "Intergalactic Plasma". Astrophysics and Space Science 227 (1–2): 93–96. Bibcode:1995Ap&SS.227...93R. doi:10.1007/BF00678069.
- Pannekoek, A (1961). A History of Astronomy. Dover. p. 451. ISBN 0-486-65994-1.
- Bélopolsky, A. (1901). "On an Apparatus for the Laboratory Demonstration of the Doppler-Fizeau Principle". Astrophysical Journal 13: 15. Bibcode:1901ApJ....13...15B. doi:10.1086/140786.
- Adams, Walter S. (1908). "Preliminary catalogue of lines affected in sun-spots". Contributions from the Mount Wilson Observatory / Carnegie Institution of Washington (Contributions from the Solar Observatory of the Carnegie Institution of Washington: Carnegie Institution of Washington) 22: 1–21. Bibcode:1908CMWCI..22....1A. Reprinted in Adams, Walter S. (1908). "Preliminary Catalogue of Lines Affected in Sun-Spots Region λ 4000 TO λ 4500". Astrophysical Journal 27: 45. Bibcode:1908ApJ....27...45A. doi:10.1086/141524.
- de Sitter, W. (1934). "On distance, magnitude, and related quantities in an expanding universe". Bulletin of the Astronomical Institutes of the Netherlands 7: 205. Bibcode:1934BAN.....7..205D. "It thus becomes urgent to investigate the effect of the redshift and of the metric of the universe on the apparent magnitude and observed numbers of nebulae of given magnitude"
- Slipher, Vesto (1912). "The radial velocity of the Andromeda Nebula". Lowell Observatory Bulletin 1: 2.56–2.57. Bibcode:1913LowOB...1b..56S. "The magnitude of this velocity, which is the greatest hitherto observed, raises the question whether the velocity-like displacement might not be due to some other cause, but I believe we have at present no other interpretation for it"
- Slipher, Vesto (1915). "Spectrographic Observations of Nebulae". Popular Astronomy 23: 21–24. Bibcode:1915PA.....23...21S.
- Slipher, Vesto (1915). "Spectrographic Observations of Nebulae". Popular Astronomy 23: 22. Bibcode:1915PA.....23...21S.
- Hubble, Edwin (1929). "A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae". Proceedings of the National Academy of Sciences of the United States of America 15 (3): 168–173. Bibcode:1929PNAS...15..168H. doi:10.1073/pnas.15.3.168. PMC 522427. PMID 16577160.
- Friedman, A. A. (1922). "Über die Krümmung des Raumes". Zeitschrift fur Physik 10 (1): 377–386. Bibcode:1922ZPhy...10..377F. doi:10.1007/BF01332580. English translation in Friedman, A. (1999). General Relativity and Gravitation 31 (12): 1991–2000. Bibcode:1999GReGr..31.1991F. doi:10.1023/A:1026751225741.)
- This was recognized early on by physicists and astronomers working in cosmology in the 1930s. The earliest layman publication describing the details of this correspondence is Eddington, Arthur (1933). The Expanding Universe: Astronomy's 'Great Debate', 1900–1931. Cambridge University Press. (Reprint: ISBN 978-0-521-34976-5)
- "Hubble census finds galaxies at redshifts 9 to 12". ESA/Hubble Press Release. Retrieved 13 December 2012.
- See, for example, this 25 May 2004 press release from NASA's Swift space telescope that is researching gamma-ray bursts: "Measurements of the gamma-ray spectra obtained during the main outburst of the GRB have found little value as redshift indicators, due to the lack of well-defined features. However, optical observations of GRB afterglows have produced spectra with identifiable lines, leading to precise redshift measurements."
- See for a tutorial on how to define and interpret large redshift measurements.
- Where z = redshift; v|| = velocity parallel to line-of-sight (positive if moving away from receiver); c = speed of light; γ = Lorentz factor; a = scale factor; G = gravitational constant; M = object mass; r = radial Schwarzschild coordinate, gtt = t,t component of the metric tensor
- H. Ives and G. Stilwell, An Experimental study of the rate of a moving atomic clock, J. Opt. Soc. Am. 28, 215–226 (1938)
- Freund, Jurgen (2008). Special Relativity for Beginners. World Scientific. p. 120. ISBN 981-277-160-3.
- Ditchburn, R (1961). Light. Dover. p. 329. ISBN 0-12-218101-8.
- See "Photons, Relativity, Doppler shift" at the University of Queensland
- The distinction is made clear in Harrison, Edward Robert (2000). Cosmology: The Science of the Universe (2 ed.). Cambridge University Press. pp. 306ff. ISBN 0-521-66148-X.
- Steven Weinberg (1993). The First Three Minutes: A Modern View of the Origin of the Universe (2 ed.). Basic Books. p. 34. ISBN 0-465-02437-8.
- Lars Bergström, Ariel Goobar (2006). Cosmology and Particle Astrophysics (2 ed.). Springer. p. 77, Eq.4.79. ISBN 3-540-32924-2.
- M.S. Longair (1998). Galaxy Formation. Springer. p. 161. ISBN 3-540-63785-0.
- Yu N Parijskij (2001). "The High Redshift Radio Universe". In Norma Sanchez. Current Topics in Astrofundamental Physics. Springer. p. 223. ISBN 0-7923-6856-8.
- Measurements of the peculiar velocities out to 5 Mpc using the Hubble Space Telescope were reported in 2003 by Karachentsev et al. Local galaxy flows within 5 Mpc. 02/2003 Astronomy and Astrophysics, 398, 479-491.
- Theo Koupelis, Karl F. Kuhn (2007). In Quest of the Universe (5 ed.). Jones & Bartlett Publishers. p. 557. ISBN 0-7637-4387-9.
- "It is perfectly valid to interpret the equations of relativity in terms of an expanding space. The mistake is to push analogies too far and imbue space with physical properties that are not consistent with the equations of relativity." Geraint F. Lewis et al. (2008). "Cosmological Radar Ranging in an Expanding Universe". Monthly Notices of the Royal Astronomical Society 388 (3): 960–964. arXiv:0805.2197. Bibcode:2008MNRAS.388..960L. doi:10.1111/j.1365-2966.2008.13477.x.
- Michal Chodorowski (2007). "Is space really expanding? A counterexample". Concepts Phys 4: 17–34. arXiv:astro-ph/0601171. Bibcode:2007ONCP....4...15C. doi:10.2478/v10005-007-0002-2.
- Bedran, M. L. (2002). "A comparison between the Doppler and cosmological redshifts". Am. J. Phys. 70, 406–408. http://www.df.uba.ar/users/sgil/physics_paper_doc/papers_phys/cosmo/doppler_redshift.pdf
- Edward Harrison (1992). "The redshift-distance and velocity-distance laws". Astrophysical Journal, Part 1 403: 28–31. Bibcode:1993ApJ...403...28H. doi:10.1086/172179.
- Harrison 2000, p. 315.
- Steven Weinberg (2008). Cosmology. Oxford University Press. p. 11. ISBN 978-0-19-852682-7.
- Odenwald & Fienberg 1993
- Speed faster than light is allowed because the expansion of the spacetime metric is described by general relativity in terms of sequences of only locally valid inertial frames as opposed to a global Minkowski metric. Expansion faster than light is an integrated effect over many local inertial frames and is allowed because no single inertial frame is involved. The speed-of-light limitation applies only locally. See Michal Chodorowski (2007). "Is space really expanding? A counterexample". Concepts Phys 4: 17–34. arXiv:astro-ph/0601171. Bibcode:2007ONCP....4...15C. doi:10.2478/v10005-007-0002-2.
- M. Weiss, What Causes the Hubble Redshift?, entry in the Physics FAQ (1994), available via John Baez's website
- This is only true in a universe where there are no peculiar velocities. Otherwise, redshifts combine as 1 + z = (1 + zDoppler)(1 + zexpansion).
“WHAT IS THEORY OF RELATIVITY?” (Concluding part):
“The theory of Relativity resembles a building, consisting of two separate stories, the Special Theory and the General Theory. The Special Theory, on which the General Theory rests, applies to all physical phenomena with the exception of Gravitation; The General Theory provides the law of Gravitation and its relation to the other forces of Nature”
- Albert Einstein thus summarises the principle behind the General Theory of Relativity, which forms the basis of this article.
First, let us see how the General Theory is an extension of the Special Theory, and consider some of its applications.
The Special Theory deals with frames of reference moving with uniform velocity. Bodies moving under the sole influence of a gravitational field acquire an acceleration that does not depend upon the material or the physical state of the body. Einstein realised that this property of the gravitational field implied an equivalence between gravitation and acceleration, which became the basis of his General Theory of Relativity.
Two time-measuring devices, one placed at the centre and one at the edge of a rotating disc, will show a remarkable result. With respect to the rotating disc K′, both devices are at rest; but with respect to the Galilean frame K, the device at the edge moves with the disc, and this motion produces a TIME DILATION at the edge relative to time measured at the centre. As observed from K, the clock at the periphery runs permanently at a slower rate than the clock at the centre. Hence, by the principle of equivalence, in a gravitational field a timing device will run at different rates depending on where it is situated. Standard measuring rods placed tangentially around the circumference C of the disc will all be contracted in length due to relativistic length contraction, whereas rods laid across the diameter D, being perpendicular to the motion, show no such contraction. Hence the measured value of ‘Pi’, i.e. C/D, will be greater (because the measured circumference C increases while the diameter D remains the same). This is an extension of the concept of length contraction in the special theory to curved space in the general theory.
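As a rough numerical illustration of the rotating-disc argument (the disc radius and rotation rate below are arbitrary assumptions chosen only for illustration), the following short Python calculation gives the rim speed, the rate of the rim clock relative to the centre clock, and the circumference-to-diameter ratio the rotating observer would measure:

```python
import math

c = 299_792_458.0   # speed of light, m/s
r = 1.0e6           # assumed disc radius in metres (illustrative only)
omega = 100.0       # assumed angular speed in rad/s (illustrative only)

v = omega * r                            # rim speed as seen from the Galilean frame K
beta = v / c
gamma = 1.0 / math.sqrt(1.0 - beta**2)   # Lorentz factor at the rim

rim_clock_rate = 1.0 / gamma             # rim clock runs slow by this factor

# Tangential rods contract by 1/gamma, so gamma times more rods fit around the
# rim: the ratio C/D measured on the disc exceeds pi.
measured_ratio = math.pi * gamma

print(f"rim speed          : {v:.3e} m/s (beta = {beta:.4f})")
print(f"rim clock rate     : {rim_clock_rate:.6f} of the centre clock rate")
print(f"measured C/D ratio : {measured_ratio:.6f} (flat-space value {math.pi:.6f})")
```

A faster rotation drives the measured ratio further above 3.14159..., which is the sense in which the geometry on the disc is non-Euclidean.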
Normally the value of Pi, that is circumference/diameter, is 3.14159..., but in the case of the circular motion described above the measured ratio is larger. Euclidean geometry therefore does not hold good in an accelerating frame and hence, by the principle of equivalence, within a gravitational field. Spaces in which the propositions of Euclid are not valid are sometimes called CURVED SPACES. For example, the sum of the internal angles of a triangle drawn on a flat sheet of paper is 180 degrees, but a triangle drawn on the curved surface of a sphere does not follow this Euclidean rule. Einstein realised that rays of light would be perceived as curving in an accelerated frame, and so concluded that rays of light are propagated curvilinearly in gravitational fields. Experimental proof was obtained from photographs of stars taken during the solar eclipse of 29 May 1919, which confirmed the deflection of starlight by the Sun’s mass.
In General Relativity, material bodies follow lines of shortest distance called GEODESICS. The motions of material bodies are therefore determined by the curvature of space in the region through which they pass; this is the essence of the General Theory of Relativity. The curvature of space and the bending of light have been confirmed in all relevant astronomical observations made so far.
Application to superstring theory: The proposal of the curvature of Space and Time has both scientific and philosophical significance. When any object or observable event passes from the non-observable Universe into the observable Universe, it enters the field of Space and Time, which is curved. This observation of Einstein’s is a forerunner of superstring theory, which leads to spin-2 particles known as GRAVITONS. Superstring theory also contains a quantum theory of the gravitational interactions, which is the domain of the General Theory.
The important difference between the special and general theories, as explained above, is that the special theory deals with movements of bodies with uniform velocity, without acceleration, whereas the general theory deals with bodies moving with acceleration, especially under the influence of the acceleration due to gravity. Let us see how this gives rise to the new concepts of inertial and gravitational mass, which have great philosophical significance too, explaining the concept of LIFE itself. Let us imagine an elevator so far removed from stars and other large masses that there is no appreciable gravitational field. The observer inside will feel weightless. If a rope attached to the top of the elevator were pulled so as to give it a constant acceleration of 9.81 m/s², the observer would detect this acceleration as a reaction force on the floor. The experiences of this observer are equivalent to the experiences of an observer in an elevator at rest in the Earth’s gravitational field of strength 9.81 N/kg. The force reaction at the feet of the observer in the accelerating frame is due to the observer’s INERTIAL MASS (the mass that represents the reluctance of the observer’s body to accelerate under the influence of a force), which turns out to be exactly equal to the gravitational mass.
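A minimal numerical sketch of this equivalence, taking an illustrative observer mass of 70 kg (an assumption, not a figure from the article):

```python
m = 70.0    # assumed observer mass, kg (illustrative only)
a = 9.81    # elevator acceleration, m/s^2
g = 9.81    # Earth's gravitational field strength, N/kg

force_in_accelerating_elevator = m * a   # Newton's second law: F = m * a
weight_on_earth = m * g                  # W = m * g

print(force_in_accelerating_elevator)    # 686.7 N
print(weight_on_earth)                   # 686.7 N -- locally indistinguishable
```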
The equivalence of inertial mass and gravitational mass has significant implications for the nature of space and time under the influence of a gravitational field. This, in fact, relates to circular motion and is therefore connected with centrifugal forces. Suppose an observer A takes measurements, from a Galilean (inertial) frame K, of a non-Galilean (accelerated) frame K′, which is a rotating disc inhabited by an observer B. A notes that B is in circular motion and undergoes an acceleration. This acceleration is produced by a force, which may be interpreted as an effect of B’s inertial mass. But, by the same reasoning as in the elevator example above, B may equally contend that he is at rest under the influence of a radially directed gravitational field, which once again demonstrates the equivalence of the two masses.
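As a small illustration with assumed numbers: the acceleration measured by A at radius r on a disc turning with angular speed ω is ω²r, and B may equally read it as a radially directed gravitational field of the same strength.

```python
omega = 0.5   # assumed angular speed of the disc, rad/s (illustrative only)
r = 39.24     # assumed radius, metres (illustrative only)

a = omega**2 * r   # acceleration of B as measured by A, m/s^2
print(a)           # 9.81 -- B may interpret this as a radial
                   # gravitational field of 9.81 N/kg
```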
Let us see how Gravitation is connected to LIFE.
The idea that INERT MASS is equal and opposite to GRAVITATIONAL MASS is in itself an explanation of life on Earth. The inert mass is the LIFE ENERGY and the gravitational mass is the set of earthly challenges it faces. As long as they are equal, life manages to exist. But, as explained in the article “What is life?”, the ENTROPY, i.e. disorder, goes on increasing and is compensated by NEGATIVE ENTROPY supplied by medicine and the like. Once entropy overpowers the body, i.e. the inert mass, the body is unable to fight gravitation on the negative side, and this process is called death.
Thus, a beautiful correlation is drawn between LIFE and GRAVITATION. Life contains all the forces of Nature in the body; on the one hand they are used for life, and on the other they are used to conquer Nature itself, just as Maxwell’s electromagnetic forces act against the original e.m.f. which created them. Life contains several kinds of matter, such as carbon, iron, sulphur and iodine, and forms of energy such as heat, electricity, magnetism and sound. But the most fundamental force life uses is Gravitation.
All the above forces are building blocks of a living being. The same forces help in opposing and conquering the Physical world.
Hence, life is essentially a balancing act with Gravitation. Thoughts inscribed in the brain cells come to the fore one after another, in series, according to the energy supplied to them. The most important of these is the feeling of “I”. This arises when the embryo starts to move inside the womb, i.e. when it first FEELS gravitation. From that moment onward the struggle for subsistence begins (against the force of gravitation), as the organism survives the challenges put forward by its surroundings.
In Part I, we saw how the Special Theory of Relativity explained Light as the universal energy connecting matter and energy. On the philosophical side, we saw that there is another Light inside the body, known as the ‘Inner Light’, which is the guiding force and a replica of the Master Intelligent Plan that lies beyond Space, Time and the conservation laws. Intelligence is the capacity to throw light in the fields of Space, Time and the conservation laws, and beyond.
In this second part, the special theory has been extended to the General Theory. The basis of the argument was that inertial mass is equal to gravitational mass. In relative motion at nearly the speed of light, acceleration gives rise to an increase in mass and to time dilation, which results in the curvature of space; objects move in a four-dimensional curved space. On the philosophical side, it was argued that life is a force that negates gravitation, and that this force too is nothing but gravitation, as shown by the equality of inertial and gravitational mass. A reference has also been made to the curvature of Space and Time, relating it to superstring theory.
Advantages of this article:
Various aspects of relativity, including the main technical terms, are explained in a very simple manner. The contents of these two articles are scarcely available in school or college textbooks. Any student of science who goes through them should be able to grasp the rudiments thoroughly and follow the more difficult aspects with ease. Philosophers not accustomed to technical terms can also understand them and apply them to relativistic philosophy.
Wishing the readers all success.
Dr B. Sathyanarayanan (65) is an experienced administrator, teacher and writer. He holds an M.Sc. (Physics) from Annamalai University, with spectroscopy as his special subject. He worked as a physics lecturer for two years (1969–1971). Later, due to family circumstances, he had to take up a more lucrative bank job and continued his physics and philosophy research privately. At the age of 50, he took voluntary retirement from banking service with pension benefits, giving him more time for social, educational and research activities. He returned to the teaching profession and taught spoken English and grammar to several adult-education students. In 2005 he took up physics teaching once again and has been teaching continuously for the past eight years as a regular professor of physics.
He received his PhD in psychological counselling in 2000 and counsels on HIV/AIDS matters, having conducted several intervention programmes. He is a well-known writer in English, in both fiction and articles. His writing has been recognised internationally by a listing in the Directory of World Philosophers, Bowling Green State University, U.S.A.
All along his life so far, he has remained a scientific philosopher in thought and deed. He considers Albert Einstein his role model in Science and J. Krishnamurti his role model in Philosophy. His first book, ‘The Simple Truth’, a comparative study of Religion and Science, was published in 1987. He published the annual magazine ‘Philosophy of Science’ for five years and founded the Holistic Philosophy Society for the study of Physics and Philosophy. His latest book, ‘Glimpses of Holistic Philosophy’, has been widely acclaimed. He recently conducted a two-day seminar on Religion, Science and Social Services in Chennai, India, which was attended by senior professors of Physics and Philosophy. Important topics covered in this interdisciplinary meeting will be sent soon under a different title.
69 | Our current belief in the remote gravitational tug from distant planets even gives rise to periodic predictions of unusual increases in tidal effects on Earth. These predictions arise from an expected increase in the “gravitational pull” upon the Earth when a number of planets are due to align with the Earth in their orbits about the Sun; yet, the predicted effects never seem to materialize. The reason nothing happens, of course, is because there is no gravitational force emanating from these planets to affect the Earth, and it is unlikely that the Earth has additional internal wobbles that would cause changes in our tidal forces to coincide with such arbitrary planetary alignments. Yet, there are still other observations that are commonly attributed to “gravitational tidal forces.” What are we to make of these claims now that numerous flaws have been pointed out in today’s gravitational theory, and such remote forces reaching across space do not even exist in Expansion Theory? Let’s now take a closer look at a widely reported example of such an apparent tidal-force effect in our solar system.
One of the most well known and widely reported examples of apparent tidal forces involves the comet Shoemaker-Levy 9, which plummeted into Jupiter in 1994. The comet was actually composed of a number of separate pieces as it headed toward Jupiter, making a number of spectacular impacts when it struck the planet.
It is widely believed that the original comet must have been initially torn apart by Jupiter’s tremendous gravity on an earlier close approach to the planet. This is considered to be an example of a gravitational tidal force at work since Jupiter’s gravitational force would theoretically pull stronger on the near side of the comet and weaker on the far side, thus pulling it apart. Yet, Expansion Theory states that there is no such thing as a gravitational force emanating from a planet to pull on distant orbiting objects. The comet would simply have coasted past Jupiter several years earlier at a rapid enough speed to overcome Jupiter’s expansion, swinging past the planet due to the pure geometry of the situation but experiencing no “gravitational forces.” And a closer look at this event shows that the gravitational explanation has a fatal flaw – again, in addition to the lack of scientific viability of such a force:
Jupiter’s Gravity Did Not Pull Shoemaker-Levy 9 Apart
It is commonly believed that the gravitational field of Jupiter pulled the comet Shoemaker-Levy 9 apart as it swung by on an earlier close approach; however, there is a clear flaw in this belief. To see this, consider the space shuttle, which circles the Earth roughly every 90 minutes. If the shuttle were truly constrained in orbit by a gravitational force, like a rock swung on a string, it might seem that there should be sizable stresses across the shuttle as it is so rapidly flung around the planet and continually forced into a circular orbit. Certainly an object swung rapidly on a string would experience such stresses, yet there is no sign of such a powerful force pulling on the shuttle. This is currently explained by the belief that gravity would permeate the shuttle, pulling on every atom so that the near and far sides of the shuttle would both experience nearly the same pull, with only a slightly weaker pull on the side farther from the planet. Therefore, unlike a rock that undergoes great stress as it is pulled by an externally attached string, all of the atoms composing the shuttle are presumably immersed in the attracting gravitational field, resulting in only a slight differential strain across the shuttle.
If this explanation were true, then this small differential strain across the shuttle would be very tiny indeed. No signs of such a strain on the shuttle and its contents have ever been measured or noted – even after presumably acting for a week or more during a typical shuttle mission. Even free-floating objects show no sign of being even slightly disturbed by any such internal stresses pulling across the shuttle due to this slight differential pull of gravity. Therefore, it would be quite reasonable, if not generous, to say that if such a tiny differential force was actually pulling across the shuttle, it would be no greater than perhaps the force felt by the weight of a feather on Earth. Although the lack of evidence of any such force can be seen as a clear sign that the shuttle is actually on a natural force-free orbital trajectory as explained by Expansion Theory, let’s see what happens when we apply this gravitational analysis to the scenario of comet Shoemaker-Levy 9.
When the comet was first discovered in 1993 it was already fragmented. Attempts were made to determine how the comet broke apart by re-examining past observations. Although the evidence is sketchy, it is still commonly reported that the comet was pulled apart by Jupiter’s gravity during an earlier approach at a distance of roughly 1.3 planetary radii from Jupiter’s center. That is, the distance of the comet above the surface of Jupiter as it flew past was roughly equivalent to one-third of the planet’s radius. A standard calculation of the reduction in gravitational strength with distance – according to Newton’s theory – shows that, at that distance, the comet would have experienced a gravitational pull that was 40% weaker than at Jupiter’s surface. To put this in perspective, this represents a force on the comet that is only 50% stronger than the gravitational force that is theoretically constraining the space shuttle as it orbits the Earth (remember, no such force has ever actually been felt or measured).
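The inverse-square arithmetic appealed to here is easy to check. The sketch below, using commonly quoted values for Jupiter's mass and equatorial radius (assumptions, not figures from the text), reproduces the roughly 40% reduction at 1.3 planetary radii:

```python
G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M_jupiter = 1.898e27  # kg (commonly quoted value, assumed here)
R_jupiter = 7.149e7   # m, equatorial radius (commonly quoted value, assumed here)

g_surface = G * M_jupiter / R_jupiter**2        # about 24.8 m/s^2
g_flyby = G * M_jupiter / (1.3 * R_jupiter)**2  # pull at 1.3 planetary radii

reduction = 1.0 - g_flyby / g_surface           # about 0.41, i.e. ~40% weaker
print(g_surface, g_flyby, reduction)
```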
Now, since we know that the net stresses across the shuttle in near-Earth orbit are imperceptible even when supposedly acting continually for days, it is difficult to justify that a stress only 50% greater across the comet Shoemaker-Levy 9 during a brief flyby would have torn it apart. The situation does not change even if we consider there would have been a greater gravitational difference across the 2-km comet than if it were the size of the much smaller space shuttle. Each shuttle-sized segment of the comet’s diameter would still have experienced a pulling force across it of no more than the weight of a feather, as mentioned earlier. Even with a hundred such segments across the comet this total force of no more than the weight of a handful of feathers across a 2-km comet is many thousands, if not millions of times too weak to tear it apart.
So, we are left with the mystery that Newton’s gravitational force, even if it did exist, could not possibly have been responsible for the breakup of the comet Shoemaker-Levy 9. This widely held belief demonstrates the powerful confirmation bias fallacy that exists in our science, presenting such clearly false evidence as solid support for today’s gravitational theories.
In stark contrast, there are no forces at all upon the comet according to Expansion Theory. However, this is not a complete mystery, as there are numerous additional explanations. Jupiter is known to have an immense magnetic field, which could have played a role in the comet’s breakup. Alternatively, the comet could have collided with other space debris orbiting about Jupiter. Also, the comet would have undergone sizable alternate heating and cooling as it approached then receded from the Sun during its travels, perhaps experiencing sizable blasts of plasma from sunspot activity as well. The comet could even have had a pre-existing fragmentation that was impossible to clearly resolve in earlier photos containing it as a faint blur by chance prior to its official discovery. Regardless, in the list of possible causes, it is clear that being torn apart by a “gravitational tidal force” could not be among them.
These discussions of tidal effects show that there is no clear evidence for the existence of “gravitational tidal forces” acting at a distance between orbiting bodies. In particular, the example of comet Shoemaker-Levy 9 shows how easily such verifiably false explanations of observations can nevertheless become widely accepted in our science, eventually becoming unquestioned fact. Many of the ideas we have inherited as a scientific legacy from centuries past have become so firmly ingrained in our thinking and belief system that they are often unquestioned in situations where they clearly cannot possibly apply. Due to this process it is now readily accepted that an endless gravitational force reaches out into space, tearing comets apart and inducing ocean tides and volcanic activity on orbiting moons and planets. However, Expansion Theory allows us to take a second look at our inherited beliefs, and in the process, to see the clear physical causes at work that have been masked by such largely unquestioned beliefs as Newton’s gravitational force or Einstein’s warped space-time abstraction.
One of the most compelling phenomena used in our space programs is that of the so-called “gravity-assist” maneuver, also often called the Slingshot Effect. This is a maneuver where a spacecraft catches up to an orbiting planet from behind, swings by the planet in a partial orbit, and then is flung away on a new trajectory at a faster speed. This is currently believed to be the result of the planet’s gravity accelerating the spacecraft toward it, towing the spacecraft along briefly while swinging it around, then releasing it off into space again at an increased overall speed. This is a very real effect that many space missions rely upon to give fuel-free speed boosts to spacecraft that are sent across the solar system. Let’s now take a closer look at this effect.
As with falling and orbiting objects, there is no question that the observed effect of the “gravity-assist” maneuver does occur; the question, though, is whether the current explanation in our science is at least logically sound – and further, whether it is scientifically viable and consistent with other celestial observations. The discussions so far have repeatedly shown that the concept of a gravitational force at work behind many of our observations violates the laws of physics, while presenting alternate, scientifically viable explanations for these observations according to Expansion Theory. This means that a “gravitational force” explanation for “gravity-assist” maneuvers, if actually true, would now stand alone as quite a mystery, based on a proposed gravitational force that has been otherwise shown to be scientifically unexplained if not even verifiably false.
Therefore, even prior to deeper investigation, it can already be said that the current gravitational explanation for this effect is not scientifically viable, nor would it even be consistent with other observations such as falling objects, orbits and tidal forces – for which the gravitational-force explanation is highly questionable. The only remaining question is whether today’s explanation for “gravity-assists” is at least feasible in principle, regardless of the additional problems that arise with the “gravitational force” explanation. The analysis to follow shows that even the logic within the current explanation in our science does not stand up to scrutiny.
Flaw in Gravity-Assist Logic
The basic idea of being pulled-in then flung off into space at a faster speed by gravity is a fundamentally flawed concept, since Newton’s gravitational force is considered to be a purely attracting force. In order for the spacecraft to be flung off into space at an increased speed, the planet’s gravity would have to “let go” of the spacecraft somehow, after pulling it in. Otherwise, the situation would be somewhat as if an elastic band were stretched between the planet and the spacecraft. The elastic band would pull the spacecraft in, accelerating it toward the planet, but then would decelerate the spacecraft again as it attempted to speed away. In somewhat similar fashion, the same gravitational force that supposedly accelerates a spacecraft throughout its approach to a planet would also continually decelerate it as it traveled away, returning the spacecraft to its original approach speed as it leaves.
Yet, since spacecraft are clearly observed to depart with greater speed than on approach when this maneuver is performed in practice, logical justifications have been arrived at in an attempt to explain this effect from the only practical viewpoint available today – Newton’s gravitational theory. The typical explanation in today’s science often does acknowledge the “gravitational elastic band” problem just mentioned, but claims that there is an additional effect in practice when moving planets are involved – an effect where the spacecraft is said to “steal momentum” from the orbiting planet.
This concept begins with the idea that as a spacecraft catches up to and is pulled toward a planet that is orbiting the Sun, the spacecraft would also pull the planet backward slightly. This would slow the planet in its orbit while the spacecraft gets a large speed boost forward due to its far smaller mass, essentially transferring momentum from the orbiting planet to the passing spaceship. Then, although it is acknowledged that the planet’s gravity would pull back on the spacecraft as it leaves, slowing it back to the same relative speed it had with the planet before the maneuver, the spacecraft still leaves with a net increase in speed. This is said to occur because the planet is now traveling slightly slower in its orbit about the Sun after being pulled backward, with this lost momentum now transferred to the spacecraft, speeding up the much lighter spacecraft by far more than the massive planet was slowed. Essentially, this explanation says that the spacecraft reaches ahead via gravity and pulls on the planet to speed ahead while slightly slowing the planet in exchange, thus permanently stealing momentum from the massive planet to give the tiny spacecraft a sizable lasting speed boost.
Although this explanation may seem feasible on first read, a closer examination shows that it suffers from the same fatal flaw mentioned earlier, where the gravity of a stationary planet would pull back on the departing spacecraft, canceling any speed increase that may have occurred on approach. The “momentum stealing” explanation simply creates the illusion that the situation is different when the planet is moving in its orbit. Let’s now take a good look at this illusion.
First, taking the simpler scenario of a stationary planet approached by the spaceship, clearly a “gravitational elastic band” accelerating the spacecraft toward the planet would also equally decelerate it as it leaves, giving no net speed increase. This is what Newtonian gravitational theory would predict. The more complex scenario is that of a moving planet approached from behind by the spacecraft. Here, however, it is claimed there is something fundamentally different simply because the planet is moving. It is claimed that the planet is pulled backward and permanently slowed in its orbit, giving a lasting “momentum transfer” and speed boost to the spacecraft that pulled itself ahead. This is where the illusion is created from flawed logic.
In actuality, there could be nothing fundamentally different with a moving planet – there would still be no net speed changes. To see this, we simply need to imagine ourselves coasting along with the moving planet, in which case the planet is no longer moving relative to us, and it is easier to see that the situation is essentially the same as with the stationary planet. Recall that it is now widely recognized that all motion is purely relative – there is no absolute reference anywhere – so there can be no fundamental difference between a stationary planet and one that is merely stationary relative to us. This logical flaw in the current explanation is often overlooked because the additional issue of the planet being pulled backward in its orbit is typically only mentioned for the moving planet, making it appear as if a moving planet presents a fundamentally different situation than a stationary one. But in actuality, a stationary planet would be pulled backward in the same manner by the “gravitational elastic band” as the spacecraft approached (Fig. 3-23); it is simply easier to overlook this fact with the stationary planet since the focus is on the motion of the spacecraft.
Fig. 3-23 Today’s Gravity-Assist Explanation: No Net Acceleration
As today’s gravitational force-based explanation in Figure 3-23 shows, the spacecraft would be accelerated forward by the gravity of the stationary planet, but would also pull the planet backward slightly in the process – just as commonly stated for the moving planet. Then, the situation would completely reverse itself after the spacecraft passed the planet. The planet would be pulled forward to its original position as it pulls on the departing spacecraft, slowing the spacecraft to its original approach speed as well. And, once again, there is no reason to expect this final situation to be any different with a moving planet – both the planet and the spacecraft would have no lasting speed change according to Newtonian gravitational theory.
A simple way to visualize this is to picture the whole diagram in Figure 3-23 moving to the left across the page. This would be entirely equivalent to the planet moving in its orbit, with the spacecraft initially catching up to the planet from behind. It is clear to see that nothing fundamentally changes simply because the overall diagram moves across the page. Both the planet and the spacecraft still end up with no net speed changes. Likewise, nothing fundamentally changes in the “momentum stealing” explanation of “gravity-assist” maneuvers simply because the planet moves along in its orbit. Whether the planet is moving or not, there would be no lasting slowing of the planet according to Newtonian gravitational theory, and no net speed increase imparted to the spacecraft – in short, no “momentum stealing” by the spacecraft.
According to Newtonian gravitational theory, gravity-assist maneuvers are impossible.
The belief that we understand the physics of this maneuver is a myth perpetuated by this flawed “momentum stealing” logic, which has simply been repeated, uncorrected for decades. This has occurred because we have come to believe unquestioningly in Newton’s gravitational force, and at this age of advanced science and technology it is almost inconceivable that a maneuver at the core of our space programs could be a completely unexplained – and unexplainable – mystery. Instead, we have simply learned to exploit a mysterious effect that obviously does occur, while attempting to invent logical justifications for it rather than allowing this mystery to stand in plain view, pointing to a deeper physical truth awaiting discovery.
The “Gravity-Assist” or Slingshot Effect is a Purely Geometric Effect.
Since it has just been shown that the “gravity-assist” maneuver cannot be explained using today’s gravitational theory, the following explanation from the perspective of Expansion Theory will refer to this maneuver by another commonly used term – the Slingshot Effect – to make a clear distinction between the two explanations. This is also a more appropriate term to use in a discussion that shows this maneuver to actually be a purely geometric effect that does not involve any type of gravitational force upon the spacecraft as its speed effectively increases.
First, we must consider what a trip through the solar system means from the perspective of Expansion Theory. Just as every atom, object, and planet must all expand at the same universal atomic expansion rate to remain the same relative size, so must the orbits of the planets around the Sun. This is not to say that empty space itself expands, as if it were a material object composed of expanding atoms, but that the speed and trajectory of orbiting objects continually moves them away from the expanding body they are orbiting, resulting in their orbit essentially expanding in step.
Our Moon, for example, coasts past our spherical planet, whose shape rapidly curves away as the Moon travels past, but is immediately counteracted by the planet’s expansion toward the Moon, maintaining a constant distance between them and a stable lunar orbit. The Moon continues coasting past and away and the Earth continues expanding in balance, so the overall stable Earth-Moon orbital system continually expands in step. The same is true of all planetary orbits about the Sun as well. If this weren’t the case, the planets and their orbits would not maintain their relative sizes, and their orbital distances from the Sun would effectively either continually increase or decrease, depending on whether the planets or their orbits were expanding at the greater rate. Therefore, the solar system could be thought of as a very large expanding “object” composed of equally expanding planetary rings centered on the Sun, each maintaining a constant relative distance from each other as they expand. And so traveling across the solar system actually involves the geometry of rising in orbit about the expanding orbital rings of the planets. Let’s see how this occurs.
The fundamentals of this principle can be seen even in the scenario of a spacecraft launched into orbit about the Earth. This is done by first rocketing vertically away from the ground, then slowly arcing toward a horizontal trajectory as the spacecraft is inserted into a coasting orbit around the planet. If the speed of the spacecraft exceeds the orbital speed for that particular altitude when it turns to fly horizontally into an orbit, it will continue to coast upward in a rising orbit, eventually settling into a stable orbit further out. And, if the spacecraft is traveling fast enough, it will actually rise into a trajectory that escapes the planet entirely. In this case, it does not simply coast straight off into deep space, but moves into a rising orbit about Earth’s enormous expanding orbital ring around the Sun (Fig. 3-24).
Fig. 3-24 Falling, Orbiting Earth, & Orbiting Earth’s Orbital Ring
Although it would take tremendous speed to overcome the expansion of such an enormous “object” as the Earth’s orbital ring around the Sun, the spacecraft, of course, was already traveling fast enough to equal this enormous expansion even before launch. The Earth essentially “skims the surface” of its huge expanding orbital ring as it orbits the Sun, speeding past at a rate that matches the outward expansion of this ring. Therefore, every object on the planet already has the required speed to equal the expansion of Earth’s orbital ring, and at the equator of our spinning planet objects even exceed this speed, which is why launches generally occur at the equator.
So the spaceship merely needs to fly fast enough relative to our planet to escape the Earth’s expansion, at which point it far exceeds the expansion of the Earth’s larger orbital ring, and now effectively continues away from the Sun as it actually rises in orbit about this large orbital ring. This is similar to one of the most common orbital maneuvers in our space programs today, known as the Hohmann Transfer Orbit, except that today’s terminology assumes our spacecraft travel across the solar system by literally rising in orbit about the Sun based on a mass-based gravitational force. Expansion Theory, on the other hand, shows this to actually be a rising orbit about the nearest inner orbital ring for purely geometric reasons instead.
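For readers unfamiliar with the conventional maneuver named above, the sketch below runs the standard textbook Hohmann-transfer arithmetic for an Earth-to-Mars trip, assuming circular coplanar orbits, commonly quoted values for the Sun's gravitational parameter and the two orbital radii, and ignoring the planets' own gravity wells; it also prints the roughly 0.46 km/s contribution of Earth's equatorial rotation mentioned in the preceding paragraph. It is included only to illustrate the speeds involved in today's framework, not as a statement about Expansion Theory:

```python
import math

mu_sun = 1.327e20   # m^3/s^2, Sun's gravitational parameter (standard value, assumed)
r_earth = 1.496e11  # m, radius of Earth's orbit (assumed circular)
r_mars = 2.279e11   # m, radius of Mars' orbit (assumed circular)

v_earth = math.sqrt(mu_sun / r_earth)         # ~29.8 km/s, Earth's orbital speed
v_mars = math.sqrt(mu_sun / r_mars)           # ~24.1 km/s, Mars' orbital speed
v_equator = 2 * math.pi * 6.378e6 / 86164.0   # ~0.46 km/s from Earth's rotation

# Vis-viva equation on the transfer ellipse joining the two orbits:
a_transfer = (r_earth + r_mars) / 2.0
v_depart = math.sqrt(mu_sun * (2.0 / r_earth - 1.0 / a_transfer))
v_arrive = math.sqrt(mu_sun * (2.0 / r_mars - 1.0 / a_transfer))

dv1 = v_depart - v_earth   # heliocentric speed change needed at Earth
dv2 = v_mars - v_arrive    # heliocentric speed change needed at Mars

print(f"Earth orbital speed      : {v_earth / 1e3:.1f} km/s")
print(f"Equatorial rotation boost: {v_equator / 1e3:.2f} km/s")
print(f"Hohmann delta-v          : {dv1 / 1e3:.2f} + {dv2 / 1e3:.2f} km/s")
```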
The spacecraft’s turn toward a speeding horizontal trajectory in Figure 3-24 altered its fate from slowing in its vertical climb and falling back to Earth (actually the Earth’s expansion catching up to it), to one where it continues to coast in a rising orbit. From the perspective of Expansion Theory, this very same principle is involved in traveling across the solar system, with the spacecraft continuing in rising orbits about the orbital rings of successive planets.
Getting to Jupiter, for example, would first involve rocketing away from the expanding Earth, turning to rise rapidly in orbit about the planet, and soon escaping the planet’s expansion and moving on to a rising orbit about the Earth’s enormous orbital ring. Then, as the spacecraft coasted toward Mars, it would effectively lose speed as the Earth’s orbital ring continued its accelerating outward expansion toward the spacecraft. However, just like the spacecraft that turns horizontally and enters a rising orbit about the Earth to avoid falling back to the ground, our interplanetary spacecraft encounters Mars, taking a similar turn as it effectively accelerates in a partial orbit around Mars, as shown earlier in Figure 3-23. But unlike Figure 3-23, the interplanetary spaceship does not have a decelerating trajectory relative to Mars as it departs; instead, before this can occur, the spaceship is effectively accelerated and launched into a rising orbit about Mars’ orbital ring, as in Figure 3-24.
This occurs because the partial orbit about Mars defines a geometry where the spaceship is effectively accelerated as it heads toward the expanding planet and swings around it, though no forces are involved in this effective acceleration. This is not unlike the effective acceleration of a dropped object due to the planet actually expanding toward the object. However, this effective increase in speed causes the spacecraft to exceed Mars’ expansion and escape into a definition as an object in a rising orbit about Mars’ orbital ring, much like the initial escape from Earth (again, Figure 3-24).
Remember that neither Newton’s First Law of Motion nor Newton’s “gravitational force” actually exists. Celestial dynamics are entirely defined by the relative geometry of expanding objects. If this geometry defines an effective acceleration toward Mars, which immediately becomes a rapid escape from Mars into a rising orbit about its orbital ring, then this is what occurs. This is the natural way events proceed in the solar system, and there is no reason this should not be the case. It is only our Newtonian thinking – with absolute momentum possessed by objects and unexplained gravitational forces – that turns this situation into an unexplainable “gravity-assist” maneuver. Instead, Expansion Theory shows that it is simply a natural geometric slingshot effect, just as there is a Natural Orbit Effect as explained earlier. There is no “gravitational elastic band” and no “momentum stealing” that we must attempt to justify. The spacecraft simply continues on from this effective acceleration and launch into a new rising orbit, coasting onward toward Jupiter (Fig. 3-25).
Fig. 3-25 Redefined Momentum: Expansion Theory Slingshot Effect
As further evidence of this, the Voyager 2 spacecraft reported no forces, stresses or strains due to the “gravity-assists” it received as it traveled through the solar system, even when such maneuvers accelerated it to over twice its original speed. Although such stress-free acceleration is impossible in classical physics, it is a natural and expected result in Expansion Theory, since this is actually an effective acceleration due to the purely geometric Slingshot Effect involving no forceful acceleration upon the spacecraft.
Once again, Newton’s First Law of Motion is not the literal truth – objects do not literally possess absolute momentum or speed, but only that which is defined by the expansionary geometry of the moment. This was seen earlier in the gravitationally unexplainable change from a parabolic plummet toward the ground, to a circular orbit about the planet simply because the geometry changed to one that continually overcame the Earth’s expansion once it passed a certain threshold in speed. Similarly, the Slingshot Effect changes the geometry from that of a slowing escape from the accelerating expansion of the Earth’s orbital ring, to an accelerating partial orbit about Mars, which then immediately becomes a rapidly rising orbit about Mars’ orbital ring on the way to Jupiter. Without the understanding that the dynamics in the solar system are purely due to changes in relative geometry as everything expands, we are left with today’s unexplained boosts in the absolute speed of spacecraft by a scientifically impossible gravitational force as they pass planets.
The above discussion highlights the stark difference between Newton’s universe of absolute speeds and forces, and that of Expansion Theory, which deals only with expanding relative geometry. The concept of our expanding solar system that was just introduced also helps to resolve an issue that has been an unanswered mystery for NASA scientists for well over a decade. This mystery has been widely published and discussed in journals and popular science magazines, becoming commonly known as the “Pioneer Anomaly.”
An Unexplained “Gravitational Anomaly”
The discussions throughout this chapter have shown that the current gravitational explanations of celestial events in our science today may serve as useful models, but cannot be the literal description of our observations. Therefore, since these models do not truly describe the underlying physics, it might be expected that difficulties and inconsistencies would arise that do not fit within these models. The inability of science to provide a viable explanation for the Slingshot Effect is one such example, though this has been hidden behind flawed logical justifications; however, one example recognized as a clear mystery is the anomalous behavior of spacecraft crossing our solar system.
The complexities and course corrections involved in traveling among moons and planets tend to mask subtle anomalies that may exist in the behavior of spacecraft compared to standard gravitational theory. Recently, though, NASA has noted unexplained course anomalies in five spacecraft passing Earth (the Cassini, Galileo, NEAR Shoemaker, Rosetta and MESSENGER spacecraft). We have also had a unique opportunity to see such effects much more clearly since the Pioneer 10 and 11 spacecraft sped past Pluto and left the solar system well over a decade ago. Since there are no longer any moons or planets for the Pioneer spacecraft to encounter, any anomalies that may exist in their motions since leaving the solar system would stand out clearly and consistently over time.
And indeed NASA scientists have noted and widely published observations of an unexplained additional pull on both Pioneer spacecraft back toward the Sun, exceeding the expected pull of gravity at that distance. This effect has been consistently recorded ever since the spacecraft left the solar system, having a constant unexplained decelerating effect on the spacecraft. Attempts to explain this effect using all known or proposed theories have so far been unsuccessful.
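To give a sense of scale, the sketch below takes the anomalous acceleration reported in the published Pioneer analyses, roughly 8.7 × 10⁻¹⁰ m/s² (a literature value, not one quoted in this text), and shows what a constant deceleration of that size accumulates to over a decade:

```python
a_anomaly = 8.74e-10   # m/s^2, anomalous acceleration reported in the literature
seconds_per_decade = 10 * 365.25 * 86400.0

delta_v = a_anomaly * seconds_per_decade            # accumulated velocity change
delta_x = 0.5 * a_anomaly * seconds_per_decade**2   # accumulated position offset

print(f"velocity change over a decade : {delta_v:.3f} m/s")
print(f"position offset over a decade : {delta_x / 1e3:.0f} km")
```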
However, when we look at this mystery from the perspective of Expansion Theory, these spacecraft journeys take on a very different quality. The situation now changes from that of spacecraft being pulled back by an unexplained additional attracting force, to that of an expanding solar system and the effect it has on spacecraft motion and the signals they send back to us.
In many fixed-distance situations our determination of speed and distance already has atomic expansion built-in, being defined within the context of our planet and solar system where expansion underlies the apparently fixed reference points all around us. It appears as if reference points on the ground, or the orbits of moons and planets, are fixed distances apart, when they are actually expanding apart but appear unchanging as everything else also expands equally. So, signal blips arriving here from a transmitter sitting on Pluto, for example, are considered to indicate a fixed orbital distance, even though the entire solar system, including Pluto’s orbital ring, is expanding outward.
But spacecraft that are not tied to this essentially fixed dynamic – those not in stable orbits about the Sun but which freely roam the solar system and beyond – often exhibit “anomalies” since their motion is open to the geometry of the underlying expansion dynamics, which we currently do not recognize. So as spacecraft head out well beyond Pluto the geometry of their motion becomes less like a distant planetary orbit, for which we now have well-refined models, and more like motion away from an expanding solar system, for which we do not. Also, as discussed earlier, today’s models of orbital mechanics do not actually use Newton or Einstein’s gravitational theories (though they are commonly thought to), so we do not normally put these theories to direct test on missions.
The behavior of spacecraft departing the vicinity of one planet or arriving at the next during such missions would proceed roughly as planned and expected, since our refined models and techniques would apply to most orbit departure and approach scenarios. Deviations in between are not uncommon and have been noted, but the complexities of maneuvering between various moons and planets, combined with expected minor course corrections along the way, typically mask either the existence or meaning of such anomalies. But once the Pioneer spacecraft left the solar system, coasting smoothly away, consistent deviations from our models and theories became apparent.
So then, as our spacecraft fly freely beyond Pluto, our solar system advances toward them, reducing the travel time of their signals heading back to Pluto. But since this effect is due to the size and expansion of our overall solar system while today’s space missions use the mass of the Sun and its gravitational pull on our spacecraft, discrepancies emerge from missions in such untested territory beyond Pluto.
Although the rest of the signal’s journey follows our usual models and expectations between Pluto and Earth, the initial discrepancy in the expected travel time from the spaceship to Pluto means the overall travel time to Earth differs from expectations (Fig. 3-26). This is the likely cause of discrepancies reported for spacecraft traveling beyond Pluto, represented as an “anomalous and mysterious additional pulling force toward the Sun” in today's scientific language. This underlying expansion-based effect, which necessarily differs from today’s gravitational overlay that has been erroneously superimposed on observations, is the likely answer to the currently unexplained “Pioneer Anomaly” noted by NASA.
Fig. 3-26 Expanding Solar System Differs From Newtonian Gravity
These issues and far more are addressed and resolved in Expansion Theory by Mark McCutcheon. We need a credible new Theory of Everything including a new theory of gravity. See these previous articles and excerpts:
Expansion Theory - Our Best Candidate for a Final Theory of Everything
Dark-Matter, Dark-Energy and the Big-Bang All Finally Resolved
Cosmology in Crisis (excerpt by Mark McCutcheon upon which the article above is based)
Breakthrough in Faster-Than-Light Travel and Communication, and the Search for Extraterrestrial Intelligence (SETI)
Gravity Breakthrough: Springing into a Gravitational Revolution
The Final Theory by Mark McCutcheon - Chapter 1 - Investigating Gravity
Roland Michel Tremblay
Standard Theory and Expansion Theory Maps
Density (symbol: ρ - Greek: rho) is a measure of mass per unit of volume. The higher an object's density, the higher its mass per volume. The average density of an object equals its total mass divided by its total volume. A denser object (such as iron) will have less volume than an equal mass of some less dense substance (such as water).
Density is given by the formula ρ = m / V, where:
- ρ is the object's density (measured in kilograms per cubic metre)
- m is the object's total mass (measured in kilograms)
- V is the object's total volume (measured in cubic metres)
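As a minimal illustration of the definition (the mass and volume below are arbitrary example values):

```python
mass = 2700.0    # kg, assumed mass of a sample (example value)
volume = 1.0     # m^3, assumed volume of the sample (example value)

density = mass / volume   # rho = m / V, in kg/m^3
print(density)            # 2700.0 kg/m^3
```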
Various types of density
Under specified conditions of temperature and pressure, the density of a fluid is defined as described above. However, the density of a solid material can be different, depending on exactly how it is defined. Take sand, for example. If you gently fill a container with sand and divide the mass of sand by the container volume, you get a value termed loose bulk density. If you then tap the container repeatedly, allowing the sand to settle and pack together, and divide the mass of sand by the smaller volume it now occupies, you get a value termed tapped or packed bulk density. Tapped bulk density is always greater than or equal to loose bulk density. In both types of bulk density, some of the volume is taken up by the spaces between the grains of sand. If you are interested in the density of an individual grain of sand, you need to measure either the envelope density or the absolute density.
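A short sketch of the two bulk-density calculations just described, using made-up numbers for a sand sample:

```python
sand_mass = 1.5         # kg, assumed mass of sand poured into the container
loose_volume = 0.0010   # m^3, volume occupied when poured loosely (assumed)
tapped_volume = 0.0008  # m^3, volume occupied after repeated tapping (assumed)

loose_bulk_density = sand_mass / loose_volume     # 1500 kg/m^3
tapped_bulk_density = sand_mass / tapped_volume   # 1875 kg/m^3

# Tapped bulk density >= loose bulk density, since tapping removes void space.
print(loose_bulk_density, tapped_bulk_density)
```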
In terms of SI base units, density is expressed in kilograms per cubic metre (kg/m³). Other units fully within the SI include grams per cubic centimetre (g/cm³) and megagrams per cubic metre (Mg/m³). Since both the litre and the tonne (metric ton) are also acceptable for use with the SI, a wide variety of units such as kilograms per litre (kg/L) are also used. In Imperial or U.S. customary units, the units of density include pounds per cubic foot (lb/ft³), pounds per cubic yard (lb/yd³), pounds per cubic inch (lb/in³), ounces per cubic inch (oz/in³), pounds per gallon (for U.S. or imperial gallons) (lb/gal), pounds per U.S. bushel (lb/bu), slugs per cubic foot (in some engineering calculations), and other less common units.
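A brief sketch of converting between some of these units (the conversion factors are standard; the water density is used only as an example value):

```python
KG_PER_M3_TO_G_PER_CM3 = 1.0e-3     # 1 kg/m^3 = 0.001 g/cm^3
KG_PER_M3_TO_LB_PER_FT3 = 0.062428  # 1 kg/m^3 is about 0.062428 lb/ft^3

rho_water = 1000.0                  # kg/m^3, approximate density of water

print(rho_water * KG_PER_M3_TO_G_PER_CM3)   # 1.0 g/cm^3
print(rho_water * KG_PER_M3_TO_LB_PER_FT3)  # about 62.4 lb/ft^3
```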
The maximum density of pure water at a pressure of one standard atmosphere is 999.972 kg/m³; this occurs at a temperature of about 3.98 °C (277.13 K).
From 1901 to 1964, a litre was defined as exactly the volume of 1 kg of water at maximum density, and the maximum density of pure water was 1.000 000 kg/L (now 0.999 972 kg/L). However, while that definition of the litre was in effect, just as it is now, the maximum density of pure water was 0.999 972 kg/dm³. During that period students had to learn the esoteric fact that a cubic centimetre and a millilitre were slightly different volumes, with 1 mL = 1.000 028 cm³ (often stated as 1.000 027 cm³ in earlier literature).
Measurement of density
A common device for measuring fluid density is a pycnometer. A device for measuring the absolute density of a solid is a gas pycnometer.
Density of substances
Perhaps the highest density known is reached in neutron star matter (see neutronium). The singularity at the centre of a black hole, according to general relativity, does not have any volume, so its density is undefined.
A table of densities of various substances:
| Substance | Density in kg/m³ |
| any gas | 0.0446 times the average molecular mass, hence between 0.09 and ca. 10 (at standard temperature and pressure) |
| for example, air | 1.2 |
See also density of gas
Note the low density of aluminium compared to most other metals. For this reason, aircraft are made of aluminium. Also note that air has a nonzero, albeit small, density. Aerogel is the world's lightest solid.
(Table omitted here: the impact of temperature on air, tabulated as temperature in °C against the speed of sound c in m/s, the density ρ in kg/m³ and the acoustic impedance Z in N·s/m³.)
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License. | http://all-science-fair-projects.com/science_fair_projects_encyclopedia/Density | 13
166 | In 1920, A. A. Michelson and F. G. Pease measured the angular diameter of Betelgeuse, α Orionis, with the 100-inch reflector at Mount Wilson. From its distance, they directly inferred its diameter, confirming that it was a huge star. This was the first direct measurement of stellar diameter; all other methods had been indirect and subject to uncertainty. Attempts to enlarge the phase interferometer to make the method applicable to a larger number of stars undertaken by Pease in later years were unsuccessful. In 1956 Hanbury Brown and Twiss applied a method they had devised for radio astronomy to visual astronomy and measured the angular diameter of Sirius, α Canis Majoris. This was the intensity interferometer, which removed most of the limitations of the phase interferometer, allowing measurements on a much larger sample of stars. A large interferometer was built at Narrabri, NSW, Australia, which finally provided a number of accurate stellar diameters by direct measurement after 1968.
These important and interesting developments are largely ignored in astronomy textbooks. The Michelson experiment is usually only briefly acknowledged, and the Hanbury Brown interferometer is not mentioned at all. One reason for this is the difficulty of explaining the method, especially without mathematics. Optics texts usually give a reasonable explanation, because the measurements are valuable examples of the wave nature of light, but the astronomy is slighted. For these reasons, I will attempt to give a thorough explanation of the methods of measuring stellar diameters by interferometry, together with the important dusky corners that they illuminate. First, we must review how stellar distances are found, which is a fundamental task of astronomy.
When we look at the stars through the years, we are impressed by their fixity, at least on human scales of time. Since Ptolemy looked at the stars, they have retained their places, except perhaps for slight differences in the case of a few stars such as Arcturus, which has moved about 1° since then. The lack of movement through the year, as the earth orbits the sun, and in secular time, can be considered evidence either of the fixity of the earth (if the stars are considered scattered in space) or that they are all at the same distance, or that they are extremely distant. Most old cosmologies put the stars on a spherical surface, the firmament, with heaven beyond, so their fixity was no problem. Other old cosmologies, which had stars scattered in space, interpreted the fixity as due to the fixity of the earth, though a few philosophers thought the stars greatly distant, a most unpalatable thought for most thinkers. There was no way to select between the alternatives.
As soon as the revolution of the earth about the sun was accepted, and the firmament banished, evidence for the apparent displacement of the nearer stars relative to the more distant, called parallax, was sought. Parallax is illustrated in the diagram at the right, together with some other definitions. A and B are the positions of the earth at times six months apart, so the base line AB is two astronomical units, 2a = 2 AU. The parallax is traditionally measured in arc-seconds, often written p" to emphasize the fact. The number 206,265 is the number of seconds in a radian. The distance from the sun O to the star S is d, measured in parsecs or light years. The light-year was originally for public consumption, to emphasize the inconceivably great distances involved, but remains a vivid and common unit.
Parallax, however, was not observed--a most unsatisfactory condition. William Herschel looked for it in vain. Not until 1838 was parallax finally found. Bessel thought that a star with large proper motion might be relatively close, so he chose 61 Cygni, a fifth-magnitude star in Cygnus, which appeared on a dense background of Milky Way stars that could form a good reference. By visual observation, using a special measuring instrument, he found a parallax of 0.296", which corresponded to a distance of 3.4 psc or 11 l.y. In the same year, Struve found 0.124" for Vega, α Lyrae, and Henderson 0.743" for α Centauri. These were huge distances, but were only to the apparently nearest stars, which surprised everyone. Henderson actually chose one of the nearest stars of all. Only dim, red Proxima Centauri has a larger parallax, at 0.786". It is about 4.2 l.y. distant.
The introduction of photography made it much easier to measure parallaxes. It was only necessary to compare plates taken 6 months apart, and special intstruments were developed to facilitate the task. When the two plates were presented alternately to the eye, nearby stars would jump back and forth, while the distant ones remained unmoved. In this way a large list of trigonometric parallaxes were determined, but they covered only the stars close to us, up to perhaps a parallax of 0.1", or a distance of 33 l.y.. These are merely our close neighbors in the vastness of space.
A star appears dimmer the farther it is away, according to m = M - 5 - 5 log p", where m is the visual magnitude of the star (smaller numbers mean brighter). Putting in p" = 0.1, we find m = M, called the absolute magnitude, the apparent magnitude if the star were viewed at a standard distance of 10 parsec. If by some magic we could infer the absolute magnitude M of any star we observed to have an apparent magnitude m, then the parallax could be determined by this formula based on the inverse-square law. There are many uncertainties here, the major ones the effect of interstellar absorption and, above all, the determination of M.
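A short numerical sketch of these two relations (d = 1/p" in parsecs, and the magnitude formula above) is given below. It is an addition, not from the original article, and the apparent magnitude of 5.2 taken for 61 Cygni is only an illustrative figure:

    import math

    def distance_pc(parallax_arcsec):
        return 1.0 / parallax_arcsec               # d in parsecs = 1 / p"

    def absolute_magnitude(m, parallax_arcsec):
        # rearranged from m = M - 5 - 5 log p"
        return m + 5.0 + 5.0 * math.log10(parallax_arcsec)

    p = 0.296                                       # Bessel's parallax for 61 Cygni
    print(distance_pc(p), 3.26 * distance_pc(p))    # about 3.4 pc, about 11 light-years
    print(absolute_magnitude(5.2, p))               # roughly M = +7.6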
The spectroscope shows that the spectra of stars can be classified into a regular series based principally on surface temperature: the types O, B, A, F, G, K, M and C, each quite recognizable. The Sun has a type G spectrum, and a surface temperature of 5800-6000K, for example. By classifying the spectra of the sample of stars whose distances we know by trigonometrical parallax, it was found by Hertzsprung and Russell that they roughly followed a single path on a plot of absolute magnitude M against spectral type, called the main sequence. Many stars did not follow this path, but were usually easy to recognize as different. If a star was presumed to be on the main sequence by all available evidence, then a look at its spectrum established its spectral class, and the Hertzsprung-Russell diagram gave an estimate of its absolute magnitude. Then its parallax, and so its distance, could be estimated. This is called a spectroscopic parallax, which extended our knowledge of stellar distances to really large distances, albeit approximately.
Any good astronomy text will show how larger and larger distances were estimated by other methods, such as Cepheid variables (the period gave a clue to the absolute magnitude, and these were very bright stars that could be seen a long ways) and the red shift, which extended distance knowledge far into the realm of the galaxies. However, trigonometric and spectroscopic parallaxes are sufficient for our present purposes. Accurate determination of distance is essential to accurate determination of stellar diameter.
It is worth remembering that no star has a parallax as large as 1", and that spectroscopic parallaxes are uncertain and subject to revision. For example, the parallax of Betelgeuse, α Orionis, was taken as 0.018" in 1920, but now a figure of 0.0055" is accepted. Of course, the contemplation of the vast distances in space is an essential part of the appreciation of astronomy, so very different from the views popularized by fiction, in which the Enterprise flits about space like a Portuguese trader in the Indian Ocean. Betelgeuse is 411 l.y. away. To get there about now, at the speed of light, you would have had to have departed when Shakespeare was born.
The other important preparation for our task is the understanding of inteference, interferometry and coherence. This is a big job, which requires reference to Optics texts for a thorough attack. Here we can only present the fundamentals in an abbreviated form. This should be sufficient for our purpose, however.
Observation teaches that if one candle gives a certain amount of illumination, then two candles give twice as much. The energy comes out on the light rays and just piles up where it is received, like so much snow. This reasonable and common-sense view is, of course, totally wrong, like the concept that matter is continuous, like cheese. It is useful in practice, but does not lead to understanding, only off into the weeds of speculation.
The truth is that light has an amplitude that moves on propagating wavefronts from its source. The amplitudes from different sources add at any point, and the energy received is the average value of the square of the resultant amplitude. Any effects caused by the adding of amplitudes are traditionally called interference, but amplitudes do not "interfere" with each other in the usual sense of the word, but seem blissfully independent of each other. We find that the wavelength of the amplitudes is quite small, only 500 nm for green visible light. Combined with the large velocity of light, 3 x 10^8 m/s, it turns out that the frequency is about 0.6 x 10^15 Hz, a really large value. This makes it impossible to observe the amplitude directly. All we can measure are time averages of functions of the amplitude. If A is the amplitude, then the intensity I = <AA*> is one such function. The angle brackets imply a time average over some suitable interval, much longer than the period of the oscillation of the amplitude. There are units to be considered, which introduce numerical factors, but we shall usually neglect them.
An amplitude can be represented by A(t) = Ae^(2πiνt) = ae^(iφ)e^(2πiνt), where the complex amplitude A has been given in polar form, with an amplitude a and phase φ. Unfortunately, we have to use the same word for what we have called a generalized amplitude and the modulus of a complex number. It would have been better to call a the modulus, but this is not usually done. The two meanings for "amplitude" are not easy to confuse, fortunately. This form of the amplitude is not typical of most light sources, and is a kind of idealization. However, it is approached closely by laser light, so it is easy for us to experience. When we do use laser illumination, the light-as-snow illusion is shattered, and there are fringes and spots everywhere. These are, of course, results of interference, and show that our amplitude picture is correct. Complex values are the easiest way to reflect the phase properties of an amplitude (in the general sense), and we need not be dismayed by their appearance.
Let's suppose we have two amplitudes, A = ae^(i0) and B = be^(2πix/λ), where λ is the wavelength λ = c/ν, and x is a linear distance. When x = 0, the two amplitudes will be in phase, and the net amplitude will be A + B = a + b. The intensity I = (a + b)^2, while the intensities in the unmixed beams are a^2 and b^2. The intensity in the mixed beam is the sum of the intensities in each beam alone, plus the amount 2ab, the interference term. If the two beams have equal amplitudes, then when the two beams fall together, the total intensity is four times the intensity of one beam, or (one candle) + (one candle) = (four candles), or 1 + 1 = 4. We never see this with candles, but we do with lasers, so the strange mathematics is quite valid. Energy is conserved, of course, so this extra intensity must come from somewhere else, where the intensity is less.
If x = λ/2, then A = a and B = be^(iπ) = -b. Now when we superimpose the two beams, the resultant amplitude is a - b, and the intensity is I = (a - b)^2. The intensity is the sum of the separate intensities plus the interference term -2ab. If a = b, the intensity I = 0. Here, we have (one candle) + (one candle) = (zero candles), or 1 + 1 = 0. It is clear where the intensity came from for x = 0. As x increases steadily, the intensity forms bright fringes for x = 0, λ, 2λ, etc. and dark fringes for x = λ/2, 3λ/2, etc. If the amplitudes of the two beams are equal, the dark fringes are black, and the bright fringes are 4 times the average value. This gives the maximum contrast or visibility to the fringes. If b is less than a, the maxima are not as bright and the minima are not as dark. If b = 0, then the fringes disappear, and their visibility is zero. Michelson defined the visibility of fringes as V = (Imax - Imin)/(Imax + Imin), which ranges from 0 to 1.
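The following short sketch (an addition, not part of the original text) evaluates the two-beam intensity and Michelson's visibility numerically for an assumed wavelength and equal amplitudes:

    import numpy as np

    lam = 500e-9                         # assumed wavelength, m
    a, b = 1.0, 1.0                      # equal beam amplitudes
    x = np.linspace(0.0, 3 * lam, 601)   # path difference
    I = a**2 + b**2 + 2 * a * b * np.cos(2 * np.pi * x / lam)

    V = (I.max() - I.min()) / (I.max() + I.min())
    print(I.max(), I.min(), V)           # maxima near 4, minima near 0, visibility near 1
    # With b = 0.5 the maxima drop to 2.25, the minima rise to 0.25, and V falls to 0.8.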
Fringes had been observed since the early 17th century, when objects were illuminated by light coming from pinholes. Newton's rings are only one of the interference phenomena discovered by him. It was the explanation of the fringes that was lacking. No explanation was satisfactory until Thomas Young's experiments around 1801. Young did not discover fringes, but explained them in the current manner, which was no small accomplishment. He observed the interference of two beams, of the type just described, and measured the wavelength of light in terms of the fringe spacing. His experiment, in an abstract form, is a standard introduction to interference. It is not an easy experiment, especially when performed with a candle. Young actually used a wire, not two slits, which would have given an impossibly low illumination. When he held it before a pinhole through which a candle shone, fringes were seen in the shadow of the wire by direct observation, and their spacing could be compared with the diameter of the wire.
You can reproduce the experiment with an LED, a piece of #22 wire (diameter 0.6439 mm), and a hand lens of about 100 mm focal length, as shown in the diagram at the left. I used a yellow high-intensity LED in a clear envelope, viewed from the side where the source is practically a pinhole. The shield only reduces glare. I did not actually count the fringes (a micrometer eyepiece would make this easy) because I was holding the wire by hand, but the fine fringes were quite visible. The center fringe was a bright one.
This experiment uses some little-known characteristics of diffraction. If you look at a wire held at some distance from a pinhole, with your eye in the shadow of the wire, two short bright lines will be seen at the top and bottom edges of the wire. These act as line sources of light, producing two beams that interfere to make fringes in the shadow. There are also fringes outside the shadow, with different phase relations (the pattern is not continuous at the shadow edges), but they are difficult to see in the glare. This gives a much larger intensity than two slits would in the same places, and made the experiment possible for Young. Of course, one could use two clear lines scratched carefully on a blackened photographic emulsion, as is done in schools, but the effect is not as good.
The geometry of a two-beam interference experiment is shown at the right. The source S is behind a pinhole that makes the illumination beyond the pinhole spatially coherent. That is, it all comes from the same direction and meets the two apertures with equal amplitudes. There may also be filtering that makes the light monochromatic, or temporally coherent, so that it approximates the model that we have been using. The distance D must be sufficient if the light at the apertures is to be coherent, something we will have much to say about below. However, the fringe spacing does not depend on D in any way. The screen is at a distance f from the apertures, presumed much larger than the separation a of the apertures. A lens of focal length f placed at its focal length from the screen makes the geometry exact in a short distance, and prevents too much spreading of the illumination. The fringe spacing is fλ/a, a useful relation to remember.
For the suggested Young's experiment, λ = 600 nm (roughly), f = 100 mm, and a = 0.6439 mm, giving a spacing of 0.093 mm, which seems roughly in agreement with observation. The wire will be approximately 7 fringes wide as seen through the lens. This is a way of measuring the wavelength if you know the wire diameter, or the wire diameter if you know the wavelength. Try #30 wire and observe that the fringes are not only wider, but fewer fit into the shadow.
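As a quick check (an addition, not from the original), the fringe spacing fλ/a for these assumed values can be computed directly:

    lam = 600e-9          # rough LED wavelength, m
    f = 100e-3            # hand-lens focal length, m
    a = 0.6439e-3         # diameter of #22 wire, m

    spacing = f * lam / a
    print(spacing * 1e3)  # about 0.093 mm
    print(a / spacing)    # about 7 fringes across the wire's shadow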
An interference problem of interest to us is what happens with a circular aperture of diameter 2a. We must now superimpose the amplitudes from each area element of the aperture, and this calls for a double integral, using polar coordinates. Such problems are called diffraction, but there is no difference in principle with interference. The amplitude from each element of area will be the same, but its phase will differ depending on the distance from the element to the screen. This integral is done in all texts on physical optics, and the result is what is most important to us. This result is I(r) = I(0)[2J1(z)/z]^2, where z = 2π[r/(2λf/d)], where d is the diameter of the aperture. We see the same factor λf/a as for the two apertures, where now a = d/2. The function J1(x) is the Bessel function of order 1, which behaves like (x/2) - (1/2)(x/2)^3 + (1/12)(x/2)^5 - ... for small x. This is the famous Airy pattern, first derived by G. B. Airy, Astronomer Royal, on the basis of Fresnel's new wave theory. The intensity is strongly concentrated in the central maximum. 91% of the intensity is in the central maximum and the first bright ring surrounding it. If the intensity distribution across the disc is not uniform, the diffraction pattern will change slightly, but the general characteristics will be the same.
The Bessel function involved is zero when z = 3.83, so the radius of the first dark ring is a = [(2)(3.83)/(2π)](λf/d) = 1.22λf/d. The angle (in radians) subtended by this radius at the aperture is θ = 1.22λ/d. The image of a star in a perfect telescope is an Airy disc. The d = 100" (2.54 m) Mount Wilson reflector has a Cassegrain focal length f = 40.84 m. If the effective wavelength is 575 nm, then θ = 2.76 x 10^-7 rad = 0.057". Since neither the telescope nor the seeing can be perfect, this is a limit that can only be approached more or less closely. As we shall see, it is of the order of the angle subtended by the diameters of the largest stars, so actually seeing the disc of a star in a telescope is a vain hope. There are now some larger telescopes, including the 200" Hale reflector and the 6 m reflector in Russia, but this does not change the situation very significantly. Photographs have been taken with the 4 m telescope at Kitt Peak that with special processing have seemed to show some details of Betelgeuse. Direct observation of stellar discs seems just beyond practicality, unfortunately.
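A brief sketch of this resolution limit, added here and leaning on SciPy's Bessel function of order 1, is:

    import numpy as np
    from scipy.special import j1

    lam, d = 575e-9, 2.54                     # wavelength (m) and aperture diameter (m)

    def airy(z):
        return (2.0 * j1(z) / z) ** 2         # normalized Airy intensity, z != 0

    theta = 1.22 * lam / d                    # angular radius of the first dark ring
    print(theta, np.degrees(theta) * 3600.0)  # about 2.76e-7 rad, about 0.057 arcsec
    print(airy(3.8317))                       # essentially zero at the first dark ring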
Stellar diameters are such an important parameter in theories that some way to estimate them before they could be directly measured was sought. The usual method was to estimate the total radiated energy from the absolute magnitude. Magnitudes relative to total radiated energy are called bolometric magnitudes, and can often be obtained by adding a (negative) correction to the visual magnitude. The observed magnitudes may be affected by interstellar extinction as well. The total radiation was set equal to the known rate of radiation from a black body at the surface temperature of the star, which could be inferred from its spectrum. The effective temperature is defined in terms of Stefan's Law with unit emissivity, which bypasses the problem of emissivity without solving it. Since the emission of energy per unit area is proportional to the fourth power of the effective absolute temperature T, and the area is proportional to the square of the diameter of the star, the diameter D is proportional to the square root of the luminosity L (an exponential function of the bolometric magnitude) and inversely proportional to the square of the effective temperature T. If D', L' and T' = 5800K are the same quantities for the Sun, then D/D' = √(L/L')(5800/T)^2. The diameter of the Sun is D' = 1.392 x 10^6 km, and its luminosity L' = 3.90 x 10^33 erg/s.
Let's consider Betelgeuse, α Orionis. This M2-spectrum red star is said to have a luminosity 13,500 times that of the sun (it is variable, but this is a typical value at maximum). Its surface temperature is about 3000K. This gives D/D' = 434, or a diameter of 6.04 x 10^8 km, or 376 x 10^6 miles. A star as bright and as cool as Betelgeuse has to be large.
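For concreteness, the same estimate can be reproduced in a few lines (an illustrative addition using the figures quoted above):

    import math

    D_sun = 1.392e6                      # solar diameter, km
    L_ratio, T = 13500.0, 3000.0         # Betelgeuse figures quoted above

    D = D_sun * math.sqrt(L_ratio) * (5800.0 / T) ** 2
    print(D / D_sun, D)                  # about 434 solar diameters, about 6.0e8 km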
Actual light disturbances are not as simple as the sinusoidal variations with constant amplitude and phase that we have discussed above. Laser light may approximate such disturbances, but not the light from candles or stars. This light is the resultant of a multitude of amplitudes from individual atomic emissions, which take place independently. Two signals of different frequencies get out of step quickly, the more quickly the more they are different in frequency. Random phase changes between two signals of the same frequency cause interference fringes to move. The light from thermal sources--candles and stars--is in the nature of a noise signal, with a wide frequency spectrum and constantly fluctuating phase. It is no surprise that we do not observe interference fringes in the usual conditions. What is surprising is that we can devise arrangements in which fringes appear. To do this, we must arrange that the phase relations between signals coming from the same atomic sources are constant. One way of doing this was described above, where we used a pinhole to define the source, and a filter to reduce the bandwidth. When we illuminated the two apertures with this light, stable fringes were then produced.
The light from two different thermal sources cannot be made to produce fringes. Two different lasers can produce fringes, but the experiment is rather difficult even for such ideal sources. Fringes can be made in white light, but only a few colored fringes are seen near the point where the phase difference is zero.
When the light from two points can be made to form fringes, the signals are said to be coherent. When no fringes are seen, the signals are called incoherent. In the two-aperture experiment, if we make the pinhole larger and larger, the fringes lose contrast or visibility, and eventually disappear. The light falling on the apertures becomes less and less coherent as this takes place. This simple observation shows the basis for determining stellar diameters by interferometry. We only have to find the limits of the region where the light from the star is coherent, using interference, and this is directly related to the apparent angular extent of the source. We take apertures farther and farther apart, and find out where the fringes disappear.
To analyze this quantitatively, we introduce a quantity called the degree of coherence, γ12 = γ(P1,P2,τ) = γ(r12,τ). P1 and P2 are the two points considered, r12 is the distance between them, and τ is the time difference in arrival at the screen (observing position). The degree of coherence is a complex number, though we usually consider its modulus, |γ12|. The modulus of the degree of coherence varies between 0 (incoherent) and 1 (completely coherent).
To find out how γ is defined in terms of the light disturbances, we return to the two-aperture experiment. If A1 and A2 are the complex time-dependent signals from the two apertures, then the signal at the observation point Q is K1A1 + K2A2, where the K's are propagators that describe the changes in amplitude and phase as we go from an aperture to the screen. They are of the form K = ie^(2πi(t - t1))/r, the form typically used in diffraction integrals. We will not use these expressions explicitly, so do not worry about them. The curious nonintuitive factor "i" makes the phases come out properly. To find the intensity at Q, we multiply the signal by its complex conjugate and take the time average. The intensity of beam 1 alone is I1 = <A1A*1>, with a similar expression for I2. We find I = |K1|^2 I1 + |K2|^2 I2 + 2 Re[K1K2*<A1A2*>], where Re stands for "real part." If z is a complex number, Re(z) = (z + z*)/2.
We now define Γ12 = <A1(t + τ)A2*(t)> and call it the mutual coherence of the light signal at the two points. In statistical language, it is the cross-correlation of the two signals. The complex degree of coherence is simply the normalized value of this quantity, γ12 = Γ12/√(I1I2). Using Schwartz's Inequality, we find that 0 ≤ |γ| ≤ 1.
Now, using the intensities at Q (including the K's) we have the interference formula I(Q) = I1 + I2 + 2√(I1I2)Re[γ12(τ)], where τ is the time difference (s2 - s1)/c between the paths P1Q and P2Q. The visibility of the fringes, V = |γ|, so if we measure V, we then know |γ|.
To better understand what this means, let the two signals at Q be ae^(2πiνt) and be^(2πiν(t + τ)). Then, Γ12 = abe^(-2πiντ) and γ12 = e^(-2πiντ), so Re(γ) = cos(2πντ). This gives I(Q) = I1 + I2 + 2√(I1I2)cos(2πΔs/λ), where Δs is the path difference. This is just the formula we found earlier for the two-aperture problem. We see that the degree of coherence is unity here.
We also see that the phase of γ has a rapidly-varying part for monochromatic light, 2πντ. Ordinary narrow-band or quasimonochromatic light is very much like narrow-band noise. The frequency varies randomly over a small range centered on the average value, while the amplitude varies up and down irregularly. For such signals, it is useful to separate the rapidly varying part (at the average frequency) from the more slowly varying part. Hence, we write γ = |γ(τ)|e^(i[α(τ) - δ]), where δ is the part we have just looked at that involves the path difference, and α(τ) is the rest. The dependence on τ reflects what is often called the temporal coherence, while the dependence on the location of the source points reflects spatial coherence. In either case, we remember that coherence is the ability to produce stable interference fringes.
A signal with a frequency bandwidth Δν shows incoherence after a time interval of the order of Δτ = 1/Δν. Visible light has an effective bandwidth of roughly 500 nm to 600 nm (not the extremes of visual sensitivity, of course), so that Δν = 1 x 10^14 Hz, and Δτ = 1 x 10^-14 s. The light has barely time to wiggle once before coherence is destroyed. It is no wonder that fringes are not seen in white light except in special cases, and even then only one or two. This is a result of temporal coherence alone. By restricting the frequency bandwidth, the coherence time Δτ can be increased to more comfortable values.
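A small numerical sketch of this estimate (an addition; the 10 nm band is meant only to suggest the kind of narrow interference filter used in the instruments described later):

    c = 3e8                                   # m/s

    def coherence_time(lam1, lam2):
        dnu = abs(c / lam1 - c / lam2)        # bandwidth in Hz
        return 1.0 / dnu                      # coherence time ~ 1 / bandwidth

    print(coherence_time(500e-9, 600e-9))     # about 1e-14 s for "white" light
    print(coherence_time(438e-9, 448e-9))     # about 6.5e-14 s for a 10 nm filter near 443 nm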
This theorem, the van Cittert-Zernike theorem, tells us the dependence of the coherence on distance for an extended source, such as a pinhole or a star. We will apply it only to a circular source of uniform brightness, but it can also be used for much more general sources.
The theorem states that: "The complex degree of coherence between P1 and P2 in a plane illuminated by an extended quasi-monochromatic source is equal to the normalized complex amplitude in the diffraction pattern centered on P2 that would be obtained by replacing the source by an aperture of the same size and illuminating it by a spherical wave converging on P2, the amplitude distribution proportional to the intensity distribution across the source." We have already discussed diffraction from a circular aperture, which is precisely the diffraction pattern we require in the case of a uniformly bright disc. The coherence is 1 when P1 and P2 coincide, and decreases according to J1(z)/z as P1 moves outwards, becoming zero where the diffraction pattern has its first dark ring.
The distance for γ = 0 is, therefore, given by a = 1.22λ/θ, where θ is the whole angle subtended by the source at P2, measured in radians. If we take the effective wavelength as 575 nm, and measure the angle in seconds, we find a = 0.145/θ" m. This holds for pinholes and stars. The smaller the angle subtended by the source, the larger the radius of coherence a. A pinhole 0.1 mm in diameter subtends an angle of 69" at a screen 300 mm distant, so a = 2.1 mm. The Sun or Moon subtend an angle of about 0.5° or 1800" at the surface of the Earth, so a = 0.08 mm. Venus subtends an angle between 9" and 60" (its distance from the Earth varies greatly), so a = 2.3 mm to 14.4 mm. Venus is often not a disc, especially when closest to the Earth, but this gives an idea of its radius of coherence. Jupiter subtends an angle about the same as the maximum for Venus, so the radius of coherence of its light is 2 mm or so. On the other hand, Betelgeuse subtends an angle of 0.047" when it is at its largest, so a = 3.1 m or about 10 ft. The discs of most stars subtend much smaller angles, so for all stars a > 3 m, and often hundreds of metres.
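The rule of thumb a = 0.145/θ" m (at 575 nm) is easy to tabulate; the following lines, added here as an illustration, reproduce the figures quoted above:

    lam = 575e-9
    arcsec = 1.0 / 206265.0                      # radians per arc-second

    def coherence_radius_m(theta_arcsec):
        return 1.22 * lam / (theta_arcsec * arcsec)

    for name, theta in [("0.1 mm pinhole at 300 mm", 69.0),
                        ("Sun or Moon", 1800.0),
                        ("Betelgeuse", 0.047)]:
        print(name, coherence_radius_m(theta))   # about 2.1 mm, 0.08 mm and 3.1 m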
We now have all the theory we need to understand stellar interferometry. It is clear that we are looking for the radius at which the illumination has zero coherence, and this radius gives us the angular diameter. The linear diameter is obtained by multiplying by the distance. If we knew the diameter to start with, inverting this method would give us the distance.
One subject we can take up before describing stellar interferometers is the familiar phenomenon of twinkling, or scintillation of celestial bodies. This is caused by slight variations in the density of the atmosphere due to turbulence, wind shear and other causes, and is closely related to the "heat shimmer" seen on summer days over hot surfaces. The stars seem to jitter in position, become brighter or dimmer, and show flashes of color. The amount of scintillation varies greatly depending on elevation and weather, and sometimes is almost absent. Scintillation is the cause of good or poor telescopic "seeing." When the seeing is bad, stellar images wobble and jump, and the resolving power is reduced.
It is often observed that the planets do not scintillate to the same degree as the stars, but the effect is variable. Venus scintillates and shows flashes of color when a thin crescent and closest to us, but mostly the planets show a serene and calm face even when nearby stars are twinkling. The reason for this difference is often ascribed to the fact that the planets show an apparent disc, while the stars do not. However, even the edges of the planet images do not wiggle and jump, so this is probably not the reason for the difference.
It is much more reasonable that since a planet's light is incoherent over any but a very small distance, interference effects do not occur. The image may still move slightly depending on its refraction by changes in density. With a star, however, the area of coherence may include whole turbulence cells, and the randomly deflected light may exhibit interference, causing the variations in brightness and the colors. So scintillation does depend on the apparent size of the body, but in a more esoteric way. Exactly the same effects occur for terrestrial light sources, though they must be quite distant to create large areas of coherence.
A. A. Michelson was led to the stellar interferometer through his experience using his original interferometer, now named after him, with light consisting of a narrow line or lines, such as sodium light with its D line doublet. The fringes go through cycles of visibility as the path length is varied, and from the variations the structure of the line can be unraveled. Doing this with sodium light was a popular laboratory exercise in optics.
If the end of the telescope tube is closed with a mask with two apertures, fringes are produced at the focus, showing that the light is coherent, if the two apertures are close enough together. With the Sun, or Jupiter, fringes would not appear at all because of the small coherence radius. With stars, however, the decrease in fringe visibility would be evident, and from the separation of the apertures for zero visibility the angular diameter could be found.
No telescope was large enough in aperture to give γ = 0 for even the stars with the largest angular diameters, even giant Betelgeuse, so light had to be collected from a greater separation by means of mirrors mounted on a transverse beam. A 20 ft beam was selected for the initial experiments, which should be sufficient for measuring Betelgeuse. The telescope selected was the 100-inch reflector at Mount Wilson, not because of its large aperture, but because of its mechanical stability. A 20-ft steel beam of two 10-inch channels is certainly not light, although its weight was reduced as much as possible by removing superfluous metal. The two outer mirrors directed the light to two central mirrors 45 in. apart, which then sent the light toward the paraboloidal mirror.
The 100-inch (2.54 m) telescope could be used at a prime (Newtonian) focus at a focal length of 45 ft (13.72 m), the beam diverted to the side by a plane mirror near the top, or at a Cassegrain focus at a focal length of 134 ft. (40.84 m) after reflection from a hyperboloidal mirror also near the top, and diversion to the side near the bottom of the telescope. The latter was chosen to give greater magnification, 1600X with a 1" efl eyepiece. The average wavelength for Betelgeuse was taken as 575 nm. With the 45 in. separation of the apertures and 40.84 m focal length, the fringe spacing is 0.02 mm, as you can easily check from these figures. The fringes were observed visually, and were easily seen, even when the image was unsteady but could still be followed by the eye.
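Checking the quoted fringe spacing from these figures (a quick added computation, using the same fλ/a relation as before):

    lam = 575e-9              # effective wavelength, m
    f = 40.84                 # Cassegrain focal length, m
    a = 45 * 0.0254           # separation of the central mirrors, m

    print(f * lam / a * 1e3)  # fringe spacing, about 0.02 mm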
The path length from each of the outer mirrors to the center mirrors had to be carefully adjusted for equality, because of the limited temporal coherence of the light (as mentioned above; these are white-light fringes). This was done by glass wedges in one beam. A direct-vision prism to observe the fringes in a restricted bandwidth made them easier to locate. The interferometer was very difficult to align, but satisfactory fringes were seen. When the outer mirrors were moved to cause the fringes in the image of Betelgeuse to disappear, it still had to be verified that other stars gave fringes under the same conditions, in case the absence of fringes was due to some other cause. One star chosen for this test was Sirius, which gave prominent fringes.
For Betelgeuse, a separation of 121 in (3.073 m) caused disappearance of the fringes, so the angular diameter was 0.047". At the time, the parallax of Betelgeuse was thought to be 0.018", but it now seems to be closer to 0.0055", for a distance of 182 psc or 593 l.y.. From these numbers we find the diameter of Betelgeuse to be 1.28 x 10^9 km, or 797 x 10^6 miles. This is about twice as large as the diameter estimated from luminosity, which implies that the star is cooler than expected, or its emissivity is lower for some reason. It is often stated that Betelgeuse would fit within the orbit of Mars. This was from the older figures; it actually would extend halfway to Jupiter, well within the asteroid belt. The angular diameter of Betelgeuse varies from 0.047" at maximum down to about 0.034" at minimum, since the star pulsates irregularly.
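The arithmetic behind these numbers can be laid out explicitly (an added sketch using the separation at which the fringes vanished and the modern parallax quoted above):

    lam = 575e-9                          # effective wavelength, m
    sep = 121 * 0.0254                    # mirror separation for zero visibility, m

    theta = 1.22 * lam / sep              # angular diameter in radians
    d_pc = 1.0 / 0.0055                   # distance from the parallax, parsecs
    km_per_pc = 3.086e13

    print(theta * 206265.0)               # about 0.047 arcsec
    print(theta * d_pc * km_per_pc)       # linear diameter, about 1.3e9 km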
The verification of the large size of Betelgeuse was one of the principal results of the 20-ft interferometer. This makes its average density quite small, but of course it becomes more concentrated toward the center where the thermonuclear reactions are taking place. The star long ago exhausted the hydrogen in its core, and began burning hydrogen in an expanding shell as it swelled and cooled to a red supergiant. Now the pulsations show that it is beginning to light its helium fire at the center, which is fighting with the hydrogen reactions further up. The helium will be consumed in a relatively short time, and a star as massive as Betelgeuse is then expected to end in a supernova explosion rather than shrink quietly to a white dwarf. That seems to be the history according to the stellar theorists, at least.
The diameters of seven stars in all were measured by the 20-ft interferometer, down to an angular diameter of 0.020", where some extrapolation had to be made. All these stars were red supergiants with spectra from K1 to M6, including the remarkable ο Ceti, Mira, that pulsates more deeply and regularly than Betelgeuse. The angular diameter of Mira at maximum was 0.047", the same as Betelgeuse's, but it is five times closer, so its linear diameter is about 160 x 10^6 miles; Venus could revolve within it.
In hopes of measuring smaller diameters, perhaps even those of main-sequence stars, a larger interferometer was designed and built by Pease, with a 50-foot beam and mounted on the 200-inch Hale telescope at Palomar. This instrument was very difficult to operate, but the measurements on Betelgeuse and Arcturus agreed with those from the earlier instrument, while those on Antares differed significantly. Little was added by the new instrument, but it showed that a limit had been reached, largely because of the difficulty of keeping the two paths equal and the bad effects of scintillation. Modern techniques might overcome these limitations to some degree, but no great improvement is to be expected.
The correlation or intensity stellar interferometer was invented in about 1954 by two remarkable investigators, R. Hanbury Brown and R. Q. Twiss. A large interferometer was completed in 1965 at Narrabri, Australia, and by the end of the decade had measured the angular diameters of more than 20 stars, including main sequence stars, down to magnitude +2.0. This interferometer was equivalent to a 617-foot Michelson stellar interferometer, was much easier to use, and gave repeatable, accurate results.
Hanbury Brown was a radio astronomer at the University of Manchester's Jodrell Bank observatory, and Twiss was at the U.K. Services Electronics Research Laboratory at Baldock. They united the resources necessary to conceive and execute the project between them. They seem to have been not very much appreciated in their native country, but prospered in Australia, which offered them the opportunity to develop their ideas.
The intensity interferometer was introduced as a new type of interferometer for radio astronomy, but it was soon realized that it could be applied to the problem of stellar angular diameters as a successor to the Michelson interferometer of thirty years before. It works on the same fundamental principle of determining the coherence of starlight as a function of the distance between two points, but the means of finding the coherence is totally different, and relies on some esoteric properties of quasi-monochromatic light. The diameter of Sirius, the first main-sequence star whose diameter was measured, was determined in preliminary tests at Jodrell Bank in 1956, under difficult observing conditions. This was not an accurate result, but it was a milestone. To explain the interferometer, the best way to start is to look at its construction.
A diagram of the Narrabri interferometer is shown at the right. The two mirrors direct the starlight to the photomultipliers PM (RCA Type 8575, and others). Each mirror is a mosaic of 252 small hexagonal mirrors, 38 cm over flats, with a three-point support and an electrical heater to eliminate condensation. They are aluminized and coated with SiO. The focal lengths are selected from the range naturally produced by the manufacturing processes to make the large mirrors approximate paraboloids. Great accuracy is not necessary, since a good image is not required, only that the starlight be directed onto the photocathodes. The starlight is filtered through a narrow-band interference filter. The most-used filter is 443 nm ± 5 nm. The photocathode is 42 mm diameter, and the stellar image is about 25 x 25 mm. This is all of the optical part of the interferometer; all the rest is electronics.
The mirrors are mounted on two carriages that run on a circular railway of 188 m diameter and 5.5 m gauge. At the southern end of the circle is the garage where the mirrors spend the day and maintenance can be carried out. A central cabin is connected to the carriages by wires from a tower. This cabin contains the controls and the electronics. Note that the separation of the mirrors can be varied from 10 m up to 188 m. The mirrors rotate on three axes to follow the star. One of the small mirrors in each large mirror is devoted to the star-guiding system, that consists of a photocell and a chopper. This keeps the mirrors locked on the star under study, without moving the mirror carriages. The light-gathering power of the 6.5 m diameter mirrors is much greater than that of the small mirrors in the Michelson stellar interferometer, allowing the Narrabri interferometer to operate down to magnitude +2.0. The available baseline distances permit measurements of angular diameters from 0.011" to 0.0006".
The photocurrent, which is about 100 μA, is a measure of the total intensity, required for normalizing the correlation coefficient. This is measured and sent to the data-handling devices (connections not shown). The photocurrent is sent to a wide-band amplifier, then through a phase-reversing switch, and then through a wide-band filter that passes 10-110 MHz. This bandwidth excludes scintillation frequencies, eliminating their effects. The signals from the two photomultipliers then are multiplied in the correlator. The phase of one of the photocurrents is reversed at a 5 kHz rate, which makes the correlation signal change sign at the same rate, but leaving the noise unchanged. A tuned 5 kHz amplifier at the output of the multiplier selects just this signal, which is then synchronously rectified. This is a standard method of increasing the signal-to-noise ratio in situations such as these. The signal-to-noise ratio in the photocurrent is about 1 to 10^5. The other channel is reversed at a much slower rate, once every 10 seconds, and the correlation for each state is separately recorded. When these values are subtracted, the changes in gain and other effects are eliminated, and the result is the desired correlation.
The electrical bandwidth of 100 MHz implies that the signal paths from the photomultipliers to the correlator must be equal to within about 1 ns to avoid loss of correlation due to temporal coherence. This seems like a very tight requirement at first view, but it is much easier to equalize electrical transmission lines than optical paths. The 1 ns corresponds to about 1 ft in length, which now does not seem as bad. In the case of the Michelson stellar interferometer, the paths must be equal to a wavelength or so, and this was the most important factor limiting its size.
Small lamps in the photomultiplier housings can be turned on when the shutters are closed. These lamps give uncorrelated light, so any correlation that is recorded when they are on is false. In another test, perfectly correlated noise is supplied to both channels from a wide-band noise generator for measuring the gain of the correlator. These and other tests are carried out during an observing session. The correlator is the most critical part of the interferometer, and most of the effort went in to making it as accurate and reliable as possible.
Skylight is allowed for by measuring the intensity and correlation with the mirrors pointing to the sky near the star. One contribution to the correlation was anticipated, that of the Cherenkov radiation from cosmic rays. This is a faint blue streak of light (that both mirrors would see simultaneously, and would thus correlate) that is produced when the cosmic ray is moving at greater than the speed of light (c/n) in the atmosphere. This proved to be unobservable. Meteors would have the same effect, but they are so rare that this is ruled out. Observations were not carried out when the Moon increased the skylight to an unacceptable level.
The theory of how the correlation in this case is related to the degree of coherence is similar to what we explained in connection with the Michelson instrument, but happens to be more involved, so only the idea will be sketched here. The filtered starlight is a quasi-monochromatic signal, in which the closely-spaced frequency components can be considered to beat against one another to create fluctuations in intensity, <AA*>. This is a general and familiar aspect of narrow-band noise. There are also accompanying fluctuations in phase, but these are not important here. The correlation measured in the intensity interferometer is proportional to <ΔI1ΔI2>, where ΔI = I - Iav is the fluctuation in I. If expressions for the quantities are inserted in terms of the amplitudes, it is found that the normalized correlation is proportional to |γ12|^2, the square of the fringe visibility in the Michelson case. The phase information is gone, but the magnitude of the degree of coherence is still there, and that is enough for the measurement of diameters.
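This |γ12|^2 relation for thermal light can be checked with a toy simulation. The sketch below is an addition, not a model of the actual Narrabri signal chain, and the value g = 0.6 assumed for the degree of coherence is arbitrary; for two fields sharing a common circular Gaussian component, the normalized intensity-fluctuation correlation comes out close to g^2:

    import numpy as np

    rng = np.random.default_rng(0)
    N, g = 200_000, 0.6                          # samples, assumed |gamma_12|

    def cgauss(n):                               # circular complex Gaussian field samples
        return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

    E_common, E_indep = cgauss(N), cgauss(N)
    A1 = E_common
    A2 = g * E_common + np.sqrt(1 - g**2) * E_indep

    I1, I2 = np.abs(A1)**2, np.abs(A2)**2
    dI1, dI2 = I1 - I1.mean(), I2 - I2.mean()
    corr = np.mean(dI1 * dI2) / (I1.mean() * I2.mean())
    print(corr, g**2)                            # both close to 0.36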
Advantages of the Brown and Twiss interferometer include: larger light-gathering capacity permitting use on dimmer stars; ease of adjusting the time delays of the channels to equality; electronic instead of visual observation; immunity to scintillation; much larger practical separations; and the elimination of the need for a large, sturdy telescope as a mount.
The photoelectric effect has long been evidence for what has been called the "particle" nature of light. Einstein demonstrated that the probability of emission of a photoelectron was proportional to the average intensity of the light, what we have represented by <AA*>, that the kinetic energy of the emitted electron was E = hν - φ, where φ is the work function, and that the emission of photoelectrons occurred instantaneously, however feeble the illumination. It was seen as a kind of collision of a "photon" with an electron, ejecting the electron as the photon was absorbed. A photocathode is called a square law detector because of its dependence on the square of the amplitude. Actually, all this is perfectly well described in quantum mechanics, and there are no surprises. What is incorrect is thinking of photons as classical particles (even classical particles obeying quantum mechanics) instead of constructs reflecting the nature of quantum transitions. Those who thought of photons as marbles, and there were many, thought Brown and Twiss were full of rubbish, since whether the light was coherent or not, the random emission of electrons by "photon collisions" would erase all correlations. The photocurrents of two separate detectors would be uncorrelated whether they were illuminated coherently or incoherently. One would simply have the well-known statistics of photoelectrons. If Brown and Twiss were correct, then quantum mechanics "would be in need of thorough revision," or so they thought.
The experiment that Brown and Twiss performed to verify that correlations could be measured between the outputs of two photomultipliers is shown at the right. The source was a mercury arc, focused on a rectangular aperture, 0.13 x 0.15 mm. The 435.8 nm line was isolated by filters. The photocathodes were 2.65 m from the source, and masked by a 9.0 x 8.5 mm aperture. Since the illumination had reasonable temporal coherence, the two light paths were only made equal to about 1 cm. A horizontal slide allowed one photomultiplier to be moved so that the cathode apertures could be superimposed or separated as seen from the source, varying the degree of coherence from 1 to 0. The electrical bandwidth, determined by the amplifiers, was 3-27 MHz. The output of the multiplier was integrated for periods of about one hour. If repeated today, the experiment could not be done with a laser, because the source incoherence is essential to the effect. The experiment clearly showed that correlation was observed when the cathodes were superimposed, which disappeared when they were separated.
A similar experiment was performed by Brannen and Ferguson in which the coincidences of photoelectrons emitted from two cathodes were observed. No extra coincidences, or correlation, were observed when the cathodes were illuminated coherently, and this, it seemed, proved that the Brown and Twiss interferometer could not work (although, of course, it confounded them by working anyway). Some thought maybe light wasn't described well by quantum mechanics at all, and that the classical theory predicted what was observed. This is very nearly true, since the amplitudes of wave theory include a lot of quantum mechanical characteristics by their very nature. However, light is quite properly and correctly described by quantum mechanics when it is done properly, and not by naive intuition.
With the concurrence of E. M. Purcell, Brown and Twiss showed that the coincidence experiment was much too insensitive to show the effect as the experiment was designed, and instead would have required years of data to show any correlation by photon counting. They later demonstrated correlations using photon counting, resolving the problem. Of course, their method using electronic correlation, as in the stellar interferometer, was much more efficient and gave much better results than photon counting.
Information on this interesting controversy can be found in the References.
The best test of the interferometer would be the measurement of a star of known diameter. However, there are no such stars. Therefore, the only tests are the consistency of repeated measurements. The interferometer measures the angular diameter directly, and the linear diameter depends on knowing the distance, which in many cases is uncertain. All astronomical data is subject to error, revision and misinterpretation, though the current quoted figures always look firm and reliable enough.
The problem with using a terrestrial source for a test is seen from the fact that a source of diameter a mm has an angular diameter of 0.2a" at a distance of 1 km. The maximum angular diameter that the Narrabri interferometer can measure is 0.011", so a source diameter of only 0.05 mm would be required at 1 km, or 5 mm at 100 km. It would be very difficult to push enough light to be seen through such a small aperture!
The angular diameter can be used directly to find the exitance (energy emitted by the stellar surface per unit area) without knowing the distance, and the exitance can be used to find the temperature. Therefore, the interferometer data has been used to refine the temperature scale of the stars, which previously was estimated only from the spectrum. The monochromatic flux F at the surface of a star is related to the monochromatic flux f received outside the Earth's atmosphere by F = 4f/θ^2, where θ is the angular diameter, as illustrated in the diagram. This does not include corrections for interstellar extinction. Then, ∫F dλ = σTe^4, where σ is Stefan's constant.
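As an added sketch of this step, the Sun's well-known numbers (a total flux of about 1361 W/m^2 at the Earth and an angular diameter of about 0.533°) recover its effective temperature:

    import math

    sigma = 5.67e-8                         # Stefan's constant, W m^-2 K^-4

    def t_eff(f_bol, theta_rad):
        F = 4.0 * f_bol / theta_rad**2      # surface exitance, F = 4f/theta^2
        return (F / sigma) ** 0.25

    theta_sun = math.radians(0.533)
    print(t_eff(1361.0, theta_sun))         # about 5770 K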
By 1967, measurements had been made on 15 stars from spectral type B0 to F5, including a number of main sequence stars such as Regulus (3.8), Sirius (1.76), Vega (3.03), Fomalhaut (1.56), Altair (1.65) and Procyon (2.17), for which reliable parallaxes were known. The number in parentheses is the diameter in solar diameters. Measurements could not be made on Betelgeuse, since the mirrors could not be brought closer than 10 m apart, and besides the 6.5 m mirrors would themselves resolve the star, reducing the correlation to a trifle.
E. Hecht and A. Zajac, Optics (Reading, MA: Addison-Wesley, 1974). Section 12.4 covers the application of coherence theory to stellar interferometry. This is also a good reference for the other optical matters discussed above.
M. Born and E. Wolf, Principles of Optics (London: Pergamon Press, 1959). Chapter X treats partial coherence. Section 10.4 is especially relevant to our subject. The Michelson stellar interferometer is covered in Section 7.3.6, pp 270-276, with a mention of the intensity interferometer, which was quite new when this book was written.
A. A. Michelson and F. G. Pease, "Measurement of the Diameter of α Orionis With The Interferometer," Astrophysical J. 53, 249-259 (1920).
R. Hanbury Brown and R. Q. Twiss, "A New Type of Interferometer for Use in Radio Astronomy," Philosophical Magazine (7)45, 663 (1954).
R. Hanbury Brown and R. Q. Twiss, "A Test of a New Type of Stellar Interferometer on Sirius," Nature 178, 1046-1048 (1956).
E. Brannen and H. I. S. Ferguson, "The Question of Correlation Between Photons in Coherent Light Rays," Nature 178, 481-482 (1956).
R. Hanbury Brown and R. Q. Twiss, "The Question of Correlation Between Photons in Coherent Light Rays," Nature 178, 1447-1448 (1956).
E. M. Purcell, (Same title as previous reference) Nature 178, 1449-1450 (1956).
R. Q. Twiss, A. G. Little, and R. Hanbury Brown, "Correlation Between Photons, in Coherent Beams of Light, Detected by a Coincidence Counting Technique," Nature 180, 324-326 (1957).
R. Hanbury Brown, "The Stellar Interferometer at Narrabri Observatory," Sky and Telescope, 27(2) August 1964, 64-69.
R. Hanbury Brown, J. Davis and L. R. Allen, "The Stellar Interferometer at Narrabri Observatory, I and II," Monthly Notices of the Royal Astronomical Society, 137, 375-417 (1967).
R. Hanbury Brown, "Measurement of Stellar Diameters," Annual Reviews of Astronomy and Astrophysics, 1968, 13-38. With bibliography.
_________, "Star Sizes Measured," Sky and Telescope 38(3), March 1968, 1 and 155.
Composed by J. B. Calvert
Created 25 September 2002
Last revised 19 November 2008 | http://mysite.du.edu/~jcalvert/astro/starsiz.htm | 13 |
73 | On 1 October 1942, the Bell XP-59A, America's first jet plane, took to the air over a remote area of the California desert. There were no official NACA representatives present. The NACA, in fact, did not even know the aircraft existed, and the engine was based entirely on a top secret British design. After the war, the failure of the United States to develop jet engines, swept wing aircraft, and supersonic designs was generally blamed on the NACA. Critics argued that the NACA, as America's premier aeronautical establishment (one which presumably led the world in successful aviation technology) had somehow allowed leadership to slip to the British and the Germans during the late 1930s and during World War II.
In retrospect, the NACA record seems mixed. There were some areas, such as gas turbine technology, in which the United States clearly lagged, although NACA researchers had begun to investigate jet propulsion concepts. There were other areas, such as swept wing designs and supersonic aircraft, in which the NACA had made important forward steps. Unfortunately, the lack of advanced propulsion systems, such as jet engines, made such investigations academic exercises. The NACA's forward steps undeniably trailed the rapid strides made in Europe.
During the 1930s, aircraft speeds of 300-350 MPH represented the norm and designers were already thinking about planes able to fly at 400-450 MPH. At such speeds, the prospect of gas turbine propulsion became compelling. With a piston engine, the efficiency of the propeller began to fall off at high speeds, and the propeller itself represented a significant drag factor. The problem was to obtain sufficient research and development funds for what seemed to be unusually exotic gas turbine power plants.
In England, RAF officer Frank Whittle doggedly pursued research on gas turbines through the 1930s, eventually acquiring some funding through a private investment banking firm after the British Air Ministry turned him down. Strong government support finally materialized on the eve of World War II, and the single-engine Gloster experimental jet fighter flew in the spring of 1941. English designers leaned more toward the centrifugal-flow jet engine, a comparatively uncomplicated gas-turbine design, and a pair of these power plants equipped the Gloster Meteor of 1944. Although Meteors entered RAF squadrons before the end of the war and shot down German V-1 flying bombs, the only jet fighter to fly in air-to-air combat came from Germany --the Me-262. Hans von Ohain, a researcher in applied physics and aerodynamics at the University of Gottingen, had unknowingly followed a course of investigation that paralleled Whittle's work and took out a German patent on a centrifugal engine in 1934. Research on gas turbine engines evolved from several other sources shortly thereafter, and the German Air Ministry, using funds from Hitler's rearmament program, earmarked more money for this research. Although a centrifugal type powered the world's first gas turbine aircraft flight by the He-178 in 1939, the axial-flow jet, more efficient and capable of greater thrust, was used in the Me-262 fighters that entered service in the autumn of 1944.
In America, the idea of jet propulsion had surfaced as early as 1923, when an engineer at the Bureau of Standards wrote a paper on the subject, which was published by the NACA. The paper came to a negative conclusion: fuel consumption would be excessive; compressor machinery would be too heavy; high temperatures and high pressures were major barriers. These were assumptions that subsequent studies and preliminary investigations seemed to substantiate into the 1930s. By the late 1930s, the Langley staff became interested in the idea of a form of jet propulsion to augment power for military planes for takeoff and during combat. In 1940, Eastman Jacobs and a small staff came up with a jet propulsion test bed they called the "Jeep." This was a ducted-fan system, using a piston engine power plant to combine the engine's heat and exhaust with added fuel injection for brief periods of added thrust, much like an afterburner. A test rig was in operation during the spring of 1942. By the summer, however, the Jeep had grown into something else --a research aircraft for transonic flight. With Eastman Jacobs again, a small team made design studies of a jet plane having the ducted fan system completely closed within the fuselage, similar to the Italian Caproni-Campini plane that flew in 1942. Although work on the Jeep and the jet plane design continued into 1943, these projects had already been overtaken by European developments.
During a tour to Britain in April 1941, General H. H. "Hap" Arnold, Chief of the U.S. Army Air Forces, was dumbfounded to learn about a British turbojet plane, the Gloster E28/39. The aircraft had already entered its final test phase and, in fact, made its first flight the following month. Fearing a German invasion, the British were willing to share the turbojet technology with America. That September, an Air Force Major, with a set of drawings manacled to his wrist, flew from London to Massachusetts, where General Electric went to work on an American copy of Whittle's turbojet. An engine, along with Whittle himself, followed. Development of the engine and design of the Bell XP-59 was so cloaked in secrecy that the NACA learned nothing about them until the summer of 1943. Moreover, design of the Lockheed XP-80, America's first operational jet fighter, was already under way.
General Arnold may have lost confidence in the NACA's potential for advanced research when he stumbled onto the British turbojet plane. It may be that British and American security requirements were so strict that the risks of sharing information with the civilian agency, where the risk of leaks was magnified, justified Arnold's decision to exclude the NACA. The answers were not clear. In any case, the significance of turbojet propulsion and rising speeds magnified the challenges of transonic aerodynamics. This was an area where the NACA had been at work for some years, though not without influence from overseas.
Shaping New Wings
As information on advanced aerodynamics began to trickle out of defeated
Germany, American engineers were impressed. Photographs of some of the
startling German aircraft, like the bat-like Me-163 rocket powered interceptor
and the improbable Junkers JU-287 jet bomber, with its forward swept wings,
prompted critics to ask why American designs appeared to lag behind the
Germans. It seemed to be the story of the turbojet again. The vaunted NACA
had let advanced American flight research fall precariously behind during
the war. True, the effect of wartime German research made an impact on
postwar American development of swept wings, leading to high performance
jet bombers like the Boeing B-47 and the North American F-86 jet fighter.
It is also the case that American engineers, including NACA personnel,
had already made independent progress along the same design path when the
German hardware and drawings were turned up at the end of World War II.
|The North American F-86 Sabre featured swept wing and tail surfaces. The plane shown here was fitted with special instrumentation for transonic flight research conducted by the Ames Laboratory.|
Like several other chapters in the story of high speed flight, this one began in Europe, where an international conference on high speed flight--the Volta Congress--met in Rome during October 1935. Among the participants was Adolf Busemann, a young German engineer from Lubeck. As a youngster, he had watched innumerable ships navigating Lubeck's harbor, each vessel moving within the V-shaped wake trailing back from the bow. As an aeronautical engineer, he found in this image a clue that led him to consider designing an airplane with swept wings. At supersonic speeds, the wings would function effectively inside the shock waves stretching back from the nose of the airplane. In the paper Busemann presented at the Rome conference, he analyzed this phenomenon and predicted that his "arrow wing" would have less drag than straight wings exposed to the shock waves.
There was polite discussion of Busemann's paper, but little else, since propeller-driven aircraft of the 1930s lacked the performance to merit serious consideration of such a radical design. Within a decade, the evolution of the turbojet dramatically changed the picture. In 1942, designers for the Messerschmitt firm, builders of the remarkable Me-262 jet fighter, realized the potential of swept wing aircraft and studied Busemann's paper more intently. Following promising wind tunnel tests, Messerschmitt had a swept wing research plane under development, but the war ended before the plane was finished.
In the United States, progress toward swept wing design proceeded independently of the Germans, although admittedly behind them. The American chapter of the swept wing story originated with Michael Gluhareff, a graduate of the Imperial Military Engineering College in Russia during World War I. He fled the Russian revolution and gained aeronautical engineering experience in Scandinavia. Gluhareff arrived in the United States in 1924 and joined the company of another Russian compatriot, Igor Sikorsky. By 1935, he was chief of design for Sikorsky Aircraft and eventually became a major figure in developing the first practical helicopter. In the meantime, Gluhareff became fascinated by the possibilities of low-aspect ratio tailless aircraft and built a series of flying models in the late 1930s. In a memo to Sikorsky in 1941, he described a possible pursuit-interceptor having a delta-shaped wing swept back at an angle of 56 degrees. The reason, he wrote, was to achieve "a considerable delay in the action (onset) of the compressibility effect. The general shape and form of the aircraft is, therefore, outstandingly adaptable for extremely high speeds."
Eventually, a wind tunnel model was built; initial tests were encouraging. But the Army declined to follow up due to several other unconventional projects already under way. Fortunately, a business associate of Gluhareff kept the concept alive by using the Dart design, as it was called, as the basis for an air-to-ground glide bomb in 1944. This time, the Army was intrigued and asked the NACA to evaluate the project. Thus, a balsa model of the Dart, along with some data, wound up on the desk of Robert T. Jones, a Langley aerodynamicist.
Jones was a bit of a maverick. A college dropout, he signed on as a mechanic for a barnstorming outfit known as the Marie Meyer Flying Circus. Jones became a self-taught aerodynamicist who couldn't find a job during the 1930s depression. He moved to Washington, D.C., and worked as an elevator operator in the Capitol. There he met a congressman who paid Jones to tutor him in physics and mathematics. Impressed by Jones's abilities, the legislator got him into a Works Projects Administration program that led to a job at Langley in 1934. With his innate intelligence and impressive intuitive abilities, Jones quickly moved ahead in the NACA hierarchy.
Studying Gluhareff's model, Jones soon realized that the lift and drag figures for the Dart were based on outmoded calculations for wings of high-aspect ratio. Using more recent theory for low-aspect ratio shapes, backed by some theoretical work done by Max Munk, Jones suddenly had a breakthrough. Within the shock cone created at supersonic speeds, he realized that the Dart's swept wing would remain free of shock waves at given speeds. The flow of air around the wings remained subsonic; compressibility effects would occur at higher Mach numbers than previously thought (Mach 1 equals the speed of sound; the designation is named after the Austrian physicist, Ernst Mach).
The concept of wings with subsonic sweep came to Jones in January 1945, and he eagerly discussed it with Air Force and NACA colleagues during the next few weeks. Finally, he was confident enough to make a formal statement to the NACA chieftains. On 5 March 1945, he wrote to the NACA's director of research, George W. Lewis. "I have recently made a theoretical analysis which indicates that a V-shaped wing traveling point foremost would be less affected by compressibility than other planforms," he explained. "In fact, if the angle of the V is kept small relative to the Mach angle, the lift and center of pressure remain the same at speeds both above and below the speed of sound."
So much for theory. Only testing would provide the data to make or break Jones's theory. Langley personnel went to work, fabricating two small models to see what would happen. Technicians mounted the first model on the wing of a P-51 Mustang. The plane's pilot took off and climbed to a safe altitude before nosing over into a high-speed dive towards the ground. In this attitude, the accelerated flow of air over the Mustang's wing was supersonic, and the instrumented model on the plane's wing began to generate useful data. For wind tunnel tests, the second model was truly a diminutive article, crafted of sheet steel by Jones and two other engineers. Langley's supersonic tunnel had a 9-inch throat, so the model had a 1.5-inch wingspan, in the shape of a delta. The promising test results, issued 11 May 1945, were released before Allied investigators in Europe had the opportunity to interview German aerodynamicists on delta shapes and swept wing developments.
Jones was already at work on variations of the delta, including his own version of the swept wing configuration. Late in June 1945, he published a summary of this work as NACA Technical Note Number 1033. Jones suggested that the proposed supersonic plane under development should have swept wings, but designers opted for a more conservative approach. Other design staffs were fascinated by the promise of swept wings especially after the appearance of the German aerodynamicists in America.
The Germans arrived courtesy of "Operation Paperclip," a high-level government plan to scoop up leading German scientists and engineers during the closing months of World War II. Adolf Busemann eventually wound up at NACA's Langley laboratory, and scores of others joined Air Force, Army, and contractor staffs throughout the United States. Information from the research done by Robert Jones had begun to filter through the country's aeronautical community before the Germans arrived. Their presence, buttressed by the obvious progress represented by advanced German aircraft produced by 1945, bestowed the imprimatur of proof to swept wing configurations. At Boeing, designers at work on a new jet bomber tore up sketches for a conventional plane with straight wings and built the B-47 instead. With its long, swept wings, the B-47 launched Boeing into a remarkably successful family of swept wing bombers and jet airliners. At North American, a conventional jet fighter with straight wings, the XP-46, went through a dramatic metamorphosis, eventually taking to the air as the famed F-86 Sabre, a swept wing fighter that racked up an enviable combat record during the Korean conflict in the 1950s.
Nonetheless, America had been demonstrably lagging in jets and swept wing aircraft in 1945, and the NACA was the target of criticism from postwar Congressional and Air Force committees. It may have been that the NACA was not as bold as it might have been or that the agency was so caught up in immediate wartime improvements that crucial areas of basic research received short shrift. There were administrative changes to respond to these issues. In any case, as historian Alex Roland noted in his study of the NACA, Model Research (1985), its shortcomings "should not be allowed to mask its real significant contributions to American aerial victory in World War II." Moreover, the NACA's postwar achievements in supersonic research and rapid transition into astronautics reflected a new vigor and momentum.
The Sonic Barrier
During World War II, the increasing speeds of fighter aircraft began to create new problems. The Lockheed P-38 Lightning, for example, could exceed 500 MPH in a dive. In 1941, a Lockheed test pilot died when shock waves from the plane's wings (where the air flow over the wings reached 700 MPH) created turbulence that tore away the horizontal stabilizer, sending the plane into a fatal plunge. From wind tunnel tests, researchers knew something about the shock waves occurring at Mach 1, the speed of sound. The phenomenon was obviously attended by danger. Pilots and aerodynamicists alike muttered about the threatening dimensions of what came to be called the sound barrier.
Researchers faced a dilemma. In wind tunnels, with models exposed to near-sonic velocities, shock waves began bouncing from the tunnel walls, the "choking" phenomenon, resulting in questionable data. In the meantime, high speed combat maneuvers brought additional reports of control loss due to turbulence and, in several cases, crashes involving planes whose tails had wrenched loose in a dive. Since data from wind tunnels remained unreliable, researchers proposed a new breed of research plane to probe the sound barrier. Two of the leaders were Ezra Kotcher, a civilian on the Air Force payroll, and John Stack, on the NACA staff at Langley.
By 1944, John Stack and his NACA research team proposed a jet powered aircraft, a conservative, safe approach to high speed flight tests. Kotcher's group wanted a rocket engine which was more dangerous, with explosive fuels aboard, but more likely to achieve the high velocity to reach the speed of sound. The Air Force had the funds, so Stack and his colleagues agreed. The next problem involved design and construction of the rocket plane.
Eventually, the contract went to Bell Aircraft Corporation in Buffalo, New York. The company had a reputation for unusual designs, including the first American jet, the XP-59A Airacomet. The designer was Robert J. Woods, who had worked with John Stack at Langley in the 1920s before he joined Bell Aircraft. Woods had close contacts with the NACA as well as the Air Force. During a casual visit to Kotcher's office at Wright Field, Woods agreed to design a research plane capable of reaching 800 MPH at an altitude of 35,000 feet. Woods then called his boss, Lawrence Bell, to break the news. "What have you done?" Bell lamented, only half in jest.
The Bell design team worked closely with the Air Force and the NACA. This was the first time that the Langley staff had been involved in the initial design and construction of a complex research plane. Even with the Air Force bearing the cost and sharing the research load, this sort of collaboration marked a significant departure in NACA procedures. For the most part, design issues were amicably resolved, although some questions caused heated exchanges. The wing design was one such controversy.
There was general agreement that the wings would be thinner than normal in order to delay the formation of shock waves. In conventional designs, this was expressed as a numerical figure (usually between 12 and 15 percent) which was the ratio of the wing's thickness to its chord. One group of NACA researchers advocated a 10 percent wing for the new plane, while others argued for an 8 percent thickness in order to forestall the effect of shock waves even more. One of Langley's resident experts on wing design finally made a thorough analysis of the issue and advised the 8 percent thickness as the most promising to achieve supersonic speed. As the design of the plane progressed, Bell's engineers came up with a plane that measured only 31 feet long with a wingspan of just 28 feet. Stresses on the remarkably short wing were estimated at twice the levels for high performance fighters of the day. Fortunately, Bell's designers realized that thickening the aluminum skin of the wings would result in a robust structure. Consequently, the skin thickness at the wing root measured .5 inch compared to .10-inch thick wing skin on a conventional fighter.
Research at Langley influenced other aspects of the design. Realizing that turbulence from the wing might create control problems around the tail, John Stack advised Bell to place the horizontal stabilizer on the fin, above the turbulent flow. He also recommended a stabilizer that was thinner than the wing, ensuring that shock waves would not form on the wing and tail at the same time, thereby improving the pilot's control over the accelerating aircraft. In making these decisions, the design team recognized that not much was known about the flight speeds for which the plane was intended. On the other hand, there was some interesting aerodynamic information available on the .50 caliber bullet, so the fuselage shape was keyed to ballistics data from this unlikely source. The cockpit was installed under a canopy that matched the rounded contours of the fuselage, since a conventional design atop the fuselage created too much drag.
The engine was one of the few really exotic aspects of the supersonic plane. Jet engines under development fell far short of the required thrust to reach Mach 1, forcing designers to consider rocket engines, a radical new technology for that time. The original engine candidate came from a small Northrop design for a flying wing. The propellants, red fuming nitric acid and aniline, ignited spontaneously when mixed. Curious about this volatile combination, some Bell engineers obtained some samples, put the stuff in a pair of bottles taped together, found some isolated rocks outside the plant, and tossed the bottles into them. They were aghast at the fierce eruption that followed. Considering the consequences to the plane and its pilot in case of a landing accident or a fuel leak, a different propulsion system seemed imperative. They settled on a rocket engine supplied by an outfit aptly named Reaction Motors, Incorporated. The engine burned a mixture of alcohol and distilled water along with liquid oxygen to produce a thrust of 1500 pounds from each of four thrust chambers. Due to limited propellant capacity of the research plane, the design team decided to use a Boeing B-29 Superfortress to carry it to about 25,000 feet. After dropping from the B-29 bomb bay, the pilot would ignite the rocket engine for a high-speed dash; with all its fuel consumed, the plane would have to glide earthward and make a deadstick landing. By this time, the plane was designated the XS-1, for Experimental Sonic 1, soon shortened to X-1 by those associated with it.
Early in 1946, flight trials began. The rocket engine was not ready, so the test crew moved into temporary quarters at Pinecastle Field, near Orlando, Florida. The X-1, painted a bright orange for high visibility, was carried aloft for a series of drop tests. By autumn, the X-1 was transferred to a remote air base in California's Mojave Desert--Muroc Army Air Field, familiarly known as Muroc, after a small settlement on the edge of Rogers Dry Lake. This was the Air Force flight test center, an area of 300 square miles of desolation in the California desert northwest of Los Angeles. Originating as an Air Force bombing and gunnery range, Muroc was a suitably remote location; the concrete-hard lake bed was highly suited for experimental testing. Test aircraft not infrequently made emergency landings, and the barren miles of Rogers Dry Lake allowed these unscheduled approaches from almost any direction. This austere, almost surrealistic desert setting made an appropriate environment for a growing roster of exotic planes based there in the postwar years.
The X-1 arrived under a cloud of gloom from overseas. The British had also been developing a plane to pierce the sound barrier, the de Havilland D.H. 108 Swallow, a swept wing, jet propelled, tailless airplane. Geoffrey de Havilland, a son of the firm's founder, died during a high-speed test of the sleek aircraft in September 1946. The barrier was deadly.
Through the end of 1946 and into the autumn of 1947, one test flight after another took the X-1 to higher speeds, past Mach .85, the region where statistics on subsonic flight more or less faded away. On the one hand, the X-1 test crew felt increasing confidence that their plane could successfully make the historic run. On the other hand, NACA engineers like Walt Williams grudgingly admitted "a very lonely feeling as we began to run out of data."
The Air Force and the NACA put considerable trust in the piloting skills of Captain Charles "Chuck" Yeager, a World War II fighter ace. During the test sequences, he learned to keep his exuberance under control and to acquire a thorough knowledge of the X-1's quirks. On the morning of 14 October 1947, the day of the supersonic dash, Yeager's aggressive spirit helped him overcome the discomfort of two broken ribs, legacy of a horseback accident a few days earlier. A close friend helped the wincing Yeager into the cramped cockpit, then slipped him a length of broom handle so that he could secure the safety latch with his left hand, since the broken ribs on his right side made it too painful to use his right hand. The latch secure, Yeager reported he was ready to go. At 20,000 feet above the desert, the X-1 dropped away from the B-29.
Yeager fired up the four rocket chambers and shot upwards to 42,000 feet. Leveling off, he shut down two of the chambers while making a final check of the plane's readiness. Already flying at high speed, Yeager fired a third chamber and watched the instruments jump as buffeting occurred. Then the flight smoothed out; needles danced ahead as the X-1 went supersonic. Far below, test personnel heard a loud sonic boom slap across the desert. The large data gap mentioned by Walt Williams had just been filled in.
A need for high-speed wind tunnel tests still existed. In the 7 x 10-foot tunnel at Langley, technicians built a hump in the test section; as the air stream accelerated over the hump, models could be tested at Mach 1.2 before the "choking" phenomenon occurred. A research program came up with the idea of absorbing the shock waves by means of longitudinal openings, or slots, in the test section. The slotted-throat tunnel became a milestone in wind tunnel evolution, permitting a full spectrum of transonic flow studies. In another high-speed test program, Langley used rocket-propelled models, launching them from a new test facility at Wallops Island, north of Langley on the Virginia coast. This became the Pilotless Aircraft Research Division (PARD), established in the autumn of 1945. During the next few years, PARD used rocket boosters to make high-speed tests on a variety of models representing new planes under development. These included most of the subsonic and supersonic aircraft flown by the armed services during the decades after World War II. In the 1960s, PARD facilities supported the Mercury, Gemini, and Apollo programs as well.
As full-sized aircraft took to the air, new problems inevitably cropped up. Researchers soon realized that a sharp increase in drag occurred in the transonic region. Slow acceleration through this phase of flight consumed precious fuel and also created control problems. At Langley, Richard T. Whitcomb became immersed in the problem of transonic drag. In the course of his analysis, Whitcomb developed a hunch that the section of an airplane where the fuselage joined the wing was a key to the issue. After listening to some comments by Adolf Busemann on airflow characteristics in the transonic regime, Whitcomb hit upon the answer to the drag problem--the concept of the area rule.
Essentially, the area rule postulated that the cross-section of an airplane
should remain reasonably constant from nose to tail, minimizing disturbance
of the air flow and drag. But the juncture of the wing root to the fuselage
of a typical plane represented a sudden increase in the cross-sectional
area, creating the drag that produced the problems encountered in transonic
flight. Whitcomb's solution was to compensate for this added wing area
by reducing the area of the fuselage. The result was the "wasp-waisted"
look, often called the "Coke bottle" fuselage. Almost immediately, it proved
its value. A new fighter, Convair's XF-102, was designed as a supersonic
combat plane but repeatedly frustrated the efforts of test pilots and aerodynamicists
to achieve its design speed. Rebuilt with an area rule fuselage, the XF-102
sped through the transonic region like a champion; the Coke bottle fuselage
became a feature on many high performance aircraft of the era: the F-106
Delta Dart (successor to the F-102), Grumman F-11, the Convair B-58 Hustler
bomber, and others.
|This group portrait displays typical high-speed research aircraft that made headlines at Muroc Flight Center in the 1950s. The Bell X-1A (lower left) had much the same configuration as the earlier X-1. Joining the X-1A were (clockwise): the Douglas D-558-I Skystreak; the Convair XF-92A; the Bell X-5 with variable sweepback wings; the Douglas D-558-II Skyrocket; the Northrop X-4; and (center) the Douglas X-3|
A succession of X aircraft, designed primarily for flight experiments,
populated the skies above Muroc in a continuous cycle of research and development
(R&D). Two more X-1 aircraft were ordered by the Air Force, followed
by the X-1A and the X-1B, which investigated thermal problems at high speeds.
The Navy used the Muroc flight test area for the subsonic jet-powered Douglas
Skystreak, accumulating air-load measurements unobtainable in early postwar
wind tunnels. The Skystreak was followed by the Douglas Skyrocket, a swept
wing research jet (later equipped with a rocket engine that would surpass
twice the speed of sound for the first time in 1953). The Douglas X-3,
which fell short of expectation for further flight research in the Mach
2 range, nevertheless yielded important design insights on the phenomenon
of inertial coupling (solving a control problem for the North American
F-100 Super Sabre), the structural use of titanium (incorporated in the
X-15 and other subsequent supersonic fighter designs), and data applied
in the design of the Lockheed F-104 Starfighter. The NACA kept involved
throughout these programs. In a number of ways, the X aircraft contributed
substantially to the solution of a variety of high-speed flight conundrums
and enhanced the design of future jet airliners, establishing a record
of consistent progress aside from the speed records that so fascinated the public.
|This photo taken from below the Grumman F-11 Navy fighter illustrates the way in which the area-ruled fuselage was adapted to production aircraft.|
Although much of the NACA's work in this era had to do with military aviation, a good number of aerodynamic lessons were applicable to nonmilitary research planes and to civil aircraft. In the late 1950s, the Air Force began developing the North American XB-70, an unusually complex bomber capable of sustained supersonic flight over long distances. As a high-altitude strategic bomber, the B-70 was eventually displaced by ballistic missiles and a tactical shift to the idea of low-altitude strikes to avoid enemy radars and antiaircraft rockets. The Air Force and the NACA continued to fly the plane for research. Despite the loss of one of the two prototypes in a tragic midair collision involving a chase plane, the remaining XB-70 generated considerable data on long- range, high-altitude supersonic operations. This data was useful in designing new generations of jet transports operating in the transonic region, as well as advanced military aircraft.
Helicopters, introduced into limited combat service at the end of World War II, entered both military and civilian service in the postwar era. The value of helicopters in medical evacuation was demonstrated time and again in Korea, and a variety of helicopter operations proliferated in the late 1950s. The NACA flight-tested new designs to help define handling qualities. Using wind tunnel experience, researchers also developed a series of special helicopter airfoil sections, and a rotor test tower aided research in many other areas.
As usual, NACA researchers also pursued a multifaceted R&D program touching many other aspects of flight. In one project, the NACA installed velocity-gravity-altitude recorders in aircraft flown in all parts of the world. The object was to acquire information about atmospheric turbulence and gusts so that designers could make allowances for such perturbations. At Langley, a Landing Loads Track Facility went into operation, using a hydraulically propelled unit that subjected landing gear to the stresses of repeated landings in a variety of conditions. Another test facility studied techniques in designing pressurized fuselage structures to avoid failures. In the mid-1950s, a rash of such failures in the world's first operational jet airliner, the British-built de Havilland Comet, dramatized the rationale for this kind of testing.
All of this postwar aeronautical activity received respectful and enthusiastic attention from press and public. Although the phenomenon of flight continued to enjoy extensive press coverage, events in the late 1950s suddenly caused aviation to share the limelight with space flight.
Among the legacies of World War II was a glittering array of new technologies spawned by the massive military effort. Atomic energy, radar, antibiotics, radio telemetry, the computer, the large rocket, and the jet engine seemed destined to shape the world's destiny in the next three decades and heavily influence the rest of the century. The world's political order had been drastically altered by the war. Much of Europe and Asia were in ashes. Old empires had crumbled; national economies were tottering perilously. On opposite sides of the world stood the United States and the Soviet Union, newly made into superpowers. It soon became apparent that they would test each other's mettle many times before a balance of power stabilized. And each nation moved quickly to exploit the new technologies.
The atomic bomb was the most obvious and most immediately threatening technological change from World War II. Both superpowers sought the best strategic systems that could deliver the bomb across the intercontinental distances that separated them. Jet-powered bombers were an obvious extension of the wartime B-17 and B-29, and both nations began putting them into service. The intercontinental rocket held great theoretical promise, but seemed much further down the technological road. Atomic bombs were bulky and heavy; a rocket to lift such a payload would be enormous in size and expense. The Soviet Union doggedly went ahead with attempts to build such rockets. The American military temporarily settled upon jet aircraft and smaller research and battlefield rockets. The Army imported Wernher von Braun and the German engineers who had created the wartime V-2 rockets and set them to overseeing the refurbishing and launching of V-2s at White Sands, New Mexico. The von Braun team was later transferred to Redstone Arsenal, Huntsville, Alabama, where it formed the core of the Army Ballistic Missile Agency (ABMA). With its contractor the Jet Propulsion Laboratory (JPL), the Army developed a series of battlefield missiles known as Corporal, Sergeant, and Redstone. The Navy designed and built the Viking research rockets. The freshly independent Air Force started a family of cruise missiles, from the jet Bomarc and Matador battlefield missiles to Snark and the ambitious rocket-propelled Navaho, which were intended as intercontinental weapons.
By 1951 progress on a thermonuclear bomb of smaller dimensions revived interest in the long-range ballistic missile. Two months before President Truman announced that the United States would develop the thermonuclear bomb, the Air Force contracted with Consolidated Vultee Aircraft Corporation (later Convair) to resume study, and then to develop, the Atlas intercontinental ballistic missile, a project that had been dormant for four years. During the next four years three intermediate range missiles (the Army's Jupiter, the Navy's Polaris, and the Air Force's Thor) and a second generation ICBM, the Air Force's Titan, were added to the list of American rocket projects. All were accorded top national priority. Fiscal 1953 saw the Department of Defense (DoD) for the first time spend more than $1 million on missile research, development, and procurement. Fiscal 1957 saw the amount go over the $1 billion mark.
By the mid-1950s NACA had modern research facilities that had cost a total of $300 million, and a staff totaling 7200. Against the background of the "Cold War" between the U.S. and the U.S.S.R. and the national priority given to military rocketry, the NACA's sophisticated facilities inevitably became involved. With each passing year it was enlarging its missile research in proportion to the old mission of aerodynamic research. Major NACA contributions to the military missile programs came in 1955-1957. Materials research led by Robert R. Gilruth at Langley confirmed ablation as a means of controlling the intense heat generated by warheads and other bodies reentering the Earth's atmosphere; H. Julian Allen at Ames demonstrated the blunt-body shape as the most effective design for reentering bodies; and Alfred J. Eggers at Ames did significant work on the mechanics of ballistic reentry.
The mid-1950s saw America's infant space program burgeoning with promise and projects. As part of the U.S. participation in the forthcoming International Geophysical Year (IGY), it was proposed to launch a small satellite into orbit around the Earth. After a spirited design competition between the National Academy of Sciences-Navy proposal (Vanguard) and the ABMA-JPL candidate (Explorer), the Navy design was chosen in September 1955 as not interfering with the high-priority military missile programs, since it would use a new booster based on the Viking research rocket, and having a better tracking system and more scientific growth potential. By 1957 Vanguard was readying its first test vehicles for firing. The U.S.S.R. had also announced it would have an IGY satellite; the space race was extending beyond boosters and payloads to issues of national prestige.
On the military front, space activity was almost bewildering. The missiles were moving toward the critical flight-test phase. Satellite ideas were proliferating, though mostly on a sub-rosa planning basis; after Sputnik these would become Tiros, weather satellite; Transit, navigation satellite; Pioneer lunar probes; Discoverer research satellites; Samos, reconnaissance satellite; Midas, missile early-warning satellite. Payload size and weight were constant problems in all these concepts, with the limited thrust of the early rocket engines. Here the rapid advances in solid-state electronics came to the rescue by reducing volume and weight; with new techniques such as printed circuitry and transistors, the design engineers could achieve new levels of miniaturization of equipment. Even so, heavier payloads were obviously in the offing; more powerful engines had to be developed. So design was begun for several larger engines, topped by the monster F-1 engine, intended to produce eight times the power of the engines that lifted the Atlas, Thor, and Jupiter missiles.
All this activity, however, was still on the drawing board, work bench, or test stand on 4 October 1957, when the "beep, beep" signal from Sputnik 1 was heard around the world. The Soviet Union had orbited the world's first man-made satellite.
The American public's response was swift and widespread. It seemed equally compounded of alarm and chagrin. American certainty that the nation was always number one in technology had been rudely shattered. Not only had the Russians been first, but Sputnik 1 weighed an impressive 183 pounds against Vanguard's intended start at 3 pounds and working up to 22 pounds in later satellites. In a cold war environment, the contrast suggested undefined but ominous military implications.
Fuel for such apprehensions added up rapidly. Less than a month after Sputnik 1,
the Russians launched Sputnik 2, weighing a hefty 1100 pounds
and carrying a dog as passenger. President Eisenhower, trying to dampen
the growing concern, assured the public of our as yet undemonstrated progress
and denied there was any military threat in the Soviet space achievements.
As a counter, the White House announced the impending launch in December
of the first Vanguard test vehicle capable of orbit and belatedly authorized
von Braun's Army research team in Huntsville to try to launch their Explorer-Jupiter
combination. But pressures for dramatic action gathered rapidly. The media
ballyhooed the carefully qualified announcement on Vanguard into great
expectations of America's vindication. On 25 November Lyndon B. Johnson,
Senate majority leader, chaired the first meeting of the Preparedness Investigation
Subcommittee of the Senate Armed Services Committee. The hearings would
review the whole spectrum of American defense and space programs.
|A ball of fire and flying debris mark the explosive failure of the first American attempt to launch a satellite on Vanguard, 6 December 1957.|
Still the toboggan careened downhill. On 6 December 1957, the much-touted
Vanguard test vehicle rose about 3 feet from the launch platform, shuddered,
and collapsed in flames. Its tiny 3-pound payload broke away and lay at
the edge of the inferno, beeping impotently.
|A moment of triumph with the announcement that Explorer I has become the first American satellite to orbit the Earth. Here a duplicate Explorer is held aloft by (left to right) William H. Pickering of JPL, James A. van Allen of the State University of Iowa, and Wernher von Braun of the ABMA.|
Clouds of gloom deepened into the new year. Then, finally, a small rift. On 31 January 1958, an American satellite at last went into orbit. Not Vanguard but the ABMA-JPL Explorer had redeemed American honor. True, the payload weighed only 2 pounds against the 1100 of Sputnik 2. But there was a scientific first; an experiment aboard the satellite reported mysterious saturation of its radiation counters at 594 miles altitude. Professor James A. van Allen, the scientist who had built the experiment, thought this suggested the existence of a dense belt of radiation around the Earth at that altitude. American confidence perked up again on 17 March when Vanguard 1 joined Explorer 1 in orbit.
Meanwhile, in these same tense months, both consensus and competition had been forming on the political front; consensus that an augmented national space program was essential; competition as to who would run such a program, in what form, with what priorities. The DoD, with its component military services, was an obvious front runner; the Atomic Energy Commission, already working with nuclear warheads and nuclear propulsion, had some congressional support, particularly in the Joint Committee on Atomic Energy; and there was NACA.
NACA had devoted more and more of its facilities, budget, and expertise to missile research in the mid- and late 1950s. Under the skillful leadership of James H. Doolittle, chairman, and Hugh L. Dryden, director, the strong NACA research team had come up with a solid, long-term, scientifically based proposal for a blend of aeronautic and space research. Its concept for manned spaceflight, for example, envisioned a ballistic spacecraft with a blunt reentry shape, backed by a world-encircling tracking system, and equipped with dual automatic and manual controls that would enable the astronaut gradually to take over more and more of the flying of his spacecraft. Also NACA offered reassuring experience of long, close working relationships with the military services in solving their research problems, while at the same time translating the research into civil applications. But NACA's greatest political asset was its peaceful, research-oriented image. President Eisenhower and Senator Johnson and others in Congress were united in wanting above all to avoid projecting cold war tensions into the new arena of outer space.
By March 1958 the consensus in Washington had jelled. The administration position (largely credited to James R. Killian in the new post of president's special assistant for science and technology), the findings of Johnson's Senate subcommittee, and the NACA proposal converged. America needed a national space program. The military component would of course be under DoD. But a civil component, lodged in a new agency, technologically and scientifically based, would pick up certain of the existing space projects and forge an expanded program of space exploration in close concert with the military. All these concepts fed into draft legislation. On 2 April 1958, the administration bill for establishing a national aeronautics and space agency was submitted to Congress; both houses had already established select space committees; debate ensued; a number of refinements were introduced; and on 29 July 1958 President Eisenhower signed into law P.L. 85-568, the National Aeronautics and Space Act of 1958.
The act established a broad charter for civilian aeronautical and space research with unique requirements for dissemination of information, absorbed the existing NACA into the new organization as its nucleus, and empowered broad transfers from other government programs. The National Aeronautics and Space Administration came into being on 1 October 1958.
All this made for a very busy spring and summer for the people in the small NACA Headquarters in Washington. Once the general outlines of the new organization were clear, both a space program and a new organization had to be charted. In April, Dryden brought Abe Silverstein, assistant director of the Lewis Laboratory, to Washington to head the program planning. Ira Abbott, NACA assistant director for aerodynamic research, headed a committee to plan the new organization. In August President Eisenhower nominated T. Keith Glennan, president of Case Institute of Technology and former commissioner of the Atomic Energy Commission, to be the first administrator of the new organization, NASA, and Dryden to be deputy administrator. Quickly confirmed by the Senate, they were sworn in on 19 August. Glennan reviewed the planning efforts and approved most. Talks with the Advanced Research Projects Agency identified the military space programs that were space science-oriented and were obvious transfers to the new agency. Plans were formulated for building a new center for space science research, satellite development, flight operations, and tracking. A site was chosen, nearly 500 acres of the Department of Agriculture's research center in Beltsville, Maryland. The Robert H. Goddard Space Flight Center (named for America's rocket pioneer) was dedicated in March 1961. | http://www.hq.nasa.gov/office/pao/History/SP-4406/chap3.html | 13 |
53 | A random variable is called continuous if it can assume all possible values in the possible range of the random variable. Suppose the temperature in a certain city in the month of June in the past many years has always stayed between some lower limit $t_1$ and upper limit $t_2$ centigrade. The temperature can take any value between $t_1$ and $t_2$: it may equal either limit or take any value in between. When we say that the temperature is some particular value $t$, it means only that the temperature lies somewhere in a small interval around $t$; any observation which is taken falls in an interval. There is nothing like an exact observation for a continuous variable. For a discrete random variable the values of the variable are exact, like 0, 1, 2 good bulbs. For a continuous random variable the value of the variable is never an exact point; it is always in the form of an interval, though the interval may be very small.
Some examples of the continuous random variables are:
- The computer time (in seconds) required to process a certain program.
- The time it takes a poultry bird to gain a weight of 1.5 kg.
- The amount of rainfall in a certain city.
- The amount of water passing through a pipe connected with a high level reservoir.
- The heat gained by a ceiling fan when it has worked for one hour.
Probability Density Function:
The probability function of a continuous random variable is called the probability density function, or briefly the p.d.f. It is denoted by $f(x)$, where $f(x)\,dx$ is the probability that the random variable $X$ takes a value between $x$ and $x + dx$, and $dx$ is a very small change in $x$.
If there are two points $a$ and $b$, then the probability that the random variable will take a value between $a$ and $b$ is given by
$$P(a < X < b) = \int_a^b f(x)\,dx.$$
Here $a$ and $b$ are points between $-\infty$ and $+\infty$. The quantity $f(x)\,dx$ is called the probability differential.
The number of possible outcomes of a continuous random variable is uncountably infinite. Therefore, a probability of zero is assigned to each individual point of the random variable; thus $P(X = x) = 0$ for all values of $x$. This means that we must calculate a probability for a continuous random variable over an interval and not for any particular point. This probability can be interpreted as an area under the graph of $f(x)$ over the interval from $a$ to $b$. When we say that the probability is zero that a continuous random variable assumes a specific value, we do not necessarily mean that a particular value cannot occur. We mean, in fact, that the point (event) is one of an infinite number of possible outcomes. Whenever we have to find the probability of some interval of the continuous random variable, we can use either one of these two methods.
- Integral calculus.
- Area by geometrical diagrams (this method is easy to apply when $f(x)$ is a simple linear function).
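As a quick illustration of the first method, the sketch below numerically integrates a density over an interval. The particular density $f(x) = x/8$ on $[0, 4]$ is an assumption chosen only because it is simple and integrates to one; it is not a function taken from the text.

```python
# Numerically computing P(a < X < b) as the area under an assumed density
# f(x) = x/8 on [0, 4] (and zero elsewhere).
from scipy.integrate import quad

def f(x):
    return x / 8.0 if 0.0 <= x <= 4.0 else 0.0

total, _ = quad(f, 0, 4)   # total area: comes out ~1, so f is a valid density
prob, _ = quad(f, 1, 3)    # P(1 < X < 3): area under f between x = 1 and x = 3
print(total)               # ~1.0
print(prob)                # ~0.5, since (3**2 - 1**2)/16 = 0.5
```

Because single points carry zero area, the same value would also serve for $P(1 \le X \le 3)$.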
Properties of Probability Density Function:
The probability density function must have the following properties.
- It is non-negative, i.e. $f(x) \ge 0$ for all $x$.
- The total area under the curve is one, i.e. $\int_{-\infty}^{+\infty} f(x)\,dx = 1$.
- $P(X = c) = 0$, where $c$ is any constant.
- As the probability (area) at any single point $X = c$ is zero, it makes no difference whether the end points of an interval from $a$ to $b$ are included or not. Thus we can write:
$$P(a \le X \le b) = P(a \le X < b) = P(a < X \le b) = P(a < X < b).$$
A continuous random variable $X$, which can assume values between a lower limit $a$ and 8 inclusive, has a density function of the form $f(x) = c\,g(x)$, where $c$ is a constant and $g(x)$ is a given simple function.
(a) Calculate $c$.  (b) Find the probability that $X$ lies in a given sub-interval.  (c) Find the probability that $X$ exceeds a given value.
(a) $f(x)$ will be a density function if (i) $f(x) \ge 0$ for every $x$ and (ii) $\int_a^8 f(x)\,dx = 1$. If $c \ge 0$, $f(x)$ is clearly $\ge 0$ for every $x$ in the given interval. Hence for $f(x)$ to be a density function, we must have $\int_a^8 c\,g(x)\,dx = 1$. | http://www.emathzone.com/tutorials/basic-statistics/continuous-random-variable.html | 13
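A computer algebra sketch of step (a), assuming purely for illustration a density of the form $f(x) = c(x + 3)$ on $2 \le x \le 8$; both the interval and the form are hypothetical stand-ins, and the same two steps apply to any density $c\,g(x)$ on an interval.

```python
# Finding the normalizing constant c for an assumed density f(x) = c*(x + 3)
# on 2 <= x <= 8, then using the normalized density to compute a probability.
import sympy as sp

x, c = sp.symbols('x c', positive=True)
f = c * (x + 3)

# Condition (ii): total probability over the support must equal 1.
c_value = sp.solve(sp.Eq(sp.integrate(f, (x, 2, 8)), 1), c)[0]
print(c_value)                                  # 1/48

f_normalized = f.subs(c, c_value)
print(sp.integrate(f_normalized, (x, 3, 5)))    # P(3 < X < 5) = 7/24
```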
96 | Linear Algebra/Topic: Dimensional Analysis
"You can't add apples and oranges," the old saying goes. It reflects our experience that in applications the quantities have units and keeping track of those units is worthwhile. Everyone has done calculations such as this one that use the units as a check.
However, the idea of including the units can be taken beyond bookkeeping. It can be used to draw conclusions about what relationships are possible among the physical quantities.
To start, consider the physics equation $dist = 16(time)^2$. If the distance is in feet and the time is in seconds then this is a true statement about falling bodies. However it is not correct in other unit systems; for instance, it is not correct in the meter-second system. We can fix that by making the $16$ a dimensional constant:
$$dist = 16\,\frac{\text{ft}}{\text{sec}^2}\cdot(time)^2.$$
For instance, the above equation holds in the yard-second system.
So our first point is that by "including the units" we mean that we are restricting our attention to equations that use dimensional constants.
By using dimensional constants, we can be vague about units and say only that all quantities are measured in combinations of some units of length $L$, mass $M$, and time $T$. We shall refer to these three as dimensions (these are the only three dimensions that we shall need in this Topic). For instance, velocity could be measured in feet per second or meters per second, but in all events it involves some unit of length divided by some unit of time, so the dimensional formula of velocity is $L/T$. Similarly, the dimensional formula of density is $M/L^3$. We shall prefer using negative exponents over the fraction bars and we shall include the dimensions with a zero exponent, that is, we shall write the dimensional formula of velocity as $L^1M^0T^{-1}$ and that of density as $L^{-3}M^1T^0$.
In this context, "You can't add apples to oranges" becomes the advice to check that all of an equation's terms have the same dimensional formula. An example is this version of the falling body equation: $dist - 16(time)^2 = 0$. The dimensional formula of the $dist$ term is $L^1M^0T^0$. For the other term, the dimensional formula of $16$ is $L^1M^0T^{-2}$ ($16$ is the dimensional constant given above as $16\,\text{ft}/\text{sec}^2$) and the dimensional formula of $(time)^2$ is $L^0M^0T^2$, so that of the entire term $16(time)^2$ is $L^1M^0T^0$. Thus the two terms have the same dimensional formula. An equation with this property is dimensionally homogeneous.
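The bookkeeping in the preceding paragraph can be mechanized by representing a dimensional formula $L^aM^bT^c$ as its exponent triple $(a, b, c)$, so that multiplying quantities corresponds to adding triples. The sketch below uses that representation (an illustrative choice, not notation from the text) to check the homogeneity of the falling body equation.

```python
# Dimensional formulas L^a M^b T^c encoded as exponent triples (a, b, c);
# a product of quantities corresponds to the sum of their triples.
def product(*formulas):
    return tuple(sum(exponents) for exponents in zip(*formulas))

dist     = (1, 0, 0)    # L^1 M^0 T^0
const_16 = (1, 0, -2)   # the dimensional constant 16 ft/sec^2
time_sq  = (0, 0, 2)    # (time)^2

other_term = product(const_16, time_sq)
print(other_term)           # (1, 0, 0)
print(other_term == dist)   # True: dist - 16*(time)^2 = 0 is dimensionally homogeneous
```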
Quantities with dimensional formula $L^0M^0T^0$ are dimensionless. For example, we measure an angle by taking the ratio of the subtended arc to the radius,
which is the ratio of a length to a length, and thus angles have the dimensional formula $L^0M^0T^0$.
The classic example of using the units for more than bookkeeping, using them to draw conclusions, considers the formula for the period of a pendulum.
The period is in units of time $L^0M^0T^1$. So the quantities on the other side of the equation must have dimensional formulas that combine in such a way that their $L$'s and $M$'s cancel and only a single $T$ remains. The table below has the quantities that an experienced investigator would consider possibly relevant.
|period of the pendulum $p$|$L^0M^0T^1$|
|length of the string $\ell$|$L^1M^0T^0$|
|mass of the bob $m$|$L^0M^1T^0$|
|acceleration due to gravity $g$|$L^1M^0T^{-2}$|
|arc of swing $\theta$|$L^0M^0T^0$|
The only dimensional formulas involving $L$ are for the length of the string and the acceleration due to gravity. For the $L$'s of these two to cancel, when they appear in the equation they must be in ratio, e.g., as $\ell/g$, or as $g/\ell$, or as $(\ell/g)^2$. Therefore the period is a function of $\ell/g$.
This is a remarkable result: with a pencil and paper analysis, before we ever took out the pendulum and made measurements, we have determined something about the relationship among the quantities.
To do dimensional analysis systematically, we need to know two things (arguments for these are in (Bridgman 1931), Chapter II and IV). The first is that each equation relating physical quantities that we shall see involves a sum of terms, where each term has the form
$$m_1^{p_1}m_2^{p_2}\cdots m_k^{p_k}$$
for numbers $m_1$, ..., $m_k$ that measure the quantities.
For the second, observe that an easy way to construct a dimensionally homogeneous expression is by taking a product of dimensionless quantities or by adding such dimensionless terms. Buckingham's Theorem states that any complete relationship among quantities with dimensional formulas can be algebraically manipulated into a form where there is some function $f$ such that
$$f(\Pi_1, \ldots, \Pi_n) = 0$$
for a complete set $\{\Pi_1, \ldots, \Pi_n\}$ of dimensionless products. (The first example below describes what makes a set of dimensionless products "complete".) We usually want to express one of the quantities, $m_1$ for instance, in terms of the others, and for that we will assume that the above equality can be rewritten
$$\Pi_1 = \hat{f}(\Pi_2, \ldots, \Pi_n)$$
where $\Pi_1$ involves $m_1$ and is dimensionless, and the products $\Pi_2$, ..., $\Pi_n$ don't involve $m_1$ (as with $f$, here $\hat{f}$ is just some function, this time of $n - 1$ arguments). Thus, to do dimensional analysis we should find which dimensionless products are possible.
For example, consider again the formula for a pendulum's period.
By the first fact cited above, we expect the formula to have (possibly sums of terms of) the form $p^{p_1}\ell^{p_2}m^{p_3}g^{p_4}\theta^{p_5}$. To use the second fact, to find which combinations of the powers $p_1$, ..., $p_5$ yield dimensionless products, consider this equation.
$$(L^0M^0T^1)^{p_1}(L^1M^0T^0)^{p_2}(L^0M^1T^0)^{p_3}(L^1M^0T^{-2})^{p_4}(L^0M^0T^0)^{p_5} = L^0M^0T^0$$
It gives three conditions on the powers.
$$p_2 + p_4 = 0\qquad p_3 = 0\qquad p_1 - 2p_4 = 0$$
Note that $p_3$ is $0$ and so the mass of the bob does not affect the period. Gaussian reduction and parametrization of that system gives this
$$\{\,(p_1,\ -\tfrac{1}{2}p_1,\ 0,\ \tfrac{1}{2}p_1,\ p_5)\ \mid\ p_1, p_5 \in \mathbb{R}\,\}$$
(we've taken $p_1$ as one of the parameters in order to express the period in terms of the other quantities).
Here is the linear algebra. The set of dimensionless products contains all terms $p^{p_1}\ell^{p_2}m^{p_3}g^{p_4}\theta^{p_5}$ subject to the conditions above. This set forms a vector space under the "$\vec{+}$" operation of multiplying two such products and the "$\vec{\cdot}$" operation of raising such a product to the power of the scalar (see Problem 5). The term "complete set of dimensionless products" in Buckingham's Theorem means a basis for this vector space.
We can get a basis by first taking $p_1 = 1$, $p_5 = 0$, and then $p_1 = 0$, $p_5 = 1$. The associated dimensionless products are $\Pi_1 = p\,\ell^{-1/2}g^{1/2}$ and $\Pi_2 = \theta$. Because the set is complete, Buckingham's Theorem says that
$$p = \ell^{1/2}g^{-1/2}\cdot\hat{f}(\theta) = \sqrt{\ell/g}\;\hat{f}(\theta)$$
where $\hat{f}$ is a function that we cannot determine from this analysis (a first year physics text will show by other means that for small angles it is approximately the constant function $\hat{f}(\theta) = 2\pi$).
Thus, analysis of the relationships that are possible between the quantities with the given dimensional formulas has produced a fair amount of information: a pendulum's period does not depend on the mass of the bob, and it rises with the square root of the length of the string.
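The reduction above is easy to reproduce with a computer algebra system. In the sketch below, each column of the matrix holds the $L$, $M$, $T$ exponents of one quantity, in the order period, string length, bob mass, gravity, swing angle; its null space is exactly the set of exponent vectors of dimensionless products. The code is an illustrative aside, not part of the original development.

```python
# Null space of the pendulum dimension matrix = exponents of dimensionless products.
# Columns: period p, length l, mass m, gravity g, angle theta.  Rows: L, M, T.
import sympy as sp

A = sp.Matrix([
    [0, 1, 0,  1, 0],   # L exponents
    [0, 0, 1,  0, 0],   # M exponents
    [1, 0, 0, -2, 0],   # T exponents
])

for v in A.nullspace():
    print(v.T)
# Prints the exponent vectors (2, -1, 0, 1, 0) and (0, 0, 0, 0, 1).
# The first is p**2 * g / l, a power of the product p*sqrt(g/l) found above
# (scaling an exponent vector only raises the product to a power);
# the second is the angle alone.  The 0 in the mass slot repeats the
# observation that the bob's mass drops out.
```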
For the next example we try to determine the period of revolution of two bodies in space orbiting each other under mutual gravitational attraction. An experienced investigator could expect that these are the relevant quantities.
|period of revolution $p$|$L^0M^0T^1$|
|mean separation $r$|$L^1M^0T^0$|
|first mass $m_1$|$L^0M^1T^0$|
|second mass $m_2$|$L^0M^1T^0$|
|gravitational constant $G$|$L^3M^{-1}T^{-2}$|
To get the complete set of dimensionless products we consider the equation
$$(L^0M^0T^1)^{p_1}(L^1M^0T^0)^{p_2}(L^0M^1T^0)^{p_3}(L^0M^1T^0)^{p_4}(L^3M^{-1}T^{-2})^{p_5} = L^0M^0T^0$$
which results in a system
$$p_2 + 3p_5 = 0\qquad p_3 + p_4 - p_5 = 0\qquad p_1 - 2p_5 = 0$$
with this solution.
$$\{\,(p_1,\ -\tfrac{3}{2}p_1,\ \tfrac{1}{2}p_1 - p_4,\ p_4,\ \tfrac{1}{2}p_1)\ \mid\ p_1, p_4 \in \mathbb{R}\,\}$$
As earlier, the linear algebra here is that the set of dimensionless products of these quantities forms a vector space, and we want to produce a basis for that space, a "complete" set of dimensionless products. One such set, gotten from setting $p_1 = 1$ and $p_4 = 0$, and also setting $p_1 = 0$ and $p_4 = 1$, is $\{\Pi_1 = p\,r^{-3/2}m_1^{1/2}G^{1/2},\ \Pi_2 = m_2/m_1\}$. With that, Buckingham's Theorem says that any complete relationship among these quantities is stateable in this form.
$$p = \hat{f}(m_2/m_1)\cdot r^{3/2}m_1^{-1/2}G^{-1/2} = \frac{r^{3/2}}{\sqrt{G m_1}}\,\hat{f}(m_2/m_1)$$
Remark. An important application of the prior formula is when $m_1$ is the mass of the sun and $m_2$ is the mass of a planet. Because $m_1$ is very much greater than $m_2$, the argument to $\hat{f}$ is approximately $0$, and we can wonder whether this part of the formula remains approximately constant as $m_2$ varies. One way to see that it does is this. The sun is so much larger than the planet that the mutual rotation is approximately about the sun's center. If we vary the planet's mass by a factor of $k$ (e.g., Venus's mass is about $0.82$ times Earth's mass), then the force of attraction is multiplied by $k$, and $k$ times the force acting on $k$ times the mass gives, since $F = ma$, the same acceleration, about the same center (approximately). Hence, the orbit will be the same and so its period will be the same, and thus the right side of the above equation also remains unchanged (approximately). Therefore, $\hat{f}(m_2/m_1)$ is approximately constant as $m_2$ varies. This is Kepler's Third Law: the square of the period of a planet is proportional to the cube of the mean radius of its orbit about the sun.
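As a rough numerical sanity check of the formula above, one can take $\hat{f}$ to be the constant $2\pi$ (the value elementary mechanics gives for circular orbits) and substitute rounded reference values for $G$, the sun's mass, and the Earth-sun distance; these constants come from outside the text and are used here only for illustration.

```python
# Rough check of p = f_hat(m2/m1) * r**(3/2) / sqrt(G*m1) with f_hat taken to be 2*pi.
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2 (rounded)
m_sun = 1.989e30    # kg (rounded)
r     = 1.496e11    # mean Earth-sun distance, m (rounded)

period = 2 * math.pi * r**1.5 / math.sqrt(G * m_sun)
print(period / 86400)   # ~365 days, as expected for Earth's orbit
```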
The final example was one of the first explicit applications of dimensional analysis. Lord Rayleigh considered the speed of a wave in deep water and suggested these as the relevant quantities.
|velocity of the wave $v$|$L^1M^0T^{-1}$|
|density of the water $d$|$L^{-3}M^1T^0$|
|acceleration due to gravity $g$|$L^1M^0T^{-2}$|
|wavelength $\lambda$|$L^1M^0T^0$|
Considering
$$(L^1M^0T^{-1})^{p_1}(L^{-3}M^1T^0)^{p_2}(L^1M^0T^{-2})^{p_3}(L^1M^0T^0)^{p_4} = L^0M^0T^0$$
gives this system
$$p_1 - 3p_2 + p_3 + p_4 = 0\qquad p_2 = 0\qquad -p_1 - 2p_3 = 0$$
with this solution space
$$\{\,(p_1,\ 0,\ -\tfrac{1}{2}p_1,\ -\tfrac{1}{2}p_1)\ \mid\ p_1 \in \mathbb{R}\,\}$$
(as in the pendulum example, one of the quantities, here the density, turns out not to be involved in the relationship). There is one dimensionless product in a complete set, $\Pi_1 = v\,g^{-1/2}\lambda^{-1/2}$, and so $v$ is $\sqrt{g\lambda}$ times a constant ($\hat{f}$ is constant since it is a function of no arguments).
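The same null-space computation as in the pendulum sketch confirms both conclusions at once; the column order (velocity, density, gravity, wavelength) matches the table above, and the code is again only an illustrative aside.

```python
# Dimension matrix for the deep-water wave quantities; rows are L, M, T exponents,
# columns are velocity v, density d, gravity g, wavelength lambda.
import sympy as sp

A = sp.Matrix([
    [ 1, -3,  1, 1],   # L
    [ 0,  1,  0, 0],   # M
    [-1,  0, -2, 0],   # T
])

print(A.nullspace()[0].T)
# Prints the single exponent vector (-2, 0, 1, 1), i.e. the product g*lambda/v**2.
# The 0 in the density slot shows the density drops out, and setting this product
# equal to a constant and solving for v gives v = constant * sqrt(g*lambda).
```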
As the three examples above show, dimensional analysis can bring us far toward expressing the relationship among the quantities. For further reading, the classic reference is (Bridgman 1931)—this brief book is delightful. Another source is (Giordano, Wells & Wilde 1987). A description of dimensional analysis's place in modeling is in (Giordano, Jaye & Weir 1986).
- Problem 1
Consider a projectile, launched with initial velocity $v_0$, at an angle $\theta$. An investigation of this motion might start with the guess that these are the relevant quantities. (de Mestre 1990)
|horizontal position $x$|$L^1M^0T^0$|
|vertical position $y$|$L^1M^0T^0$|
|initial speed $v_0$|$L^1M^0T^{-1}$|
|angle of launch $\theta$|$L^0M^0T^0$|
|acceleration due to gravity $g$|$L^1M^0T^{-2}$|
|time in flight $t$|$L^0M^0T^1$|
- Show that $\{x/(v_0t),\ y/(v_0t),\ \theta,\ gt/v_0\}$ is a complete set of dimensionless products. (Hint. This can be done by finding the appropriate free variables in the linear system that arises, but there is a shortcut that uses the properties of a basis.)
- These two equations of motion for projectiles are familiar: $x = v_0t\cos\theta$ and $y = v_0t\sin\theta - \frac{g}{2}t^2$. Manipulate each to rewrite it as a relationship among the dimensionless products of the prior item.
- Problem 2
- Einstein (Einstein 1911) conjectured that the infrared characteristic frequencies of a solid may be determined by the same forces between atoms as determine the solid's ordinary elastic behavior. The relevant quantities are
|number of atoms per cubic cm|
|mass of an atom|
Show that there is one dimensionless product. Conclude that, in any complete relationship among quantities with these dimensional formulas, is a constant times . This conclusion played an important role in the early study of quantum phenomena.
- Problem 3
The torque produced by an engine has dimensional formula . We may first guess that it depends on the engine's rotation rate (with dimensional formula ), and the volume of air displaced (with dimensional formula ) (Giordano, Wells & Wilde 1987).
- Try to find a complete set of dimensionless products. What goes wrong?
- Adjust the guess by adding the density of the air (with dimensional formula ). Now find a complete set of dimensionless products.
- Problem 4
Dominoes falling make a wave. We may conjecture that the wave speed depends on the spacing between the dominoes, the height of each domino, and the acceleration due to gravity. (Tilley)
- Find the dimensional formula for each of the four quantities.
- Show that is a complete set of dimensionless products.
- Show that if is fixed then the propagation speed is proportional to the square root of .
- Problem 5
Prove that the dimensionless products form a vector space under the operation of multiplying two such products and the operation of raising such a product to the power of a scalar. (The vector arrows are a precaution against confusion.) That is, prove that, for any particular homogeneous system, this set of products of powers of , ...,
is a vector space under:
(assume that all variables represent real numbers).
- Problem 6
The advice about apples and oranges is not right. Consider the familiar equations for a circle, C = 2πr and A = πr².
- Check that and have different dimensional formulas.
- Produce an equation that is not dimensionally homogeneous (i.e., it adds apples and oranges) but is nonetheless true of any circle.
- The prior item asks for an equation that is complete but not dimensionally homogeneous. Produce an equation that is dimensionally homogeneous but not complete.
(Just because the old saying isn't strictly right, doesn't keep it from being a useful strategy. Dimensional homogeneity is often used as a check on the plausibility of equations used in models. For an argument that any complete equation can easily be made dimensionally homogeneous, see (Bridgman 1931), Chapter I, especially page 15.)
- Bridgman, P. W. (1931), Dimensional Analysis, Yale University Press .
- de Mestre, Neville (1990), The Mathematics of Projectiles in Sport, Cambridge University Press .
- Giordano, R.; Jaye, M.; Weir, M. (1986), "The Use of Dimensional Analysis in Mathematical Modeling", UMAP Modules (COMAP) (632) .
- Giordano, R.; Wells, M.; Wilde, C. (1987), "Dimensional Analysis", UMAP Modules (COMAP) (526) .
- Einstein, A. (1911), Annals of Physics 35: 686 .
- Tilley, Burt, Private Communication . | http://en.m.wikibooks.org/wiki/Linear_Algebra/Topic:_Dimensional_Analysis | 13 |
104 | The purpose of this activity is to provide an alternate method of solving systems of equations using the process of numerical iteration. This is not necessarily intended to always be an easier method of solving systems and, in fact, it sometimes complicates the problems. The iterative process does involve some very beautiful mathematics and it occasionally offers a method of solution to systems that are otherwise analytically impossible to solve. The first activity presents the algorithm for the solution of any system of two linear equations and is suitable for Algebra I or Algebra II students. The second activity generalizes the algorithm to systems of equations in which one of the equations is linear and the other is any continuous and invertible function. The second part requires that students be familiar with inverses of various standard functions and is probably most appropriate for Algebra II or Trigonometry. Transformational geometry is used to justify the algorithms.
We will illustrate this technique with the following system:

y = x
y = .5x + 1
Notice that one of the functions in the system is the identity function. The reason for this requirement stems from the process involved in iteration. Allow students to pick any initial value of x that they want to try. This value of x is called a seed and is denoted by x₀. For this example, we will let x₀ = 5. We now iterate our function using the non-identity equation in our system. Substitute x₀ into y = .5x+1 to obtain x₁ = .5(5)+1 = 3.5. Similarly, x₂ = .5(x₁)+1 = .5(3.5)+1 = 2.75. Continuing this process, we develop the sequence:
x₀ = 5, x₁ = 3.5, x₂ = 2.75, x₃ = 2.375, x₄ = 2.1875, x₅ = 2.09375, . . .
An easy way to generate this sequence of numbers using a TI calculator would be to first type the seed into your calculator and then press the ENTER key. This stores the seed value into the Ans memory location in the TI. The iteration can be performed by typing .5*Ans + 1 and repeatedly pressing the ENTER key. If the process is continued, you should notice that the numbers on the calculator converge relatively quickly to a value of two. In terms of our sequence, this is to say that the limit of our iterative process is 2. From the first equation in our system, we know that the x- and y-values of the solution are identical and therefore our system has the solution (2,2).
The most amazing thing about this process is that it does not matter what initial value was chosen for the seed. Any chosen value for x₀ will eventually converge to 2 and therefore lead to the solution point (2,2).
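For readers without a TI handy, the same Ans-style loop takes only a few lines of Python. This is just a sketch of the keystrokes described above; the seeds tried at the end are arbitrary choices.

x = 5                      # the seed; any starting value works
for _ in range(25):
    x = 0.5 * x + 1        # the non-identity equation, applied over and over
    print(x)               # 3.5, 2.75, 2.375, ... closing in on 2

for seed in (-100, 0, 3.7):            # different seeds, same limit
    x = seed
    for _ in range(60):
        x = 0.5 * x + 1
    print(seed, "->", x)               # each ends up at (essentially) 2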
Does this process always work? Try each of the following systems with several different seed values.
The iteration of (2) should have developed a sequence converging to -8/5. Unless you made some lucky guesses (see problem 3 at the end of the article), the sequences for (3) and (4) went to +∞ or -∞, depending on your choices for x₀.
The obvious question at this point is: Why do some systems converge while others do not? The answer to this lies in understanding the nature of fixed points in an iterative operation. Simply put, when iterating a system consisting of the identity function and one other function, the intersection of two graphs is called a fixed point and is attracting if the absolute value of the slope of the non-identity function at the point of intersection is less than one and is repelling if the slope at the fixed point is greater than one. In terms of our sequences, this means that if the absolute slope of the non-identity line is less than one, then we can solve our system by iteration and if the slope of the non-identity line is greater than one, then in general, we cannot solve the system using our present technique. This idea should become obvious to your students if they are given several different linear systems and are asked to determine when convergence is possible and when it is not.
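A quick numerical experiment makes the attracting/repelling distinction concrete. The two lines below are illustrative choices, not systems from the exercises: both cross y = x at x = 2, but only the one whose slope has absolute value less than one pulls the iterates toward that fixed point.

f = lambda x: 0.5 * x + 1      # slope 0.5, fixed point at x = 2 (attracting)
g = lambda x: 2.0 * x - 2      # slope 2,   fixed point at x = 2 (repelling)

x = y = 5                      # the same seed for both
for n in range(10):
    x, y = f(x), g(y)
    print(n, round(x, 5), y)   # x closes in on 2; y runs off toward infinity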
At first glance, this seems to be quite a limited technique. Not only must one of the lines be the identity function, but the other must also have a slope between -1 and 1. Using the processes developed in transformational geometry, this process can be extended to all systems of two-dimensional linear equations.
Remember that convergence requires two important conditions:
1) one of the functions must be the identity function, and
2) the other function must have an absolute slope less than 1
Let us use (3) from above to illustrate the technique. Note that in this case, the first requirement has been met, but the second has not. This can be corrected by transforming the second linear equation into the identity function and then applying the same transformations to our first equation. By subtracting one from each equation and then dividing both by two, we obtain the sequence of systems below. This is transformationally equivalent to applying a
vertical translation of -1 and a vertical "shrink" of magnitude 2 to both of the lines in our system. Both of our requirements for convergence have now been met. If we were to try any value for x₀ now, we would notice that the system would converge to the fixed point that was previously repelling. For example, if x₀ = 5, then we obtain the sequence:
5, 2, .5, -.25, -.625, . . . , -1
Since the x- and y-values of the original system were identical, the solution would be (-1,-1).
Likewise, (4) could be transformed to (5) and upon iteration the solution would be found to be (.75,.75).
Why does this work? In general, we have system (6). If a = c, then our system is either inconsistent (if b ≠ d) or dependent (if b = d), neither of which are very interesting cases. Without loss of generality, we can therefore assume that a ≠ c. We want to transform the line with the greatest absolute slope into the identity function. This process will simultaneously satisfy both of the conditions needed for convergence. For our system, we will assume that |a| > |c|. Now, subtract b from both equations and then divide both by a. Under the rules of transformational geometry, this has the effect of applying a vertical translation of magnitude -b and then a vertical "stretch" of magnitude 1/a. It is crucial to note that while this process certainly alters the y-value of the point of intersection of the two lines, it does not affect the x-coordinate in any way. Algebraically, our system has been transformed as follows:
The first equation is the identity and since |a| > |c|, the second has a slope with absolute value less than one. Because of the transformations involved, this process only yields the x-values of the solution points. To solve the system choose any value for x₀ and iterate. The resulting limit of the sequence will be the x-value of the solution point. To find the y-value, substitute your x-coordinate into either of the original two equations.
Consider (7). Since the top equation has the greatest absolute slope, it must be
transformed into the identity function. Subtract two from both equations and then divide both by -3. This is equivalent to sliding both lines down two units and vertically compressing them by a factor of 3 (the division by a negative number also reflects them across the x-axis). The result is (8). Since the choice for x₀ does not matter, we can use .425 as our seed. The resulting sequence is approximately:
.425, 3.05, 1.3, 2.47, . . . , 2
The x-value of the solution is 2 and the y-coordinate is found by substitution into the original system: y = 2(2) - 8 = -4. Therefore, the solution to the system is (2,-4). This can be verified by any of the traditional techniques.
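The whole transform-then-iterate recipe for this example fits in a short program. The equations used below are reconstructed from the description: the top line must become y = x after subtracting two and dividing by -3, so it is taken to be y = -3x + 2, and the second, y = 2x - 8, appears in the substitution step. Treat that pair as an assumption about system (7) rather than a quotation of it.

# System (7), as reconstructed: y = -3x + 2 and y = 2x - 8.
# Subtracting 2 and dividing by -3 turns the first into y = x and the second
# into y = (2x - 10) / (-3), whose slope -2/3 has absolute value less than one.
def transformed(x):
    return (2 * x - 10) / (-3)

x = 0.425                      # the seed used in the text
for _ in range(60):
    x = transformed(x)
print(x)                       # converges to 2, the x-coordinate of the solution

y = 2 * x - 8                  # substitute back into an original equation
print(x, y)                    # approximately (2, -4)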
In this section, your students should probably utilize graph paper to make reasonably careful sketches of each system with particular attention to the slopes.
As an easy initial example, consider the system y = x and y = cos x.
An important observation is that the identity function is again one of the parts of the system. Assuming no other knowledge of the process, we will proceed as before. Set your TI calculator mode to radians and enter the two functions from our system. Again, any random number will work for x₀, so let x₀ = 12 and iterate as described in Part I: type 12 and press ENTER. Next type cos Ans and repeatedly press the ENTER key to perform the iteration. If the process is continued, the sequence will converge to 0.739085133. Like the earlier examples, this system has a solution with identical coordinates, so the solution by iteration is approximately (0.739,0.739).
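The same loop works in code for the cosine system; a minimal sketch (radians assumed, as on the calculator):

import math

x = 12.0                        # the seed used above; any real seed works
for _ in range(100):
    x = math.cos(x)             # the non-identity equation y = cos x
print(x)                        # about 0.739085133, so the solution is (0.739, 0.739)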
(9) (10) .
Next, try (9) and apply transformations to obtain (10). Now that we have the identity function, graph the two equations on your calculator. While the graphs are now tangent, we still have an intersection. Unlike our previous examples, not all seeds will work this time. Through trial and error you can discover that choices for x₀ in the interval [-2,2] will result in convergence (albeit very slowly) while any other choices for x₀ result in divergence. With some patience you can
determine that this iterative function has a limit of -2. Since we had to transform the original system, we can recall from section one that our limit is then the x-coordinate of the solution point. Evaluating either of the original functions at this x-value, we find that y = (-2)² = 4 and therefore our system has solution (-2,4). Verify this using any traditional methods.
Try each of the following systems. Remember to graph the systems after any necessary transformations and try to notice any graphical relationships within the systems. It might also be helpful to try different values for x₀.
System (11) converged to about -0.567143 while (12) did not converge for any seed. Why?
The slopes at the points of intersection for (9) - (12) varied. With partially non-linear systems, the slopes are not necessarily easy to find unless one uses calculus. However, using careful sketches of each transformed system, students can approximate the tangent lines to the curves at the points of intersection. They should then be able to approximate the slope of their tangent lines. Note that the absolute values of the slopes of (9) and (11) were less than one and that the iterative processes converged rather quickly. (10) had a slope of one at the intersection and converged slowly. (12) had a slope greater than one and all seeds diverged. These results are consistent with the restrictions on convergence from Part I.
Let us now change (12) by replacing the nonlinear component with its inverse to obtain:
Geometrically (and transformationally) this has the effect of reflecting the original function over the line y = x. Try any seed on the new system and notice that while it diverged quickly before, it now converges very quickly to 1.90416. As before, this is the x-value of the solution and since the identity was part of the original system, we get the y-coordinate with no work and the solution of the system is approximately (1.9,1.9).
This section discussed systems of two equations of the form:
The requirements for convergence with systems of this type are the same as they were in Part I. The first requirement is easy to satisfy by subtracting b from both equations and then dividing both by a to obtain:
We can no longer depend on f(x) being linear and this makes the process particularly interesting. If the absolute value of the slope of the non-linear portion of our system is less than one, then iterations will converge; otherwise, they will diverge. By using the inverse technique of the previous paragraph, we have effectively reflected our system over the line y=x. Recall that by our initial transformations we changed the y-value while the x-value was unchanged. Before finding the inverse, notice that we had moved the point of intersection of the system to a location on the line y=x. Upon reflection, transformational geometry guarantees that points on the reflecting line are their own images. This means that when we found the inverse of the non-linear function, again the x-value of the point of intersection remained unaltered. Finding the inverse is only necessary if the absolute value of the original slope of the tangent line was greater than one. Transformations also guarantee that if a line is reflected over the line y=x, then the slope of the image line is the reciprocal of the original slope. This means that if the slope of the tangent line was originally greater than one, then the slope of the inverse function at the point of intersection would be less than one and our second requirement will be met. From another perspective, this is exactly the same process we used in Part I (See problem #1).
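Here is a small illustration of the reflection idea with a made-up pair of curves (not one of the numbered systems above). Iterating f(x) = e^x - 2 against y = x runs away from the intersection near x ≈ 1.15, because the slope there is larger than one, but iterating the inverse, ln(x + 2), converges to that same x-coordinate. (Seeds below the crossing drift toward a second intersection near x ≈ -1.84 instead, which is the basin-of-attraction behavior Problem 5 asks about.)

import math

f = lambda x: math.exp(x) - 2        # slope e**x > 1 near the crossing at x ~ 1.15
g = lambda x: math.log(x + 2)        # its inverse; slope 1/(x + 2) < 1 there

x = 1.2                              # a seed just above the intersection
for _ in range(5):
    x = f(x)
print("iterating f:", x)             # already enormous after a handful of steps

x = 1.2
for _ in range(60):
    x = g(x)
print("iterating the inverse:", x)   # settles near 1.146, the x-coordinate we wanted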
When given a system containing at least one linear equation, this process can be simplified to the following steps (a rough code sketch of the whole procedure follows the list):
1) Transform the linear equation to the identity function, y = x, and apply the identical transformation to the other function in the system.
2) To try to ensure convergence, pick a value for the seed somewhat close to where you think the point of intersection might be. Iterate!
3) If your sequence approaches a limit, then you have found the x-value of the solution point. If the sequence diverges, then you are guaranteed that the slope at the intersection was greater than one. If this happens, replace the function with its inverse and iterate again. You are guaranteed to get the x-coordinate now.
4) Plug your x-value from above into either of the original functions and you will get the y-coordinate. Fini!
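The four steps read like a compact algorithm. Below is one rough Python rendering of them; the function names, the size-based divergence test, and the iteration counts are all choices of this sketch rather than anything prescribed by the article.

import math

def solve_by_iteration(f, f_inverse, a, b, seed, steps=200, big=1e12):
    """Solve y = a*x + b together with y = f(x) by iteration (a sketch).

    Step 1: transform the linear equation into y = x; the other equation
            becomes h(x) = (f(x) - b) / a.
    Steps 2-3: iterate h from the seed; if the values blow up, iterate the
            inverse of h, which is f_inverse(a*x + b), instead.
    Step 4: substitute the limiting x into y = a*x + b to get the y-coordinate.
    """
    h = lambda x: (f(x) - b) / a
    h_inv = lambda x: f_inverse(a * x + b)

    def run(step):
        x = seed
        for _ in range(steps):
            try:
                x = step(x)
            except (OverflowError, ValueError):
                return None
            if abs(x) > big:               # crude divergence check
                return None
        return x

    x = run(h)
    if x is None:                          # the slope at the crossing was too steep
        x = run(h_inv)
    return None if x is None else (x, a * x + b)

# The cosine example above, where the linear equation is y = x (a = 1, b = 0):
print(solve_by_iteration(math.cos, math.acos, 1, 0, seed=0.5))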
The method of numerical iteration is not necessarily intended to always be an easier method of solving systems and the method of finding inverses could in fact complicate the process. The iterative process does, though, involve some very beautiful mathematics and offers a method of solution to many systems that are otherwise analytically impossible to solve.
1) Why does the overall method for a system containing only one linear equation also work for the systems of two linear equations? That is, why is it only necessary to change one line to the identity and then see what happens? (Hint: Think about step three.)
2) Prove that if y = f(x) is a linear function with a slope between -1 and 1, then the iteration of f(x) with any seed will converge on some number xs such that f(xs) = xs.
3) Use these systems to answer the following questions:
A) The given method will never work if the slopes in the original system are opposite (a = -c). Try the iterative process that was given with any choice you want for x₀ in each system. What does happen? How quickly is this pattern achieved?
B) Solve each of the above systems by any traditional means available and record the solution point.
C) How do the x-coordinates of the solutions from part B correspond to the patterns you achieved in part A?
D) How can you now revise the iterative process to handle two line systems where the slopes are opposite (a = -c)? (This will only work with two line systems.)
4) What happens to the system if x₀ is chosen to be -1, or to the system if x₀ is chosen to be 3/4? Why? How do the given choices of x₀ relate to the graphs of each of the given systems? Try other types of systems to test your hypothesis.
5) Systems involving at least one non-linear equation often have multiple solutions.
A) Given: Look at the graph of the system to determine how many solutions exist. First transform the system to get the identity function and try some careful values for x₀. Notice that only one solution is generally possible. Replace the cubic function with its inverse to obtain the other solutions. Notice that certain choices of x₀ tend to lead to specific solutions for the system. The set of all x₀ that result in a specific solution is called the basin of attraction for that given solution. Can you determine the basins of attraction for each solution to this system? Calculus students should be able to determine the exact values for the endpoints of the basins. (Recall the requirements for convergence in a system!)
B) Try another example: Unless something like problem 3 happens, all choices for will initially diverge and you will have to find the inverse of a quadratic. Be careful. The inverse of is . Try both the positive and the negative portions in your attempt to find a solution. Look at the graph of the original system and try to determine why both the negative and the positive square roots are needed to solve the system.
C) Use your answers to parts A and B to determine a method for solving systems that have multiple solutions.
6) The big question: How can you handle systems where neither equation is linear?
This particular system cannot be solved analytically and the usual approach is to graph the solutions on a calculator and "zoom in" on the points of intersection. The two solutions near the origin are relatively easy to locate in this manner, but the third solution "way out" in the first quadrant requires a significant readjustment of the viewing window. The window would also have to be readjusted after each zoom. Iteration allows you to avoid that problem.
Given a system of the form
where neither f nor g is linear, the system could be transformed into either (12) or (13). Transform (11) into each of the
given forms if and . At this point, your system is much like the transformed systems we have used before. Use the two different transformed systems and the iteration method to find the solutions to the systems. Determine the basin of attraction for each solution.
If you are familiar with the process of "cobwebbing" to graphically represent iteration, this paper could take on another beautiful dimension. In this light, another solution technique for problem 5 can be determined by "cobwebbing" between both and . Your students might enjoy this view of iteration. In this case, you are really transforming the system in this problem into the following two forms.
For an excellent discussion of transformational geometry and its features and properties, we suggest that you read the appropriate sections of UCSMP's Geometry and Advanced Algebra (Scott, Foresman publishers).
The TI-82 graphics calculator has a great cobwebbing feature. As an introduction to the concepts of "cobwebs," we recommend that you read the appropriate section of the TI-82 guidebook. | http://www.woodrow.org/teachers/mi/1993/17harr.html | 13 |
117 | Everyone has had experience with surveys. Market surveys ask respondents whether
they recognize products and their feelings about them. Political polls ask questions
about candidates for political office or opinions related to political and social
issues. Needs assessments use surveys that identify the needs of groups. Evaluations
often use surveys to assess the extent to which programs achieve their goals.
A survey is a method of collecting information by asking questions. Sometimes interviews
are done face-to-face with people at home, in school, or at work. Other times
questions are sent in the mail for people to answer and mail back. Increasingly,
surveys are conducted by telephone.
Although we want
to have information on all people, it is usually too expensive and time consuming
to question everyone. So we select only some of these individuals and question
them. It is important to select these people in ways that make it likely that
they represent the larger group.
A population is all the individuals in whom we are interested. (A population does not always
consist of individuals. Sometimes, it may be geographical areas such as all
cities with populations of 100,000 or more. Or we may be interested in all
households in a particular area. In the data used in the exercises of this
module the population consists of individuals who are California residents.)
A sample is the subset of the population involved in a study. In other
words, a sample is part of the population. The process of selecting the sample
is called sampling. The idea of sampling is to select part of the population
to represent the entire population.
The United States
Census is a good example of sampling. The census tries to enumerate all residents
every ten years with a short questionnaire. Approximately every fifth household
is given a longer questionnaire. Information from this sample (i.e., every
fifth household) is used to make inferences about the population. Political
polls also use samples. To find out how potential voters feel about a particular
race, pollsters select a sample of potential voters. This module uses opinions
from three samples of California residents age 18 and over. The data were
collected during July, 1985, September, 1991, and February, 1995, by the Field
Research Corporation (The Field Institute 1985, 1991, 1995). The Field Research
Corporation is a widely-respected survey research firm and is used extensively
by the media, politicians, and academic researchers.
Since a survey
can be no better than the quality of the sample, it is essential to understand
the basic principles of sampling. There are two types of sampling-probability
and nonprobability. A probability sample is one in which each individual
in the population has a known, nonzero chance of being selected in the sample.
The most basic type is the simple random sample. In a simple
random sample, every individual (and every combination of individuals) has
the same chance of being selected in the sample. This is the equivalent of
writing each person's name on a piece of paper, putting them in plastic balls,
putting all the balls in a big bowl, mixing the balls thoroughly, and selecting
some predetermined number of balls from the bowl. This would produce a simple random sample.
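The bowl-of-balls picture corresponds to a couple of lines of code. This is only an illustration with made-up names, not something drawn from the Field Poll itself:

import random

population = ["person %d" % i for i in range(1, 951)]   # a made-up list of 950 people
sample = random.sample(population, 100)                 # each person, and each combination
                                                        # of 100 people, is equally likely
print(len(sample), sample[:5])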
The simple random
sample assumes that we can list all the individuals in the population, but
often this is impossible. If our population were all the households or residents
of California, there would be no list of the households or residents available,
and it would be very expensive and time consuming to construct one. In this
type of situation, a multistage cluster sample would be used.
The idea is very simple. If we wanted to draw a sample of all residents of
California, we might start by dividing California into large geographical
areas such as counties and selecting a sample of these counties. Our sample
of counties could then be divided into smaller geographical areas such as
blocks and a sample of blocks would be selected. We could then construct a
list of all households for only those blocks in the sample. Finally, we would
go to these households and randomly select one member of each household for
our sample. Once the household and the member of that household have been
selected, substitution would not be allowed. This often means that we must
call back several times, but this is the price we must pay for a good sample.
The Field Poll
used in this module is a telephone survey. It is a probability sample using
a technique called random-digit dialing. With random-digit dialing,
phone numbers are dialed randomly within working exchanges (i.e., the first
three digits of the telephone number). Numbers are selected in such a way
that all areas have the proper proportional chance of being selected in the
sample. Random-digit dialing makes it possible to include numbers that are
not listed in the telephone directory and households that have moved into
an area so recently that they are not included in the current telephone directory.
A nonprobability sample is one in which each individual in the population does not have
a known chance of selection in the sample. There are several types of nonprobability
samples. For example, magazines often include questionnaires for readers to
fill out and return. This is a volunteer sample since respondents self-select
themselves into the sample (i.e., they volunteer to be in the sample). Another
type of nonprobability sample is a quota sample. Survey researchers
may assign quotas to interviewers. For example, interviewers might be told
that half of their respondents must be female and the other half male. This
is a quota on sex. We could also have quotas on several variables (e.g., sex
and race) simultaneously.
Probability samples are preferable to nonprobability samples. First, they avoid the dangers of
what survey researchers call "systematic selection biases" which are inherent
in nonprobability samples. For example, in a volunteer sample, particular
types of persons might be more likely to volunteer. Perhaps highly-educated
individuals are more likely to volunteer to be in the sample and this would
produce a systematic selection bias in favor of the highly educated. In a
probability sample, the selection of the actual cases in the sample is left
to chance. Second, in a probability sample we are able to estimate the amount
of sampling error (our next concept to discuss).
We would like
our sample to give us a perfectly accurate picture of the population. However,
this is unrealistic. Assume that the population is all employees of a large
corporation, and we want to estimate the percent of employees in the population
that is satisfied with their jobs. We select a simple random sample of 500
employees and ask the individuals in the sample how satisfied they are with
their jobs. We discover that 75 percent of the employees in our sample are
satisfied. Can we assume that 75 percent of the population is satisfied? That
would be asking too much. Why would we expect one sample of 500 to give us
a perfect representation of the population? We could take several different
samples of 500 employees and the percent satisfied from each sample would
vary from sample to sample. There will be a certain amount of error as a result
of selecting a sample from the population. We refer to this as sampling
error. Sampling error can be estimated in a probability sample, but not
in a nonprobability sample.
It would be wrong
to assume that the only reason our sample estimate is different from the true
population value is because of sampling error. There are many other sources
of error called nonsampling error. Nonsampling error would include
such things as the effects of biased questions, the tendency of respondents
to systematically underestimate such things as age, the exclusion of certain
types of people from the sample (e.g., those without phones, those without
permanent addresses), or the tendency of some respondents to systematically
agree to statements regardless of the content of the statements. In some studies,
the amount of nonsampling error might be far greater than the amount of sampling
error. Notice that sampling error is random in nature, while nonsampling error
may be nonrandom producing systematic biases. We can estimate the amount of
sampling error (assuming probability sampling), but it is much more difficult
to estimate nonsampling error. We can never eliminate sampling error entirely,
and it is unrealistic to expect that we could ever eliminate nonsampling error.
It is good research practice to be diligent in seeking out sources of nonsampling
error and trying to minimize them.
Variables One at a Time (Univariate Analysis)
The rest of this
chapter will deal with the analysis of survey data. Data analysis involves
looking at variables or "things" that vary or change. A variable is
a characteristic of the individual (assuming we are studying individuals).
The answer to each question on the survey forms a variable. For example, sex
is a variable-some individuals in the sample are male and some are female.
Age is a variable; individuals vary in their ages.
Looking at variables
one at a time is called univariate analysis. This is the usual starting
point in analyzing survey data. There are several reasons to look at variables
one at a time. First, we want to describe the data. How many of our sample
are men and how many are women? How many are black and how many are white?
What is the distribution by age? How many say they are going to vote for Candidate
A and how many for Candidate B? How many respondents agree and how many disagree
with a statement describing a particular opinion?
A second reason we might want to look at variables one at a time involves recoding. Recoding
is the process of combining categories within a variable. Consider age, for
example. In the data set used in this module, age varies from 18 to 89, but
we would want to use fewer categories in our analysis, so we might combine
age into age 18 to 29, 30 to 49, and 50 and over. We might want to combine
African Americans with the other races to classify race into only two categories-white
and nonwhite. Recoding is used to reduce the number of categories in the variable
(e.g., age) or to combine categories so that you can make particular types
of comparisons (e.g., white versus nonwhite).
The frequency distribution is one of the basic tools for looking at variables one at a time.
A frequency distribution is the set of categories and the number of
cases in each category. Percent distributions show the percentage in
each category. Table 3.1 shows frequency and percent distributions for two
hypothetical variables-one for sex and one for willingness to vote for a woman
candidate. Begin by looking at the frequency distribution for sex. There are
three columns in this table. The first column specifies the categories-male
and female. The second column tells us how many cases there are in each category,
and the third column converts these frequencies into percents.
In this hypothetical
example, there are 380 males and 570 females or 40 percent male and 60 percent
female. There are a total of 950 cases. Since we know the sex for each case,
there are no missing data (i.e., no cases where we do not know the proper
category). Look at the frequency distribution for voting preference in Table
3.1. How many say they are willing to vote for a woman candidate and how many
are unwilling? (Answer: 460 willing and 440 not willing) How many refused to
answer the question? (Answer: 50) What percent say they are willing to vote
for a woman, what percent are not, and what percent refused to answer? (Answer:
48.4 percent willing to vote for a woman, 46.3 percent not willing, and 5.3
percent refused to tell us.) The 50 respondents who didn't want to answer the
question are called missing data because we don't know which category into which
to place them, so we create a new category (i.e., refused) for them. Since we
don't know where they should go, we might want a percentage distribution considering
only the 900 respondents who answered the question. We can determine this easily
by taking the 50 cases with missing information out of the base (i.e., the denominator
of the fraction) and recomputing the percentages. The fourth column in the frequency
distribution (labeled "valid percent") gives us this information. Approximately
51 percent of those who answered the question were willing to vote for a woman
and approximately 49 percent were not.
Table 3.1 -- Frequency and Percent Distributions for Sex and Willingness to Vote for a Woman Candidate (Hypothetical Data)

Sex                                Frequency    Percent
Male                                     380       40.0
Female                                   570       60.0
Total                                    950      100.0

Willingness to Vote for a Woman    Frequency    Percent    Valid Percent
Willing to Vote for a Woman              460       48.4         51.1
Not Willing to Vote for a Woman          440       46.3         48.9
Refused                                   50        5.3          --
Total                                    950      100.0        100.0
With these data
we will use frequency distributions to describe variables one at a time. There
are other ways to describe single variables. The mean, median, and mode are
averages that may be used to describe the central tendency of a distribution.
The range and standard deviation are measures of the amount of variability
or dispersion of a distribution. (We will not be using measures of central
tendency or variability in this module.)
The Relationship Between Two Variables (Bivariate Analysis)
Usually we want
to do more than simply describe variables one at a time. We may want to analyze
the relationship between variables. Morris Rosenberg (1968:2) suggests that
there are three types of relationships: "(1) neither variable may influence
one another .... (2) both variables may influence one another ... (3) one
of the variables may influence the other." We will focus on the third of these
types which Rosenberg calls "asymmetrical relationships." In this type of
relationship, one of the variables (the independent variable) is assumed
to be the cause and the other variable (the dependent variable) is
assumed to be the effect. In other words, the independent variable is the
factor that influences the dependent variable.
For example, researchers think that smoking causes lung cancer. The statement that specifies
the relationship between two variables is called a hypothesis (see
Hoover 1992, for a more extended discussion of hypotheses). In this hypothesis,
the independent variable is smoking (or more precisely, the amount one smokes)
and the dependent variable is lung cancer. Consider another example. Political
analysts think that income influences voting decisions, that rich people vote
differently from poor people. In this hypothesis, income would be the independent
variable and voting would be the dependent variable.
In order to demonstrate
that a causal relationship exists between two variables, we must meet
three criteria: (1) there must be a statistical relationship between the two
variables, (2) we must be able to demonstrate which one of the variables influences
the other, and (3) we must be able to show that there is no other alternative
explanation for the relationship. As you can imagine, it is impossible to
show that there is no other alternative explanation for a relationship. For
this reason, we can show that one variable does not influence another variable,
but we cannot prove that it does. We can only show that it is more plausible
or credible to believe that a causal relationship exists. In this section,
we will focus on the first two criteria and leave this third criterion to
the next section.
In the previous
section we looked at the frequency distributions for sex and voting preference.
All we can say from these two distributions is that the sample is 40 percent
men and 60 percent women and that slightly more than half of the respondents
said they would be willing to vote for a woman, and slightly less than half
are not willing to. We cannot say anything about the relationship between
sex and voting preference. In order to determine if men or women are more
likely to be willing to vote for a woman candidate, we must move from univariate
to bivariate analysis.
A crosstabulation (or contingency table) is the basic tool used to explore the relationship
between two variables. Table 3.2 is the crosstabulation of sex and voting
preference. In the lower right-hand corner is the total number of cases in
this table (900). Notice that this is not the number of cases in the sample.
There were originally 950 cases in this sample, but any case that had missing
information on either or both of the two variables in the table has been excluded
from the table. Be sure to check how many cases have been excluded from your
table and to indicate this figure in your report. Also be sure that you understand
why these cases have been excluded. The figures in the lower margin and right-hand
margin of the table are called the marginal distributions. They are simply
the frequency distributions for the two variables in the whole table. Here,
there are 360 males and 540 females (the marginal distribution for the column
variable-sex) and 460 people who are willing to vote for a woman candidate
and 440 who are not (the marginal distribution for the row variable-voting
preference). The other figures in the table are the cell frequencies. Since
there are two columns and two rows in this table (sometimes called a 2 x 2
table), there are four cells. The numbers in these cells tell us how many
cases fall into each combination of categories of the two variables. This
sounds complicated, but it isn't. For example, 158 males are willing to vote
for a woman and 302 females are willing to vote for a woman.
We could make comparisons
rather easily if we had an equal number of women and men. Since these numbers
are not equal, we must use percentages to help us make the comparisons. Since
percentages convert everything to a common base of 100, the percent distribution
shows us what the table would look like if there were an equal number of men and women.
Table 3.2 -- Crosstabulation of Sex and Voting Preference (Frequencies)

                                      Male    Female    Total
Willing to Vote for a Woman            158       302      460
Not Willing to Vote for a Woman        202       238      440
Total                                  360       540      900
Before we percentage
Table 3.2, we must decide which of these two variables is the independent
and which is the dependent variable. Remember that the independent variable
is the variable we think might be the influencing factor. The independent
variable is hypothesized to be the cause, and the dependent variable is the
effect. Another way to express this is to say that the dependent variable
is the one we want to explain. Since we think that sex influences willingness
to vote for a woman candidate, sex would be the independent variable.
Once we have
decided which is the independent variable, we are ready to percentage the
table. Notice that percentages can be computed in different ways. In Table
3.3, the percentages have been computed so that they sum down to 100. These
are called column percents. If they sum across to 100, they are called
row percents. If the independent variable is the column variable,
then we want the percents to sum down to 100 (i.e., we want the column percents).
If the independent variable is the row variable, we want the percents to sum
across to 100 (i.e., we want the row percents). This is a simple, but very
important, rule to remember. We'll call this our rule for computing percents.
Although we often see the independent variable as the column variable so the
table sums down to 100 percent, it really doesn't matter whether the independent
variable is the column or the row variable. In this module, we will put the
independent variable as the column variable. Many others (but not everyone)
use this convention. It would be helpful if you did this when you write your reports.
Now we are ready
to interpret this table. Interpreting a table means to explain what the table
is saying about the relationship between the two variables. First, we can look
at each category of the independent variable separately to describe the data
and then we compare them to each other. Since the percents sum down to 100 percent,
we describe down and compare across. The rule for interpreting percents
is to compare in the direction opposite to the way the percents sum to 100.
So, if the percents sum down to 100, we compare across, and if the percents
sum across to 100, compare down. If the independent variable is the column variable,
the percents will always sum down to 100. We can look at each category
of the independent variable separately to describe the data and then compare
them to each other-describe down and then compare across. In Table 3.3, row
one shows the percent of males and the percent of females who are willing to
vote for a woman candidate--43.9 percent of males are willing to vote for a
woman, while 55.9 percent of the females are. This is a difference of 12 percentage
points. Somewhat more females than males are willing to vote for a woman. The
second row shows the percent of males and females who are not willing to vote
for a woman. Since there are only two rows, the second row will be the complement
(or the reverse) of the first row. It shows that males are somewhat more likely
to be unwilling to vote for a woman candidate (a difference of 12 percentage
points in the opposite direction).
Table 3.3 -- Voting Preference by Sex (Percents)

                                      Male     Female
Willing to Vote for a Woman           43.9%     55.9%
Not Willing to Vote for a Woman       56.1%     44.1%
Total                                100.0%    100.0%
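The column percents in Table 3.3 are just each cell frequency from Table 3.2 divided by its column total. A short script reproduces them (the counts are the ones given above; rounding to one decimal place is assumed):

# Cell frequencies from Table 3.2: columns are male / female, rows willing / not willing.
counts = {"Male": {"Willing": 158, "Not willing": 202},
          "Female": {"Willing": 302, "Not willing": 238}}

for sex, cells in counts.items():
    column_total = sum(cells.values())           # 360 for males, 540 for females
    for answer, n in cells.items():
        print(sex, answer, round(100 * n / column_total, 1))
# Prints 43.9 / 56.1 for males and 55.9 / 44.1 for females, matching Table 3.3.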
When we observe
a difference, we must also decide whether it is significant. There are two
different meanings for significance-statistical significance and substantive
significance. Statistical significance considers whether the difference
is great enough that it is probably not due to chance factors. Substantive
significance considers whether a difference is large enough to be important.
With a very large sample, a very small difference is often statistically significant,
but that difference may be so small that we decide it isn't substantively
significant (i.e., it's so small that we decide it doesn't mean very much).
We're going to focus on statistical significance, but remember that even if
a difference is statistically significant, you must also decide if it is substantively significant.
Let's consider this idea of statistical significance. If our population is all men and women
of voting age in California, we want to know if there is a relationship between
sex and voting preference in the population of all individuals of voting age
in California. All we have is information about a sample from the population.
We use the sample information to make an inference about the population. This
is called statistical inference. We know that our sample is not a perfect
representation of our population because of sampling error. Therefore,
we would not expect the relationship we see in our sample to be exactly the
same as the relationship in the population.
Suppose we want
to know whether there is a relationship between sex and voting preference
in the population. It is impossible to prove this directly, so we have to
demonstrate it indirectly. We set up a hypothesis (called the null hypothesis)
that says that sex and voting preference are not related to each other in
the population. This basically says that any difference we see is likely to
be the result of random variation. If the difference is large enough that
it is not likely to be due to chance, we can reject this null hypothesis of
only random differences. Then the hypothesis that they are related (called
the alternative or research hypothesis) will be more credible.
In the first column of Table 3.4, we have listed the four cell frequencies from the
crosstabulation of sex and voting preference. We'll call these the observed
frequencies (f o) because they are what we observe from our table.
In the second column, we have listed the frequencies we would expect if, in
fact, there is no relationship between sex and voting preference in the population.
These are called the expected frequencies (f e). We'll briefly
explain how these expected frequencies are obtained. Notice from Table 3.1 that
51.1 percent of the sample were willing to vote for a woman candidate, while
48.9 percent were not. If sex and voting preference are independent (i.e., not
related), we should find the same percentages for males and females. In other
words, 48.9 percent (or 176) of the males and 48.9 percent (or 264) of the females
would be unwilling to vote for a woman candidate. (This explanation is adapted
from Norusis 1997.) Now, we want to compare these two sets of frequencies to
see if the observed frequencies are really like the expected frequencies. All
we do is to subtract the expected from the observed frequencies (column three).
We are interested in the sum of these differences for all cells in the table.
Since they always sum to zero, we square the differences (column four) to get positive numbers.
Finally, we divide
this squared difference by the expected frequency (column five). (Don't worry
about why we do this. The reasons are technical and don't add to your understanding.)
The sum of column five (12.52) is called the chi square statistic.
If the observed and the expected frequencies are identical (no difference),
chi square will be zero. The greater the difference between the observed and
expected frequencies, the larger the chi square.
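The arithmetic behind the 12.52 can be checked in a few lines. The observed counts are those of Table 3.2, the expected counts come from the row and column totals, and any small difference from the value reported above is just rounding:

# Observed 2x2 table: rows are willing / not willing, columns are male / female.
observed = [[158, 302],
            [202, 238]]

row_totals = [sum(row) for row in observed]            # 460 and 440
col_totals = [sum(col) for col in zip(*observed)]      # 360 and 540
grand_total = sum(row_totals)                          # 900

chi_square = 0.0
for i, row in enumerate(observed):
    for j, f_o in enumerate(row):
        f_e = row_totals[i] * col_totals[j] / grand_total   # expected frequency
        chi_square += (f_o - f_e) ** 2 / f_e

print(round(chi_square, 2))                            # about 12.52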
If we get a large
chi square, we are willing to reject the null hypothesis. How large does the
chi square have to be? We reject the null hypothesis of no relationship between
the two variables when the probability of getting a chi square this large
or larger by chance is so small that the null hypothesis is very unlikely
to be true. That is, if a chi square this large would rarely occur by chance
(usually less than once in a hundred or less than five times in a hundred).
In this example, the probability of getting a chi square as large as 12.52
or larger by chance is less than one in a thousand. This is so unlikely that
we reject the null hypothesis, and we conclude that the alternative hypothesis
(i.e., there is a relationship between sex and voting preference) is credible
(not that it is necessarily true, but that it is credible). There is always
a small chance that the null hypothesis is true even when we decide to reject
it. In other words, we can never be sure that it is false. We can only conclude
that there is little chance that it is true.
Just because we have concluded that there is a relationship between sex and voting preference
does not mean that it is a strong relationship. It might be a moderate or
even a weak relationship. There are many statistics that measure the strength
of the relationship between two variables. Chi square is not a measure of
the strength of the relationship. It just helps us decide if there is a basis
for saying a relationship exists regardless of its strength. Measures of
association estimate the strength of the relationship and are often used
with chi square. (See Appendix D for a discussion of how to compute the two
measures of association discussed below.)
Cramer's V is a measure of association appropriate when one or both of the variables
consists of unordered categories. For example, race (white, African American,
other) or religion (Protestant, Catholic, Jewish, other, none) are variables
with unordered categories. Cramer's V is a measure based on chi square. It
ranges from zero to one. The closer to zero, the weaker the relationship;
the closer to one, the stronger the relationship.
Gamma (sometimes referred to as Goodman and Kruskal's Gamma) is a measure of association
appropriate when both of the variables consist of ordered categories. For
example, if respondents answer that they strongly agree, agree, disagree,
or strongly disagree with a statement, their responses are ordered. Similarly,
if we group age into categories such as under 30, 30 to 49, and 50 and over,
these categories would be ordered. Ordered categories can logically be arranged
in only two ways-low to high or high to low. Gamma ranges from zero to one,
but can be positive or negative. For this module, the sign of Gamma would
have no meaning, so ignore the sign and focus on the numerical value. Like
V, the closer to zero, the weaker the relationship and the closer to one,
the stronger the relationship.
Whether to use Cramer's V or Gamma depends on whether the categories of the variable
are ordered or unordered. However, dichotomies (variables consisting of only
two categories) may be treated as if they are ordered even if they are not.
For example, sex is a dichotomy consisting of the categories male and
female. There are only two possible ways to order sex-male, female and female,
male. Or, race may be classified into two categories-white and nonwhite. We
can treat dichotomies as if they consisted of ordered categories because they
can be ordered in only two ways. In other words, when one of the variables
is a dichotomy, treat this variable as if it were ordinal and use gamma. This
is important when choosing an appropriate measure of association.
In this chapter
we have described how surveys are done and how we analyze the relationship
between two variables. In the next chapter we will explore how to introduce
additional variables into the analysis.
REFERENCES AND SUGGESTED READING
- Riley, Matilda White. 1963. Sociological Research I: A Case Approach. New York: Harcourt, Brace and World.
- Hoover, Kenneth R. 1992. The Elements of Social Scientific Thinking (5th Ed.). New York: St. Martin's.
- Gorden, Raymond L. 1987. Interviewing: Strategy, Techniques and Tactics. Chicago:
- Babbie, Earl R. 1990. Survey Research Methods (2nd Ed.). Belmont, CA:
- Babbie, Earl R. 1997. The Practice of Social Research (8th Ed.). Belmont,
- Knoke, David, and George W. Bohrnstedt. 1991. Basic Social Statistics. Itesche,
- Riley, Matilda White. 1963. Sociological Research II: Exercises and Manual. New York: Harcourt, Brace & World.
- Norusis, Marija J. 1997. SPSS 7.5 Guide to Data Analysis. Upper Saddle River, New Jersey: Prentice Hall.
- The Field Institute. 1985. California Field Poll Study, July, 1985. Machine-readable
- The Field Institute. 1991. California Field Poll Study, September, 1991. Machine-readable
- The Field
Institute. 1995. California Field Poll Study, February, 1995. Machine-readable | http://www.ssric.org/trd/modules/cowi/chapter3 | 13 |
55 | CS 5 Computer Literacy
Introduction to Databases: Tables and Queries
Microsoft Access organizes your information into tables. A table is made up of rows called records and columns called fields that look like a Microsoft Excel worksheet.
Every table has a topic. A table's topic is divided into categories that describe some aspect of the table's topic. These categories are called fields. For example, if the topic of a table is your customers, some fields the table would contain would be customers' last and first names. All the data stored in all the fields in one row of an Access table makes up one record. For example, a record might contain all the information you have about one item in your inventory.
A simple database might have only one table. However, most databases will contain more than one table. For example, you might have a table that stores information about inventory, another table that stores information about orders, and another table with information about customers.
To determine which columns a table needs, decide what information you need to collect about the table's topic. For example, for the Customers table, the First Name, Last Name, Address, City, State, Zip, and Email address would be a good starting list of fields. Each record in the table contains the same set of columns, so you can store the First Name, Last Name, Address, City, State, Zip, and Email address information for each record.
Specifying primary keys
Each table should include a field or group of fields that uniquely identifies each record stored in the table. The data stored in this field is often a unique identification number, such as an employee ID number or a serial number. In database terminology, this information is called the primary key of the table. Access uses primary key fields to quickly associate data from multiple tables and bring the data together for you.
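Outside of Access itself, the same ideas (a table, its fields, and a primary key) can be tried out in any SQL-backed tool. The sketch below uses Python's built-in sqlite3 module rather than Access, and the table and field names are the ones suggested above for a Customers table, so treat the details as illustrative rather than as Access syntax:

import sqlite3

conn = sqlite3.connect(":memory:")            # a throwaway database for experimenting
conn.execute("""
    CREATE TABLE Customers (
        CustomerID INTEGER PRIMARY KEY,       -- the unique identifier for each record
        FirstName  TEXT,
        LastName   TEXT,
        Address    TEXT,
        City       TEXT,
        State      TEXT,                      -- two characters is plenty for a state code
        Zip        TEXT,
        Email      TEXT
    )
""")
conn.execute("INSERT INTO Customers (FirstName, LastName, State) VALUES (?, ?, ?)",
             ("Jane", "Brown", "CA"))
print(conn.execute("SELECT CustomerID, LastName FROM Customers").fetchall())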
To read more about tables and database design, follow the link below:
Database Design Basics
Setting Field Properties
As you define each field in a table, you will also set field properties. Field properties include data type and optionally, formats and a description of the field's content. Each data type has a separate set of options available in the Field Properties portion of the Table Design window.
For each field in a table, you make a selection from the data type's list of options. Choose a field's data type appropriate for the kind of data to be stored in that field. For example, a Price field that will hold dollar amounts would have a data type of Number or Currency while a Phone field would have a data type of text.
Field size defines the maximum number of characters field values may contain for text, number and AutoNumber data types. A field's size should be big enough to hold the largest piece of data you expect to store in the field without making the size too large. Field sizes larger than they need to be waste disk space. For example, a State field with a size of 50 would be wasteful; a more appropriate value would be a field size of 2.
Working With Records
Add Records in Datasheet View
After creating a table, you can add records to it in Datasheet View. Newly entered records are placed in the order in which you enter them. When you close the table and open it again, the records will be ordered according to the primary key.
To edit a record's field values, select the field value and type the new information.
To delete a record, click the record (row) selector to the left of the first field value and either press the Delete key or click the Delete button in the Records section of the Home tab. To delete more than one record at a time, click and drag through the record selectors for all the records you want to delete and press the delete key. Remember, once you delete one or more records, the records are gone. You canít undo deletion of records!
Sort a Datasheet
Sorting rearranges the records in a table or query datasheet. You select one or more fields to sort by and then choose ascending or descending order. If you want to sort by more than one field in a table, the fields must be side by side; you can't sort on two or more fields if the fields you want to sort on are separated by other fields.
A collection of related tables is called a relational database. Relations between tables are based on a field that is common to two of the related tables. For example, every record for every customer in a Customers table has a field that contains data that uniquely identifies each customer. Even if you had two customers named Jane Brown, each would have her own unique identifier to distinguish one Jane's order from the other Jane's order. This unique identifier is called a primary key. Every time a customer places an order, a record is created in the Orders table and the customer's primary key is added to the order information. The existence of this common field enables Access to associate or link a record in the Customers table with a record in the Orders table. It is this ability to create links between tables that makes a relational database such as Access a powerful tool.
Data redundancy refers to the same data that exists in two or more tables. Data redundancy can lead to data that lacks integrity, that is, data that is inaccurate. Data redundancy is often caused by poor database design. For example, if you enter a customer's name, address and order information in an order table, each time the customer places an order you will have to re-enter the customer's name and address. Chances are good that at some point the customer's name or address will be entered incorrectly.
A better design for the database would be to keep the customer information in one table and the order information in another table. Every time the customer placed an order, the only piece of customer information that would go into the order table would be the customer's ID. This reduces the time it takes for data entry, reduces data redundancy and increases the chances that data in the two tables will have integrity, that is the data will be accurate.
What Is A Query?
A query is a question you ask about the data stored in a table, such as which employees live in New York or which inventory items cost more than $100. When you run a query (ask a question about the data in a table), Access responds by displaying only the data that answers your question, that is, only the records that have NY in the State field or only the inventory items that cost more than $100.
Queries are what you use when you need to compare weekly sales figures, track packages, or find all the members of your club who live in Texas. In other words, they're how you answer questions, and that makes creating queries an essential "databasing" skill.
To open an existing query, click Queries in the Navigation pane and then double click the query you want to open; the query datasheet appears. Query datasheets look identical to table datasheets with records displayed in a row and column layout.
If you want it to, a query can also process data. For example, you can find the sales for last month and calculate performance figures for each office. And you can do more than just find data. For example, you can use a query to add last month's inventory figures to your database. You can also use queries as data sources for forms, reports, and even other queries.
Types of Queries
Access provides several types of queries.
A select query retrieves data from one or more tables in your database, or from other queries, and displays the results in a datasheet. You can also use a select query to group data, and to calculate sums, averages, counts, and other types of totals.
A parameter query is a type of select query that prompts you for input before it runs. The query then uses your input as criteria that control your results. For example, a typical parameter query asks you for starting high and low values, and only returns records that fall within those values.
A crosstab query uses row headings and column headings so you can see your data in terms of two categories at once.
An action query alters your data or your database. For example, you can use an action query to create a new table, or add, delete, or change your data.
Building a Query
So, how do you start building a query? The first step is to know the structure of your database, and the data in each table. You don't have to memorize every last record, but you do need to know where everything is. Open your tables and explore their fields and data.
Next, you design your query. An easy way to do that is to state the question you want your data to answer. The more detail you add to your question, the more precisely you can define your query. For example:
What's our most popular sales item in South America?
How many of my DVDs are on loan to friends?
Who's ordering the team jerseys this year, and what sizes do we need?
If it helps, write down your question and include as many field names as you can. For example: "I need the 10 best selling products in Malaysia. I need to know product names, product IDs, and the department that makes each item."
Once you have your question, you can build your query.
Two tools you can use to build a query are the Query Wizard and Design view.
The Query Wizard makes it easy to build queries that group your data, and that retrieve data from multiple tables.
Design view gives you complete control over your queries.
With either tool, you start by choosing a record source: the table or query that contains the data fields you want to see. After you choose the data source, you add the fields to your query and then run the query to test it.
You run a query whenever you open it. Whenever you run a query, it takes the latest data from your record source and loads the results into a datasheet. The data returned by a query is called a record set.
Create a New Query
Click on a data source, either a table or a query, in the Navigation Pane, click Create on the Ribbon and then click either the Query Wizard or Query Design. The Simple Query Wizard leads you through the steps needed to create your query. You can also create a query in Design View.
Queries may display all or only selected fields from a table. In the Simple Query Wizard dialog box, select which fields you want included in the query. If you create a query in Design View, add one or more tables (or queries) to the Design grid and then drag fields from the field list(s) into the grid.
Query Design View
Query Design view allows you to specify the tables and/or queries and the fields you want to use in your query and the criteria (conditions) used to control displayed records. Each design grid column represents a field in the query datasheet.
Add Fields to a Query
There are several ways to specify which fields you want added to the query design grid:
Double click a field in the field list
Drag a field to the design grid
Double click the title bar of a field list and then drag any field into the grid to add all the fields to the design grid
Viewing Query Results
After you have selected the fields and chosen the criteria, to see the results of a query, click the View button or click the Run button. Query datasheet records result from the fields and optionally, criteria added to the query design grid. Query results display in the table's primary key order unless you specify another sort order.
Saving a Query
To save a Query, click the Save button and then give the query a name.
Table Data Is Linked to the Query
Query results are linked to the associated data source. This is extremely important to keep in mind. Changing data in the query datasheet changes data in the underlying table!
Criteria is another word for conditions. Criteria control which records appear when you run a query. Criteria can be simple, such as State=CA, or complex, involving several fields and sets of conditions.
Exact Match Queries
One type of select query is the Exact Match query. Exact Match Queries display only records that match the criteria (conditions) specified in the query design grid. Values in the specified field must match the condition exactly. You enter conditions in the criteria row of the query design grid. When you run the Query, only records that meet the criteria are displayed. When you change the conditions and run the query again, the query displays a new set of records.
Criteria entered into the query design grid control which records appear in the query's datasheet. Criteria are often entered using one or more of three types of operators, Comparison, Special and Logical.
Comparison operators do for criteria what mathematical operators do for numbers. Just as you join two numbers with a math operator (5+5), you join a field and a condition with a comparison operator (Salary > 75000). There are six comparison operators.
=    Equals                     Example: =MA
<>   Not equal to               Example: <>MA
<    Less than                  Example: <50000
>    Greater than               Example: >2/1/2004
<=   Less than or equal to      Example: <=60015
>=   Greater than or equal to   Example: >=75000
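Although you build criteria in the design grid, Access actually stores each query as an SQL statement, which you can view in SQL View. As a rough, hypothetical sketch only (the table and field names below are made up and are not from our class database), an exact-match criterion and a comparison criterion correspond to SQL along these lines:

SELECT LastName, State
FROM Customers
WHERE State = "MA";

SELECT LastName, Salary
FROM Employees
WHERE Salary >= 75000;

You never have to type this SQL yourself; the design grid generates it for you.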
Quick Reference Card for Queries
Design a query
Write the question(s) you want to answer. In each sentence, include the name of each field you need to see, or at least a close approximation.
Run an existing query
In the Navigation Pane, under Queries, double-click the query you want to run.
If the query is open in Design view, go to the Design tab on the Ribbon, and in the Results group, click Run.
Open a query in Design view
In the Navigation Pane, under Queries, right-click the query and click Design View.
If the query is open, right-click its tab and click Design View.
Facts to remember about queries
Queries take their data from a record source: fields from tables, other queries, or a combination of the two.
Queries display their results in a datasheet. The result is called a record set. You can work with the data in a record set in the same ways you work with data in a table: add new data, change existing records, sort, filter, and so on.
You can alter queries until they produce the results you need.
You can use queries as data sources for forms and reports.
The Query Wizard and Design view provide the easiest ways to build queries.
This Week's Assignment
The Microsoft Web site has extensive help files and even self-guided tutorials on all the Microsoft products.
The links below will take you to two lessons on queries:
Queries I: Get started with queries
Queries II: Create basic select queries
This week you will complete an assignment with Access that will introduce you to Access tables and queries.
Click on the links below to download the files you will need for this assignment and save them to your computer's hard drive or to your flash drive.
Access Query Questions
Table and queries database
There are two files for this week's assignment. The first file (AccessQueryQuestions.docx) contains terms for you to define and questions for you to answer. Use the information from our class discussions and on this class Web page to help you define the terms and answer the questions. After you have defined the terms and answered all the questions, upload your work to our class drop box. The name of the file you submit should be AccessQueryQuestions.docx
The second file (CS5TablesQueries.accdb) is a database file we will work on together in the lab. Make sure you upload your database file to our class drop box before you leave the lab today.
Humidity is the amount of water vapor in the air. Water vapor is the gas phase of water and is invisible. Humidity indicates the likelihood of precipitation, dew, or fog. Higher humidity reduces the effectiveness of sweating in cooling the body by reducing the rate of evaporation of moisture from the skin. This effect is calculated in a heat index table, used during summer weather.
There are three main measurements of humidity: absolute, relative and specific. Absolute humidity is the water content of air. Relative humidity, expressed as a percent, measures the current absolute humidity relative to the maximum for that temperature. Specific humidity is a ratio of the water vapor content of the mixture to the total air content on a mass basis.
Absolute humidity
Absolute humidity is an amount of water vapor, usually discussed per unit volume. It is the mass of water vapor per unit volume of the total air and water vapor mixture, and can be expressed as follows:
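Written out (with $m_{H_2O}$ the mass of water vapor and $V_{net}$ the volume of the air and water vapor mixture; the notation is supplied here because the original equation image is not reproduced):

$$ AH = \frac{m_{H_2O}}{V_{net}} $$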
The absolute humidity changes as air temperature or pressure changes. This is very inconvenient for chemical engineering calculations, e.g. for clothes dryers, where temperature can vary considerably. As a result, absolute humidity is generally defined in chemical engineering as mass of water vapor per unit mass of dry air, also known as the mass mixing ratio (see below), which is much more rigorous for heat and mass balance calculations. Mass of water per unit volume as in the equation above would then be defined as volumetric humidity. Because of the potential confusion, British Standard BS 1339 (revised 2002) suggests avoiding the term "absolute humidity". Units should always be carefully checked. Most humidity charts are given in g/kg or kg/kg, but any mass units may be used.
The field concerned with the study of physical and thermodynamic properties of gas-vapor mixtures is named Psychrometrics.
Relative humidity
Relative humidity is the ratio of the partial pressure of water vapor in the air-water mixture to the saturated vapor pressure of water at those conditions. The relative humidity of air is a function of both its water content and temperature.
Relative humidity is normally expressed as a percentage and is calculated by using the following equation. It is defined as the ratio of the partial pressure of water vapor (H2O) in the mixture to the saturated vapor pressure of water at a prescribed temperature.
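In symbols (with $p_{H_2O}$ the partial pressure of water vapor and $p^{*}_{H_2O}$ the saturated vapor pressure at the given temperature; notation supplied here):

$$ RH = \frac{p_{H_2O}}{p^{*}_{H_2O}} \times 100\% $$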
Relative humidity is an important metric used in weather forecasts and reports, as it is an indicator of the likelihood of precipitation, dew, or fog. In hot summer weather, a rise in relative humidity increases the apparent temperature to humans (and other animals) by hindering the evaporation of perspiration from the skin. For example, according to the Heat Index, a relative humidity of 75% at 80°F (27°C) would feel like 83.574°F ±1.3 °F (28.652°C ±0.7 °C) at ~44% relative humidity.
Specific humidity
Specific humidity is the ratio of water vapor to dry air in a particular mass, and is sometimes referred to as humidity ratio. Specific humidity is expressed as the ratio of the mass of water vapor to the mass of dry air. This quantity is also known as the water vapor "mixing ratio".
That ratio is defined as:
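With $m_v$ the mass of water vapor and $m_d$ the mass of dry air (symbols supplied here in place of the original equation image):

$$ SH = \frac{m_v}{m_d} $$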
Specific humidity can be expressed in other ways including:
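For example, in terms of pressures it can be written approximately as follows, where $p$ is the total pressure and 0.622 is the ratio of the molar mass of water to that of dry air (this is the standard textbook form, reconstructed here rather than quoted from the original):

$$ SH \approx \frac{0.622\, p_{H_2O}}{p - p_{H_2O}} $$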
Using this definition of specific humidity, the relative humidity can be expressed as
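One common form of that relation (again with 0.622 the ratio of molar masses; treat this as a standard reconstruction rather than a quotation of the original equation):

$$ RH = \frac{SH \cdot p}{(0.622 + SH)\, p^{*}_{H_2O}} $$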
However, specific humidity is also defined as the ratio of water vapor to the total mass of the system (dry air plus water vapor). For example, the ASHRAE 2009 Handbook, Ch1,1.2, (9a) defines specific humidity as "the ratio of the mass of water vapor to total mass of the moist air sample".
There are various devices used to measure and regulate humidity. A device used to measure humidity is called a psychrometer or hygrometer. A humidistat is a humidity-triggered switch, often used to control a dehumidifier.
Humidity is also measured on a global scale using remotely placed satellites. These satellites are able to detect the concentration of water in the troposphere at altitudes between 4 and 12 kilometers. Satellites that can measure water vapor have sensors that are sensitive to infrared radiation. Water vapor specifically absorbs and re-radiates radiation in this spectral band. Satellite water vapor imagery plays an important role in monitoring climate conditions (like the formation of thunderstorms) and in the development of future weather forecasts.
While humidity itself is a climate variable, it also interacts strongly with other climate variables. The humidity is affected by winds and by rainfall. At the same time, humidity affects the energy budget and thereby influences temperatures in two major ways. First, water vapor in the atmosphere contains "latent" energy. During transpiration or evaporation, this latent heat is removed from surface liquid, cooling the earth's surface. This is the biggest non-radiative cooling effect at the surface. It compensates for roughly 70% of the average net radiative warming at the surface. Second, water vapor is the most important of all greenhouse gases. Water vapor, like a green lens that allows green light to pass through it but absorbs red light, is a "selective absorber". Along with other greenhouse gases, water vapor is transparent to most solar energy, as you can literally see. But it absorbs the infrared energy emitted (radiated) upward by the earth's surface, which is the reason that humid areas experience very little nocturnal cooling but dry desert regions cool considerably at night. This selective absorption causes the greenhouse effect. It raises the surface temperature substantially above its theoretical radiative equilibrium temperature with the sun, and water vapor is the cause of more of this warming than any other greenhouse gas.
The most humid cities on earth are generally located closer to the equator, near coastal regions. Cities in South and Southeast Asia are among the most humid, such as Kolkata, Chennai and Cochin in India, the cities of Manila in the Philippines, Mogadishu in Somalia and Bangkok in Thailand and extremely humid Lahore in Pakistan: these places experience extreme humidity during their rainy seasons combined with warmth giving the feel of a lukewarm sauna. Darwin, Australia experiences an extremely humid wet season from December to April. Shanghai and Hong Kong in China also have an extreme humid period in their summer months. Kuala Lumpur and Singapore have very high humidity all year round because of their proximity to water bodies and the equator and overcast weather. Perfectly clear days are dependent largely upon the season in which one decides to travel. During the South-west and North-east Monsoon seasons (respectively, late May to September and November to March), expect heavy rains and a relatively high humidity post-rainfall. Outside the monsoon seasons, humidity is high (in comparison to countries North of the Equator), but completely sunny days abound. In cooler places such as Northern Tasmania, Australia, high humidity is experienced all year due to the ocean between mainland Australia and Tasmania. In the summer the hot dry air is absorbed by this ocean and the temperature rarely climbs above 35 °C (95 °F).
In the United States the most humid cities, strictly in terms of relative humidity, are Forks and Olympia, Washington. This fact may come as a surprise to many, as the climate in this region rarely exhibits the discomfort usually associated with high humidity. This is because high dew points play a more significant role than relative humidity in discomfort, and so the air in these western cities usually does not feel "humid" as a result. In general, dew points are much lower in the Western U.S. than those in the Eastern U.S.
The highest dew points in the US are found in coastal Florida and Texas. When comparing Key West and Houston, two of the most humid cities from those states, coastal Florida seems to have the higher dew points on average. However, Houston lacks the coastal breeze present in Key West, and, as a much larger city, it suffers from the urban heat island effect. A dew point of 88 °F (31 °C) was recorded in Moorhead Minnesota on July 19, 2011, with a heat index of 133.5, although dew points over 80 °F (27 °C) are rare there. The US city with the lowest annual humidity is Las Vegas, Nevada, averaging 39% for a high and 21% as a low.
Air density and volume
Humidity depends on water vaporization and condensation, which, in turn, mainly depend on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until the mixture returns to almost the same humidity as before, so the resulting total volume deviates from what the ideal gas law predicted. Conversely, decreasing temperature would also make some water condense, again making the final volume deviate from that predicted by the ideal gas law. Therefore, gas volume may alternatively be expressed as the dry volume, excluding the humidity content. This fraction more accurately follows the ideal gas law. By contrast, the saturated volume is the volume a gas mixture would have if humidity were added to it until saturation (or 100% relative humidity).
Humid air is less dense than dry air because a molecule of water (M ≈ 18 u ) is less massive than either a molecule of nitrogen (M ≈ 28) or a molecule of oxygen (M ≈ 32). About 78% of the molecules in dry air are nitrogen (N2). Another 21% of the molecules in dry air are oxygen (O2). The final 1% of dry air is a mixture of other gases.
For any gas, at a given temperature and pressure, the number of molecules present in a particular volume is constant – see ideal gas law. So when water molecules (vapor) are introduced into that volume of dry air, the number of air molecules in the volume must decrease by the same number, if the temperature and pressure remain constant. (The addition of water molecules, or any other molecules, to a gas, without removal of an equal number of other molecules, will necessarily require a change in temperature, pressure, or total volume; that is, a change in at least one of these three parameters. If temperature and pressure remain constant, the volume increases, and the dry air molecules that were displaced will initially move out into the additional volume, after which the mixture will eventually become uniform through diffusion.) Hence the mass per unit volume of the gas—its density—decreases. Isaac Newton discovered this phenomenon and wrote about it in his book Opticks.
Animals and plants
The human body dissipates heat through perspiration and its evaporation. Heat convection to the surrounding air, and thermal radiation are the primary modes of heat transport from the body. Under conditions of high humidity, the rate of evaporation of sweat from the skin decreases. Also, if the atmosphere is as warm as or warmer than the skin during times of high humidity, blood brought to the body surface cannot dissipate heat by conduction to the air, and a condition called hyperpyrexia results. With so much blood going to the external surface of the body, relatively less goes to the active muscles, the brain, and other internal organs. Physical strength declines, and fatigue occurs sooner than it would otherwise. Alertness and mental capacity also may be affected, resulting in heat stroke or hyperthermia.
Human comfort
Humans are sensitive to humid air because the human body uses evaporative cooling as the primary mechanism to regulate temperature. Under humid conditions, the rate at which perspiration evaporates on the skin is lower than it would be under arid conditions. Because humans perceive the rate of heat transfer from the body rather than temperature itself, we feel warmer when the relative humidity is high than when it is low.
Some people experience difficulty breathing in high humidity environments. Some cases may possibly be related to respiratory conditions such as asthma, while others may be the product of anxiety. Sufferers will often hyperventilate in response, causing sensations of numbness, faintness, and loss of concentration, among others.
Air conditioning reduces discomfort in the summer not only by reducing temperature, but also by reducing humidity. In winter, heating cold outdoor air can decrease relative humidity levels indoor to below 30%, leading to discomfort such as dry skin and excessive thirst.
Many electronic devices have humidity specifications, for example, 5% to 95%. At the top end of the range, moisture may increase the conductivity of permeable insulators, leading to malfunction. Too low a humidity may make materials brittle. A particular danger to electronic items, regardless of the stated operating humidity range, is condensation. When an electronic item is moved from a cold place (e.g., garage, car, shed, an air conditioned space in the tropics) to a warm humid place (house, outside tropics), condensation may coat circuit boards and other insulators, leading to short circuit inside the equipment. Such short circuits may cause substantial permanent damage if the equipment is powered on before the condensation has evaporated. A similar condensation effect can often be observed when a person wearing glasses comes in from the cold (i.e. the glasses become foggy). It is advisable to allow electronic equipment to acclimatise for several hours, after being brought in from the cold, before powering on. Some electronic devices can detect such a change and indicate, when plugged in and usually with a small droplet symbol, that they cannot be used until the risk from condensation has passed. In situations where time is critical, increasing air flow through the device's internals, such as by removing the side panel from a PC case and directing a fan to blow into the case, will significantly reduce the time needed to acclimatise to the new environment.
Conversely, a very low humidity level favors the build-up of static electricity, which may result in spontaneous shutdown of computers when discharges occur. Apart from spurious erratic function, electrostatic discharges can cause dielectric breakdown in solid state devices, resulting in irreversible damage. Data centers often monitor relative humidity levels for these reasons.
Building construction
Traditional building designs typically had weak insulation, which allowed air moisture to flow freely between the interior and exterior. The energy-efficient, heavily-sealed architecture introduced in the 20th century also sealed off the movement of moisture, and this has resulted in a secondary problem of condensation forming in and around walls, which encourages the development of mold and mildew. Additionally, buildings with foundations not properly sealed will allow water to flow through the walls due to capillary action of pores found in masonry products. Solutions for energy-efficient buildings that avoid condensation are a current topic of architecture.
See also
- Relative humidity
- Savory brittleness scale
- Dew point
- Humidity indicator
- "What is Water Vapor?". Retrieved 2012-08-28.
- Wyer, S.S., "A treatise on producer-gas and gas-producers", (1906) The Engineering and Mining Journal, London, p.23
- Perry, R.H. and Green, D.W, Perry's Chemical Engineers' Handbook (7th Edition), McGraw-Hill, ISBN 0-07-049841-5 , Eqn 12-7
- Lans P. Rothfusz. "The Heat Index 'Equation' (or, More Than You Ever Wanted to Know About Heat Index)", Scientific Services Division (NWS Southern Region Headquarters), 1 July 1990
- R.G. Steadman, 1979. "The assessment of sultriness. Part I: A temperature-humidity index based on human physiology and clothing science," J. Appl. Meteor., 18, 861-873
- Cengel, Yunus and Boles, Michael, Thermodynamics: An Engineering Approach, 1998, 3rd edition, McGraw-Hill, pp. 725–726
- AMS Glossary: specific humidity
- BBC – Weather Centre – World Weather – Average Conditions – Bangkok
- What Is The Most Humid City In The U.S.? | KOMO-TV – Seattle, Washington | News Archive
- Answers: Is Florida or Texas more humid: September 3,2003
- Isaac Newton (1704). Opticks. Dover. ISBN 978-0-486-60205-9.
- C.Michael Hogan. 2010. Abiotic factor. Encyclopedia of Earth. eds Emily Monosson and C. Cleveland. National Council for Science and the Environment. Washington DC
- "I have trouble breathing in humidity - Lung & Respiratory Disorders / COPD Message Board - HealthBoards". Retrieved 18 July 2011.
- "Fogging Glasses".
- United States Environmental Protection Agency, "IAQ in Large Buildings". Retrieved Jan. 9, 2006.
- Glossary definition of absolute humidity – National Science Digital Library
- Glossary definition of psychrometric tables – National Snow and Ice Data Center
- Glossary definition of specific humidity – National Snow and Ice Data Center
- FREE Humidity & Dewpoint Calculator – Vaisala
- Free Windows Program, Dewpoint Units Conversion Calculator – PhyMetrix
- Free Online Humidity Calculator – Calculate about 16 parameters online with the Rotronic Humidity Calculator
LECTURE 23 and 24
Procedure for drawing shear force and bending moment diagram:
The advantage of plotting a variation of shear force F and bending moment M in a beam as a function of 'x' measured from one end of the beam is that it becomes easier to determine the maximum absolute value of shear force and bending moment.
Further, the determination of the value of M as a function of 'x' becomes of paramount importance so as to determine the value of deflection of a beam subjected to a given loading.
Construction of shear force and bending moment diagrams:
A shear force diagram can be constructed from the loading diagram of the beam. In order to draw this, first the reactions must be determined always. Then the vertical components of forces and reactions are successively summed from the left end of the beam to preserve the mathematical sign conventions adopted. The shear at a section is simply equal to the sum of all the vertical forces to the left of the section.
When the successive summation process is used, the shear force diagram should end up with the previously calculated shear (reaction at the right end of the beam). No shear force acts through the beam just beyond the last vertical force or reaction. If the shear force diagram closes in this fashion, then it gives an important check on the mathematical calculations.
The bending moment diagram is obtained by proceeding continuously along the length of beam from the left hand end and summing up the areas of shear force diagrams giving due regard to sign. The process of obtaining the moment diagram from the shear force diagram by summation is exactly the same as that for drawing shear force diagram from load diagram.
It may also be observed that a constant shear force produces a uniform change in the bending moment, resulting in a straight line in the moment diagram. If no shear force exists along a certain portion of a beam, then no change in moment takes place there. It may further be observed that dM/dx = F; therefore, from the fundamental theorem of calculus, the maximum or minimum moment occurs where the shear is zero. In order to check the validity of the bending moment diagram, the terminal conditions for the moment must be satisfied. If the end is free or pinned, the computed sum must be equal to zero. If the end is built in, the moment computed by the summation must be equal to the one calculated initially for the reaction. These conditions must always be satisfied.
In the following sections some illustrative problems have been discussed so as to illustrate the procedure for drawing the shear force and bending moment diagrams
1. A cantilever of length l carries a concentrated load 'W' at its free end.
Draw shear force and bending moment.
At a section a distance x from the free end, consider the forces to the left; then F = -W (for all values of x). The -ve sign means the shear forces to the left of the x-section are in the downward direction and therefore negative.
Taking moments about the section gives (obviously to the left of the section)
M = -Wx (-ve sign means that the moment on the left hand side of the portion is in the anticlockwise direction and is therefore taken as -ve according to the sign convention)
so that the maximum bending moment occurs at the fixed end i.e. M = -W l
From equilibrium consideration, the fixing moment applied at the fixed end is Wl and the reaction is W. the shear force and bending moment are shown as,
2. Simply supported beam subjected to a central load (i.e. load acting at the mid-way)
By symmetry the reactions at the two supports would be W/2 and W/2. Now consider any section X-X from the left end; then the beam is under the action of the following forces.
So the shear force at any X-section would be = W/2 [which is constant up to x < l/2].
If we consider another section Y-Y which is beyond l/2, then the load W also acts to the left of the section, so that
F = W/2 - W = -W/2 for all values of x greater than l/2.
Hence S.F diagram can be plotted as,
For B.M diagram:
If we just take the moments to the left of the cross-section,
Which when plotted will give a straight relation i.e.
It may be observed that at the point of application of load there is an abrupt change in the shear force, at this point the B.M is maximum.
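The original figures are not reproduced in this text; for reference, the standard expressions for this case (with x measured from the left support) are:

$$ F = +\frac{W}{2} \ \ (0 < x < l/2), \qquad F = \frac{W}{2} - W = -\frac{W}{2} \ \ (l/2 < x < l) $$
$$ M = \frac{W}{2}\,x \ \ (0 \le x \le l/2), \qquad M_{max} = \frac{W\,l}{4} \ \text{at } x = l/2 $$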
3. A cantilever beam subjected to U.d.L, draw S.F and B.M diagram.
Here the cantilever beam is subjected to a uniformly distributed load whose intensity is w per unit length.
Consider any cross-section XX which is at a distance of x from the free end. If we just take the resultant of all the forces on the left of the X-section, then
So if we just plot the equation No. (1), then it will give a straight line relation. Bending Moment at X-X is obtained by treating the load to the left of X-X as a concentrated load of the same value acting through the centre of gravity.
Therefore, the bending moment at any cross-section X-X is
The above equation is a quadratic in x, when B.M is plotted against x this will produces a parabolic variation.
The extreme values of this would be at x = 0 and x = l
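The equations referred to above as (1) and (2) do not appear in the extracted text; under the sign convention already used they are the standard results:

$$ F = -w\,x \qquad \text{...(1)} $$
$$ M = -w\,x \cdot \frac{x}{2} = -\frac{w\,x^{2}}{2} \qquad \text{...(2)} $$

so that at x = 0, F = 0 and M = 0, while at x = l, F = -wl and M = -wl^2/2.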
Hence S.F and B.M diagram can be plotted as follows:
4. Simply supported beam subjected to a uniformly distributed load [U.D.L].
The total load carried by the span would be wl.
By symmetry the reactions at the end supports are each wl/2
If x is the distance of the section considered from the left hand end of the beam.
S.F at any X-section X-X is
Giving a straight relation, having a slope equal to the rate of loading or intensity of the loading.
The bending moment at the section x is found by treating the distributed load as acting at its centre of gravity, which at a distance of x/2 from the section
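The missing equations (1) and (2) for this case are, in their standard form (x measured from the left support):

$$ F = \frac{w\,l}{2} - w\,x \qquad \text{...(1)} $$
$$ M = \frac{w\,l}{2}\,x - w\,x \cdot \frac{x}{2} = \frac{w\,l}{2}\,x - \frac{w\,x^{2}}{2} \qquad \text{...(2)} $$

giving a maximum bending moment of wl^2/8 at x = l/2, where the shear force is zero.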
So the equation (2) when plotted against x gives rise to a parabolic curve and the shear force and bending moment can be drawn in the following way will appear as follows:
When the beam is subjected to couple, the shear force and Bending moment diagrams may be drawn exactly in the same fashion as discussed earlier.
6. Eccentric loads.
When the beam is subjected to eccentric loads, the eccentric loads are to be changed into a couple/force as the case may be. In the illustrative example given below, the 20 kN load acting at a distance of 0.2 m may be converted to an equivalent 20 kN force and a couple of 2 kN.m. Similarly, a 10 kN force which is acting at an angle of 30° may be resolved into horizontal and vertical components. The rest of the procedure for drawing the shear force and bending moment diagrams remains the same.
6. Loading changes or there is an abrupt change of loading:
When there is an aabrupt change of loading or loads changes, the problem may be tackled in a systematic way.consider a cantilever beam of 3 meters length. It carries a uniformly distributed load of 2 kN/m and a concentrated loads of 2kN at the free end and 4kN at 2 meters from fixed end.The shearing force and bending moment diagrams are required to be drawn and state the maximum values of the shearing force and bending moment.
Consider any cross section x-x, at a distance x from the free end
Shear Force at x-x = -2 -2x 0 < x < 1
S.F at x = 0 i.e. at A = -2 kN
S.F at x = 1 = -2-2 = - 4kN
S.F at C (x = 1) = -2 -2x - 4 Concentrated load
= - 2 - 4 -2x1 kN
= - 8 kN
Again consider any cross-section YY, located at a distance x from the free end
S.F at Y-Y = -2 - 2x - 4 1< x < 3
This equation again gives S.F at point C equal to -8kN
S.F at x = 3 m = -2 -4 -2x3
= -12 kN
Hence the shear force diagram can be drawn as below:
For bending moment diagrams – Again write down the equations for the respective cross sections, as consider above
Bending Moment at xx = -2x - 2x.x/2 valid upto AC
B.M at x = 0 = 0
B.M at x =1m = -3 kN.m
For the portion CB, the bending moment equation can be written for the x-section at Y-Y .
B.M at YY = -2x - 2x.x/2 - 4( x -1)
This equation again gives,
B.M at point C (i.e. at x = 1) = -2(1) - 1 - 0
= -3 kN.m
B.M at point B i.e. at x = 3 m
= - 6 - 9 - 8
= - 23 kN-m
The variation of the bending moment diagrams would obviously be a parabolic curve
Hence the bending moment diagram would be
7. Illustrative Example :
In this there is an abrupt change of loading beyond a certain point thus, we shall have to be careful at the jumps and the discontinuities.
For the given problem, the values of reactions can be determined as
R2 = 3800N and R1 = 5400N
The shear force and bending moment diagrams can be drawn by considering the X-sections at the suitable locations.
8. Illustrative Problem :
The simply supported beam shown below carries a vertical load that increases uniformly from zero at one end to the maximum value of 6 kN/m of length at the other end. Draw the shearing force and bending moment diagrams.
Determination of Reactions
For the purpose of determining the reactions R1 and R2 , the entire distributed load may be replaced by its resultant which will act through the centroid of the triangular loading diagram.
So the total resultant load can be found like this-
Average intensity of loading = (0 + 6)/2
= 3 kN/m
Total Load = 3 x 12
= 36 kN
Since the centroid of the triangle is at a 2/3 distance from the zero-intensity end, the resultant acts at 2/3 x 12 = 8 m from the left end support.
Now taking moments or applying conditions of equilibrium
Note: however, this resultant can not be used for the purpose of drawing the shear force and bending moment diagrams. We must consider the distributed load and determine the shear and moment at a section x from the left hand end.
Consider any X-section X-X at a distance x, as the intensity of loading at this X-section, is unknown let us find out the resultant load which is acting on the L.H.S of the X-section X-X, hence
So consider the similar triangles
OAB & OCD
In order to find out the total resultant load on the left hand side of the X-section
Find the average load intensity
Now these loads will act through the centroid of the triangle OAB, i.e. at a distance 2/3 x from the left hand end. Therefore, the shear force and bending moment equations may be written as
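Assuming R1 = 12 kN denotes the reaction at the left (zero-intensity) end and R2 = 24 kN the reaction at the right end, the load intensity at the section is w(x) = 6x/12 = x/2 kN/m, and the resultant load to the left of X-X is (1/2)(x)(x/2) = x^2/4 kN acting at x/3 from the section. The equations then become:

$$ F = 12 - \frac{x^{2}}{4} \ \text{kN}, \qquad M = 12\,x - \frac{x^{3}}{12} \ \text{kN·m} $$

The maximum bending moment occurs where F = 0, i.e. at x = \sqrt{48} ≈ 6.93 m, giving M ≈ 55.4 kN·m.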
9. Illustrative problem :
In the same way, the shear force and bending moment diagrams may be attempted for the given problem
10. Illustrative problem :
For the uniformly varying loads, the problem may be framed in a variety of ways, observe the shear force and bending moment diagrams
11. Illustrative problem :
In the problem given below, the intensity of loading varies from q1 kN/m at one end to q2 kN/m at the other end. This problem can be treated by considering a U.d.l of intensity q1 kN/m over the entire span and a uniformly varying load of 0 to (q2 - q1) kN/m over the entire span, and then superimposing the two loadings.
Point of Contraflexure:
Consider the loaded beam as shown below along with the shear force and bending moment diagrams for it. It may be observed that in this case the bending moment diagram is completely positive, so that the curvature of the beam varies along its length but is always concave upwards, or sagging. However, if we consider again a loaded beam as shown below along with the S.F and B.M diagrams, then
It may be noticed that for the beam loaded as in this case,
The bending moment diagram is partly positive and partly negative. If we plot the deflected shape of the beam just below the bending moment diagram,
this diagram shows that the L.H.S of the beam 'sags' while the R.H.S of the beam 'hogs'.
The point C on the beam where the curvature changes from sagging to hogging is a point of contraflexure.
It corresponds to a point where the bending moment changes sign; hence, to find the points of contraflexure, note that the B.M changes its sign where it cuts the X-axis. Therefore, to get the points of contraflexure, set the bending moment equation equal to zero. The fibre stress is zero at such sections.
Note: there can be more than one point of contraflexure.
This section covers mathematical expressions and number types in Amzi! Prolog.
The various math symbols, such as + - * /, are defined as operators in Prolog. See section on operators for discussion of operators in Prolog. This means you can write expressions such as:
5 + 3 / (2 * 6)
As such, these are just Prolog structures, no different from any other structures represented using operator syntax.
?- X = 3 + 2.
X = 3 + 2
yes

?- 5 = 3 + 2.
no
In other words, for most Prolog processing, the term 3 + 2 (internally +(3,2)) is treated no differently than say pet(duck,leona).
To evaluate mathematical expressions, you need the built-in predicate is/2. The mathematical comparison operators also do mathematical evaluation.
is/2 succeeds if X can be unified with the value of Y evaluated as an arithmetic expression.
?- X is 3 + 2.
X = 5
yes

?- 5 is 3 + 2.
yes

?- X is (3 + 2) / 4 * 6.
X = 7.5
yes

?- X = 3.1 + 2, Y is X.
X = 3.1 + 2
Y = 5.1
==>Note that there is no 'assignment' in Prolog, and variables always unify, so you cannot say X is X + 1 because it is impossible for the two Xs to unify. You can say XX is X + 1, and proceed to use the new variable.
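For instance (a small illustration; the exact layout of the printed bindings may differ slightly in your listener):

% X is already bound to 1, so X cannot also unify with 2.
?- X = 1, X is X + 1.
no

% Use a new variable for the incremented value instead.
?- X = 1, XX is X + 1.
X = 1
XX = 2
yes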
Each of the following operators first performs mathematical evaluation of both sides, as described above, and then does a mathematical comparison of the results. These operators should be used for comparing numbers, even if they are not part of an expression.
Unification, either implicit between goals and heads of clause, or explicit with =/2, may fail for different numeric types that represent the same number. For this reason unification should NOT be used to test for equality of two numbers.
% An integer doesn't unify with a float.
?- 1 = 1.0e0.
no

% But 1 is mathematically equal to 1,
% no matter how it's stored.
?- 1 =:= 1.0e0.
yes

% These are two different,
% non-evaluated expressions.
?- 3 + 4 = 2 + 5.
no

% But evaluated they are arithmetically equal.
?- 3 + 4 =:= 2 + 5.
yes
X >= Y
Evaluate each side and test for greater than or equal.
X =< Y
Evaluate each side and test for less than or equal.
X > Y
Evaluate each side and test for greater than.
X < Y
Evaluate each side and test for less than.
X =:= Y
Evaluate each side and test for numerical equality.
X =\= Y
Evaluate each side and test for numerical inequality.
X =~= Y
Evaluate each side and test if the two sides are almost equal. Useful for comparing non-integer values.
There are a number of mathematical operators that can be used in evaluable mathematical expressions.
X + Y
Sum of values of X and Y.
X - Y
Value of X minus value of Y.
- X
Evaluates to the negative of X evaluated.
X * Y
Value of X multiplied by value of Y.
X / Y
Value of X divided by value of Y.
Evaluates to X raised to the Y power. When X is a fractional real (infinite precision) number, the precision of the result is limited by the Prolog flag, epsilon.
Integer division of X by Y means the result is truncated to the absolute integer.
?- X is 11 // 4. X = 2 yes ?- X is -11 // 4. X = -2 yes
Integer division with a rounded rather than truncated answer, or more formally, such that the remainder is >= -Y/2 and < Y/2.
?- X is 11 divs 4.
X = 3
yes

?- X is 11 divs 3.
X = 4
yes

?- X is -11 divs 4.
X = -3
yes

?- X is 13 divs 4.
X = 3
yes
Integer division that is truncated. Works the same as // for positive integers, but for negative ones it ensures that the result times Y is less than X, or stated another way, the remainder is always positive.
?- X is 11 divu 4.
X = 2
yes

?- X is 13 divu 4.
X = 3
yes

?- X is -11 divu 4.
X = -3
yes
The maximum of X and Y.
?- X is max(5,4). X = 5 yes
The minimum of X and Y.
?- X is min(5,4). X = 4 yes
The positive remainder after dividing the value of X by the value of Y. Corresponds with divu.
?- X is 11 mod 4. X = 3 yes ?- X is -11 mod 4. X = 1 yes
The remainder after rounded integer division (divs), or more formatlly, constrained so that the result is >= -Y/2 and < Y/2.
?- X is 11 mods 4. X = -1 yes ?- X is -11 mods 4. X = 1 yes ?- X is 13 mods 4. X = 1 yes
The remainder, constrained so that the result is positive, corresponding to divu.
?- X is 11 modu 4. X = 3 yes ?- X is -11 modu 4. X = 1 yes ?- X is 13 modu 4. X = 1 yes
For the following bitwise operators the arguments must be integers.
Bitwise "and" of value of X and value of Y.
?- X is 1 /\ 2.
X = 0
yes

?- X is 1 /\ 3.
X = 1
yes

?- X is 0xffffffff /\ 47.
X = 47
yes

?- X is -1 /\ 47.
X = 47
yes

?- X is 0xfffe /\ 47.
X = 46
yes
Bitwise "or" of value of X and value of Y.
?- X is 1 \/ 3. X = 3 yes ?- X is 1 \/ 2. X = 3 yes
Evaluates to X bit-shifted left by Y places.
?- X is 1 << 4. X = 16 yes
Evaluates to X bit-shifted right by Y places.
?- X is 16 >> 3. X = 2 yes
Evaluates to the bitwise complement of X (i.e., all those bits which were 1 become 0 and vice versa).
?- X is \ 1. X = -2 yes ?- X is \ -2. X = 1 yes
Evaluates to X exclusively or'd with Y.
?- X is 3 xor 2. X = 1 yes
The trigonomety functions all work with radians. You can use the built-in constants, degtorad and radtodeg to convert from radians to degrees and back.
?- X is sin( 30 * degtorad ).
X = 5.000000e-001
yes

?- X is radtodeg * asin( 1/2 ).
X = 3.000000e+001
yes
The trigonometry functions all work internally using double precision floating point numbers. If the 'decimals' Prolog flag is set to 'real', then the answer is converted to a real, but there will only be 15 digits of accuracy.
sin evaluates to the sine of X (in radians).
cos evaluates to the cosine of X (in radians)
tan evaluates to the tangent of X (in radians).
asin evaluates to the angle (in radians) whose arcsine is X.
acos evaluates to the angle (in radians) whose arccosine is X.
atan evaluates to the angle (in radians) whose arctangent is X.
abs evaluates to the absolute value of X.
ceiling evaluates to the smallest integer >= X.
exp evaluates to e raised to the power of X evaluated.
float converts X to a double precision floating point number.
floor evaluates to the largest integer =< X.
integer converts X to an integer (truncating any fractional part).
ln and log both evaluate to the natural log (loge()) of X evaluated. Use log10(X) for log base 10.
log10 evaluates to the base-10 logarithm (log10()) of X evaluated.
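A small example (the printed float format shown here assumes the default 'floats' and 'decimals' flag settings, so your output may look slightly different):

?- X is log10(1000).
X = 3.000000e+000
yes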
real converts X to a real number.
round rounds X to the nearest integer and returns that value.
sign evaluates to 1 for positive numbers and -1 for negative numbers.
sqrt evaluates to the square root of X. When X is a real (infinite precision) number, the fractional precision is limited by the setting of the Prolog flag, epsilon.
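A few combined examples of these functions (illustrative only; the exact numeric display of the non-integer results depends on the floats/decimals flag settings):

?- X is abs(-7).
X = 7
yes

?- X is floor(3.7), Y is ceiling(3.2), Z is round(3.5).
X = 3
Y = 4
Z = 4
yes

?- X is sqrt(16).
X = 4.000000e+000
yes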
There are a number of built-in atoms which have predetermined values that can be used in arithmetic expressions.
?- T1 is cpuclock, dothing, T is (T1 - cpuclock)/1000, write(time:T).
?- T1 is cputime, dothing, T is T1 - cputime, write(time:T).
?- X is inf. X = 1.#INF
ROLL is integer( random*6 + 1 )
seed_random/1 is a predicate that seeds the random number generator with an integer argument. Random numbers, by default, always start from the same seed. This is often good for generating repeating test runs of an application. But non-test use of the program might require different random sequences each time. Here is one way to generate a unique seed at the start of a program:
main :-
   date(YEAR,MON,DAY),
   time(HOUR,MIN,SEC),
   SEED is YEAR + MON + DAY + HOUR + MIN + SEC + cpuclock,
   seed_random(SEED),
   start_work...
Internally, Amzi! uses five types of numbers (see overview):
Only one type of float is used in a session, either single or double, depending on how the Prolog flag, floats, is set. Single precision floats can be stored in an internal Prolog cell, which is efficient, whereas double precision floats require their own storage.
Both types of reals are used. The fixed reals are just a special case of a real that can fit into an internal Prolog cell, which is more efficient, rather than requiring its own storage.
The following predicates that can be used for type testing of numbers.
numeric_type/2 returns the type, T, of the number N. The type returned can be: integer, single_float, double_float, float, fixed_real, long_real, real. On backtracking, both the specific and more general types will be returned.
?- numeric_type(3.3e, T).
T = single_float ;
T = float ;
no

?- numeric_type(3.3r, T).
T = fixed_real ;
T = real ;
no
numeric_type/2 fails if N is not a number, or if T is specified with the wrong type.
Here is an example, showing the use of the Prolog flags, decimals and floats, and numeric_type.
?- current_prolog_flag(decimals, D).
D = float
yes

?- current_prolog_flag(floats, F).
F = single
yes

?- X = 3.3, numeric_type(X,T).
X = 3.3
T = single_float
yes

?- set_prolog_flag(decimals, real).
yes

?- X = 3.3, numeric_type(X,T).
X = 3.3
T = fixed_real
yes

?- X = 3.3e, numeric_type(X,T).
X = 3.3
T = single_float
yes
integer/1 succeeds if N is of numeric type 'integer'. Note that integer/1 is a type test. To test if a decimal number is mathematically an integer, use is_integer/1.
?- integer(3). yes ?- integer(3.0e). no
is_fraction/1 succeeds if N is mathematically a fraction.
is_integer/1 succeeds if N is mathematically an integer, so both 3 and 3.0 succeed as arguments.
?- is_integer(3). yes ?- is_integer(3.0e). yes
is_odd/1 succeeds if N is mathematically an odd number.
is_number/1 succeeds if N is a number.
float/1 succeeds if N is of numeric type 'float', either single or double.
single_float/1 succeeds if N is of numeric type 'single_float'.
double_float/1 succeeds if N is of numeric type 'double_float'.
real/1 succeeds if N is of numeric type 'real', either fixed or long.
fixed_real/1 succeeds if N is of numeric type 'fixed_real'.
long_real/1 succeeds if N is of numeric type 'long_real'.
Mixed mode expressions, involving integers and/or floats and/or reals, will promote the result to the more complex type. This means that the result of an expression will only be an integer if all the variables are integers, and any functions called can return an integer. Some functions, such as trigonometric ones, can only return floating point or real values.
Mathematical expressions involving reals will always see if the answer can be stored as a fixed real instead of as a long real.
Mathematical expressions involving floats are always calculated using double precision, but the result is stored as either single or double depending on the setting of the Prolog flag, 'floats'.
Mixed mode involving floats and reals will promote to whichever one is specified as the default in the 'decimals' Prolog flag.
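For example (using the 'r' suffix for a real literal, as in the numeric_type examples above; the exact formatting of the printed bindings depends on the flag settings):

% integer + float promotes to a float
?- X is 2 + 3.5.
X = 5.5
yes

% integer + real promotes to a real
?- X is 2 + 3.5r, numeric_type(X, T).
X = 5.5
T = fixed_real
yes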
Thanks to our resident mathematician, Ray Reeves, there are a number of features in Amzi! Prolog designed for mathematical experimentation. Many of these are illustrated in his library of samples, ChezRay. Other comments follow:
Using arithmetic on the domain of integers (Z) it is a straightforward matter to perform calculations that do not involve numbers exceeding 32 bits (including the sign bit). However, many expressions involve the quotient of two integers and even though the sought-for solution is within that range the sub-expressions may exceed it. That is where modulo arithmetic comes in.
It may come as a surprise to hear that if all calculations are performed modulo some prime, and the true solution is less than that prime, then the result will be the same.
To this end, Amzi! Prolog has a 'modulo' flag which defaults to zero but can be set to some integer. If it is set to an integer M not zero, then the arithmetic operations +, -, * will be performed modulo M. You will notice that // was not included among those operators. This is because when M is prime the operations are being performed on an integer ring, and for every element in that ring there is a unique inverse. Multiplying a number by its inverse corresponds to division in Z arithmetic.
Built in to Amzi! Prolog is a set of primes which are as large as possible and of a particularly useful form for abstract fourier transforms, called fourier primes.
At present, the Index runs from 1 to 11, the highest first. The inverse of an element E is found with the (bilateral) primitives:
inverse(E, M, Inverse)
inverse(E, Inverse)
In the second case the modulo flag value is used for M.
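For example, 3 * 5 = 15 = 2*7 + 1, so 5 is the inverse of 3 modulo 7 (the binding layout shown is assumed):

?- inverse(3, 7, X).
X = 5
yes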
Some typical applications of the method are nPr/2, nCr/2, mS1r/2 and mS2r/2 from my integer library, which count permutations, combinations and subsets. It is also used in the solution of simultaneous equations with integer coefficients, by means of determinants.
If the solution itself is not within the 32-bit range the method can still be used with a set of primes, and from the set of solutions so obtained the Chinese Remainder Algorithm (cra) can be used to find the large unique solution. This works provided the product of the primes employed is greater than the solution, which in turn requires some insight into the probable result.
To avoid that there is a cra which increments the prime set automatically until redundancy is detected. The primes are drawn from the set of fourier primes. When the result is large it cannot be denoted by a single integer, so it is presented as a list of gigadigits, which are integers less than 1 billion.
Each gigadigit represents nine decimal digits, and if there are less than that a padding of prefixed zeroes is implied. This format is used because it conforms to a denotation of a 'real' integer, which is a new data type that Amzi! supports for large numbers. Thus the cra can be embedded in real data expressions, if required, or simply printed out in a real number format.
Amzi! Prolog allows arithmetic of high precision. Real numbers are kept in a non-standard floating point format with a mantissa and exponent. The mantissa is an integer in an array of up to 2048 32-bit gigadigits. The exponent is a signed integer in twelve bits.
For fun, run the following program which goes forever calculating factorials of increasingly large numbers. You'll need to [CTRL-Break] to end it. Note that the G exponents refer to 9 zeroes, not one, 1g1 is 1.0e9.
main :- go(1,1).

go(N,T) :-
   write(N:T), nl,
   NN is N + 1,
   TT is T * NN,
   go(NN, TT).
A real number is said to be normalized if there are no leading zeros and no trailing zeros in the mantissa, unless the real number is actually zero. The result of any real arithmetic operation is normalized.
The base of the exponentiation is 1,000,000,000 (1 billion), so the numbers in the exponent are typically small. Its purpose is to efficiently pack large numbers with leading or trailing gigadigits that are zeroes.
A gigadigit is a positive integer less than 1 billion. Think of the decimal digits in the denotation as the name of that gigadigit. There are 1 billion names.
Gigadigits do not totally exploit 32-bit words but this has practical advantages because they facilitate the interface between decimal numbers and 32-bit words. Given an ordinary arithmetic number, each block of nine decimal digits in either direction from the decimal point denotes a gigadigit. Conversely, displaying a real number is just a concatenation of the displays of the individual digits.
Generally, familiar integer operations will work on real integers, but 'for' (and maybe other things) have a limited range of 1 gigadigit.
Integers and floats are still supported in addition to reals and are denoted the same way as before, but denotation of a number beyond the range of an integer will automatically produce a real. Integer or float denotations ending in 'G' will also produce a real.
In addition, there is an exponential notation using the letter 'g' to denote reals. Thus 1g-1 denotes 0.000000001, and 1g2 succintly denotes 1 000000000 000000000.
A real number may also be denoted by a prolog list of gigadigits (bounded integers) and optionally containing a decimal point ('.'), and optionally containing a leading negative sign ('-'). The advantage is that no leading (decimal) zeroes in the gigadigits are required or displayed.
Real numbers are normally displayed in ordinary arithmetic style, with no punctuation or spaces. However, display/1 will display them as lists, so that long numbers are more easily read. Remember that each element in such a list denotes nine decimal digits, so if less than that are displayed then there are implicit leading zeros. eg:
display([1,1]*[1,1]). [1,2,1] % ie: 1000000002000000001
As always, integer/1 succeeds only if the argument is of type integer. real/1 is a predicate that succeeds if the argument is of type real. To determine if a number (real, float or integer) is a mathematical integer use is_integer/1. is_fraction/1 is a predicate that succeeds if the argument is of type real (or float) and it's exponent is negative.
Real bit operations are limited to xor, but the predicate is_odd/1 will check the last bit of an integer or real in column zero.
Real number division can produce an infinite length, so the quotient length is limited to: length(num) + length(denom) + delta where delta is the current value of the set_prolog_flag delta (initially 2). ie. the precision of the quotient is limited to the precision of the given arguments plus delta.
truncate/2 attempts to derive a real from a real with reduced length such that the new exponent is not less than that defined by the set_prolog_flag epsilon (initially -2). However, it will not reduce the number of gigadigits to be less than two. The main purpose of epsilon is to provide the user with a stopping criterion when generating convergent series.
Mixed mode arithmetic involving a real and an integer or float works by internally promoting all operands to reals, if necessary. There are explicit integer_real/2 and float_real/2 bilateral primitives for the user. Note: converting from real to integer or float may not succeed.
It is anticipated that real arrays may be useful beyond real number representation, and then a way of indexing into the array may be needed. The tool for this purpose is:
nth(Index, Real, Gigit)
?- nth(1, 234093420983203.24309823409823490823409g, X).
X = 234090000
The internal format of a real array is:
Index:     0           1     ...   Length
Contents:  Descriptor  LSG   ...   MSG
where Descriptor is packed with Exponent, Sign and Length, LSG is the least significant mantissa gigadigit, MSG is the most significant mantissa gigadigit.
Thus, in nth, the acceptable range of Index for gigadigits is the closed interval 1 to Length, and Index == 0 gets the Descriptor. nth/3 can also be applied to lists. Descriptor may be unpacked with: realDescr(Descriptor, Length, Exponent, Sign).
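A hedged sketch (ours) combining the two calls on the value shown earlier; we have not asserted concrete field values since the packing is internal:
?- nth(0, 234093420983203.24309823409823490823409g, D),
   realDescr(D, Length, Exponent, Sign).
% D is the packed descriptor; Length is the number of mantissa gigadigits,
% Exponent the gigadigit exponent and Sign the sign of the real.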
Generates offline array of primes up to N.
Each new allocation will free any old one, so prime is a unique array that needs no handle to access.
HighPrime is the prime at the LastIndex of the prime array.
Prime is the prime at the Index of the prime array.
Naive test for prime, using asserted prime array.
Naive trial and error factoring using prime array.
F is a list of the prime factors of N, in the form: [P**Exponent, ... ].
?- M = 2130706433, M1 is M - 1, primeFactors(M1,X).
X = [2 ** 24,127 ]
These built-in predicates support rational number arithmetic, where a rational number is represented by a numerator and denominator using the '/' operator. The predicates always return the simplest form of the rational number.
prodq/3 - can be used to multiply or divide rational numbers. The first two arguments are the multiplicands, the last the product. At least two of the three must be bound to either a rational or integer number.
sumq/3 - can be used to add and subtract rational numbers.
compareq/3 - compares rational numbers for testing. The first two arguments are the rational numbers, the third is an operator indicating the results of the comparison.
?- prodq(2/3,1/2,X). X = 1/3 yes
?- prodq(X,1/2,1/3). X = 2/3 yes
?- prodq(1/2,X,1/3). X = 2/3 yes
?- prodq(16/32,32/64,X). X = 1/4 yes
?- compareq(2/3,4/6,OP). OP = = yes
?- compareq(2/3,1/2,OP). OP = > yes
?- compareq(2/3,4/6,>). no
?- sumq(2/3,5/6,X). X = 3/2 yes
?- sumq(X,5/6,3/2). X = 2/3 yes
A simple continued fraction of length n may be denoted by a Prolog list of integers in the following form:
[cf, a0, a1, ... an]
The atom cf is there to distinguish the cf list from an evaluable list of gigadigits.
A continued fraction can be produced from a rational fraction q by q_cf/2 , which is a bilateral relationship.
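For example (our worked value; the output form assumes the list convention above and the argument order suggested by the name), 45/16 expands as 2 + 1/(1 + 1/(4 + 1/3)), so:
?- q_cf(45/16, CF).
CF = [cf, 2, 1, 4, 3]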
Irrational numbers may be represented by infinite continued fractions which repeat over the last p integers, where p is the length of the periodic part. Therefore they can be finitely represented. This is done here by denoting the periodic part as a sub-list:
[cf, a0, a1, ... [p0, p1, ...]]
The program rootn.pro denotes the irrational square roots of non-square integers up to 50 in this way, with sqrtcf/2. Such a representation can be transformed to a rational with q_cf/2 and the rational evaluated as a decimal with is/2.
Since the evaluation of a periodic continued fraction does not terminate, we use the Prolog flag epsilon to determine the desired precision. Be aware that the above procedure yields an approximation limited by epsilon, so the user should call truncate/2 (which also consults epsilon) on the result to extract the valid part.
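A hedged sketch of the whole pipeline for the square root of 2, whose continued fraction is [1; 2, 2, 2, ...]; the exact representation returned by sqrtcf/2 is our assumption:
?- sqrtcf(2, CF),        % e.g. CF = [cf, 1, [2]]
   q_cf(Q, CF),          % rational approximation, cut off according to epsilon
   X is Q,               % evaluate the rational as a decimal real
   truncate(X, Root2).   % keep only the part that is valid to the epsilon precision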
Copyright ©1987-2011 Amzi! inc. All Rights Reserved. Amzi! is a registered trademark and Logic Server is a trademark of Amzi! inc. | http://www.amzi.com/manuals/amzi/pro/ref_math.htm | 13 |
62 | Solar Panel Mounting Angle
Find out how to position a solar panel to maximise power output.
A photovoltaic solar panel will generate the most electricity when solar radiation hits it directly - i.e. the sun's rays are hitting the panel perpendicularly (at a 90 degree angle, face on). As the sun appears to move across the sky from east to west through the day, and it appears to move up and then down in the sky as well, an optimum fixed mounting position must be found for a solar array to collect the maximum amount of energy possible.
The Solar Spreading Out Effect
The diagram above demonstrates the solar spreading out effect. As the sun goes lower in the sky, the solar radiation hits the ground at an angle - the solar elevation angle (SEA). As the angle of the sun above the horizon goes down, the concentration of solar radiation hitting a unit area on the ground goes down too.
Imagine we have one square metre of sunshine hitting the Earth. When the sun is directly overhead (perpendicular to the ground), that one square metre will illuminate exactly one square metre on the ground. As the sun moves down toward the horizon, that same one square metre of sunshine will illuminate more than one square metre on the ground, but at a lower intensity since the same amount of solar radiation has been spread out over a larger area.
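As a quick worked example of the same effect: when the sun is 30 degrees above the horizon, one square metre of sunshine is spread over 1/sin(30) = 2 square metres of ground, so each square metre of ground receives only half the intensity it would receive with the sun directly overhead.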
As the diagram above shows in one dimension, as the sun moves lower, the length of the illuminated area on the ground increases: x (perpendicular - 90 degree SEA) < y (60 degrees SEA) < z (45 degrees SEA).
Trigonometry Calculations of the Solar Spreading Out Effect
When it comes to solar panels, what we want to know is how the angle at which the sun hits the panel affects power output. Going back to our one-dimensional diagram and making some calculations, we can do exactly that:
To keep things simple we have used a unit (1) width for the solar radiation hitting a solar panel located flat on the ground when the sun is perpendicular to the panel. First making the pink triangle, and then the blue, we find that the 1 unit width of solar radiation hitting the solar panel is stretched, when the sun hits the panel at the angle SEA, to:
width = 1 / ( sin(90-SEA) * tan(SEA) )
Putting some numbers into this equation, we see that if SEA is 60 degrees, the answer is 1.15. So, 1 unit length of solar radiation has been stretched to 1.15 units, or the amount of power that will be generated compared to when the sun was perpendicular is 1/1.15 = 0.87, a power reduction of 13%.
Solar Spreading Effect on Solar Panels
Of course in reality, almost no-one has their solar panels lying flat on the ground. They will instead typically be installed on a sloping roof, or on brackets which give the same pitch (aka tilt angle). Therefore, for real world scenarios we need to take into account the tilt angle of the solar panels when making these calculations.
The diagram above shows an alternative method of calculating the solar spreading effect in one dimension. Note that this time the notation is a bit different - e.g. |90-SEA|, where the || symbols indicate that we must turn any negative result into a positive result of the same magnitude - e.g. if SEA were 110, 90-110 is -20, but |90-110| is +20, the difference between the two angles.
Solar panels are fitted with a tilt angle in order to point the panels closer to the direction of the sun. Therefore we need to ADD the tilt angle to the solar elevation angle to get the angle that the sun is elevated from the solar panel - for example, if the tilt angle is 30 degrees, and the SEA is 20 degrees, we get 30+20 = 50 degrees (50 degrees from the solar panel face, 40 degrees from perpendicular).
If the tilt angle is instead 45 degrees (typical UK roof pitch) and the SEA is 60 degrees (noon on a summer's day), then the result is 60+45 = 105, which is |90-105| = 15 degrees from perpendicular.
The above diagram shows how all this comes together in the final equation which takes into account the solar panel tilt angle as well as the solar elevation angle:
width = tan( |90-SEA-tilt| ) / sin( |90-SEA-tilt| )
Putting some numbers into the final equation, we see that if the SEA is 15 degrees (UK winter at noon) and the tilt angle is 45 degrees, then the resulting width is:
tan (|30|)/ sin (|30|) = 1.154.
...and if the SEA is 15 degrees and the tilt angle is 60 degrees, then the resulting width is:
tan (|15|)/ sin (|15|) = 1.035.
...which shows that seasonal adjustment of solar panels increases power generation: increasing the tilt angle from the usual 45 degrees to 60 degrees in the winter results in around 11% more generation.
Two Dimensional Power Calculations
All these calculations have so far been in one dimension, focusing on the changing height of the sun in the sky and the tilt angle of the solar panel. There is of course a second factor - the sun moves across the sky from East to West. Fortunately the exact same equation can be used again to work out the effect of this movement across the sky.
Let's say we have a solar panel installed facing due South with no longitudinal tilt, and the sun is shining in the South East. A line drawn perpendicular to the solar panel points due South, and the angle between it and South East is 360/8 = 45 degrees. The width of our one unit wide beam of solar radiation on the solar panel will therefore be equal to:
tan (|90-45|) / sin (|90-45|) = 1.41, so we are getting 1/1.41 = 71% of the power we'd get if the sun were in the South.
When taking all the factors into account, we have solar spreading in the vertical direction and in the horizontal direction. Our perfect one square unit area of solar radiation therefore hits our angled solar panel as a rectangle whose size tells us how much power we are getting relative to the solar panel pointing directly at the sun.
For example, if we have the sun at 15 degrees above the horizon and a tilt angle of 45 degrees, and the solar panel is installed facing due South, but the sun is in the South East, we multiply the two widths previously calculated to give us the area of the rectangle of sunlight hitting the solar panel:
1.154 * 1.41 = 1.632
...therefore in these conditions, the solar panel will generate 100 * (1/1.632) = 61.3% of the power it would have been generating if it were pointing directly at the sun. Such large losses in generation potential are why some people use solar trackers
(discussed later in this article).
The Effect of the Atmosphere on Solar Generation
When the sun is low in the sky, in addition to the spreading out effect, solar radiation
has to pass through a lot more of the atmosphere than when the sun is at its highest in the sky at noon. This further reduces the amount of solar radiation which gets to the solar panel - for example, by around 20% when the sun is 30 degrees above the horizon, and by 10% when the sun is 45 degrees above the horizon.
Here in the high latitudes of the UK, even in the South of England there are only a few days per year when the sun manages to get over 60 degrees at noon, and whole months when it doesn't even get up to 15 degrees at noon meaning that more than 40% of the solar radiation is absorbed by the air mass
before it even gets to your solar panel at noon on a winter's day. Sadly there is not a lot you can do about this short of moving your solar panels up a high mountain so that the solar radiation is only passing through the thin high atmosphere. (This will also give benefits thanks to the effect of temperature on solar panels.)
For a lot more information, see the article on the Effect of Air Mass on Solar Panels.
Mounting Angle for Solar Panels
In general photovoltaic solar panels should be mounted at an angle of 10 to 15 degrees plus the site's latitude. Therefore in London, which has a latitude of around 51 degrees, solar panels should ideally be mounted at an angle of approximately 65 degrees.
In the Northern Hemisphere, solar panels should face to the South where the sun is found at midday. Similarly in the Southern Hemisphere, solar panels should face to the North.
Seasonal Adjustment and Tracking
For all locations outside the tropics, if the panels are not fitted to a roof and cannot be moved, it is possible to greatly increase total annual power generation by adjusting the solar panel mounting angle
through the seasons so that it is steeper during the winter when the sun is low in the sky, and more shallow during the summer when the sun is relatively high in the sky.
A solar tracker is a device which swings solar panels so that they follow the sun's apparent motion across the sky during the day - either in one dimension from east to west, or in two dimensions from east to west and up and down. This can increase daily power output by as much as 30%, but it introduces mechanical components to the system which are liable to fail and require maintenance, and it can also add considerable cost.
It normally does not make financial sense to use solar tracking with solar systems with a rated power output in excess of a few hundred watts. See the separate articles for more information on solar tracking and for details of a simple solar tracking concept.
| http://www.reuk.co.uk/Solar-Panel-Mounting-Angle.htm | 13
50 | According to the American Statistical Association, statistics is the scientific application of mathematical principles to the collection, analysis, and presentation of numerical data. Statisticians design surveys and experiments; collect, process, and analyze data; and interpret the results. Statisticians also work with experts in other fields. For example, statisticians can provide guidance in determining what information is reliable and which predictions can be trusted. They often help search for clues to the solution of scientific mysteries and can keep investigators from being misled by false impressions. The application of statistics can lead to a better and more complete understanding of the world and human activity.
Based on the statistics taught in their K-12 mathematics classes, students should gradually come to believe and understand why:
- Data beat anecdotes
- Variability in data is natural, predictable, and quantifiable
- Random sampling allows results of surveys and experiments to be extended to the population from which the sample was taken
- Random assignment in comparative experiments allows cause-and-effect conclusions to be drawn
- Association is not causation
Here we provide you with tools to help your grades 6-8 students develop a conceptual and practical understanding of statistical concepts and processes.
In Exploring Histograms
(ORC #1458, grades 5-9), students calculate mean, median, and mode and can use an applet to explore how the length of the interval affects the shape of a histogram and how outliers do or do not affect the measures of center. For more lesson ideas, see Histograms and Bar Graphs
(ORC #5098, grades 6-8).
Another lesson (ORC #5284, grades 5-8) explores the frequency of letters that occur in the names of the 50 states. Students represent these frequencies using a bar graph, stem-and-leaf plot, and box-and-whiskers plot.
We're All Tuned In
(ORC #274, grades 5-10) offers students a peek into how advertising agencies gather and use data to identify trends and to make recommendations to a client.
The rich problem presented in Boxes and Cats: Statistics from the Beginning
(ORC #9710, grades 5-8) challenges students to consider the question, Why is a box like a cat? Students collect data and learn how to represent the results using box-and-whiskers plots and stem-and-leaf plots. An extension of the problem and a complete discussion of the underlying statistical ideas are included.
The rich problem What Is the Average Birth Month?
(ORC #10113, grades 8-10) will engage students in analyzing data and exploring the meaning of "average" month. This problem is designed to lead students to a deeper understanding of basic statistical concepts.
Representation of Data - the U.S. Census
(ORC #9741, grades 6-8) provides another rich problem. Students analyze an illustration that presents data from the 1930 and the 1960 censuses to see what inferences they can draw from the data displays.
The news stories in NewsHour Extra: Economics
(ORC #2255, grades 7-12) provide background and ideas for classroom lessons applying economics, mathematics, and statistics related to contemporary issues and newsworthy situations.
The unit Collecting, Representing, and Interpreting Data Using Spreadsheets and Graphing Software consists of two lessons: Collecting and Examining Weather Data
(ORC #1442) and Representing and Interpreting Data
(ORC #1443). In the lessons, students in grades 3-7 compile weather data on a spreadsheet and then use the spreadsheet's graphing function to analyze and interpret the data.
In Representing Data: Whale Weight
(ORC #4214, grades 6-8), students examine the relationship between the length and weight of a whale. They perform simple unit conversions and use proportional reasoning to complete a table of the lengths and weights of whales and determine if the results can be generalized.
Accessing and Investigating Population Data features two lessons - National Population Projections
(ORC #1135) and State Population Projections
(ORC #1136) - in which students in grades 5-7 use the U.S. Census Bureau to investigate and analyze population data and trends. They also create graphs to compare and contrast information.
Exploring Linear Data
(ORC #3324, grades 8-12) lets students collect data and model linear data by constructing scatterplots. The students interpret data points and trends and investigate the line of best fit.
Functions Derived from Data
(ORC #10127, grades 8-12) presents another rich problem. The activities focus on identifying the type of function a data set might represent, finding the symbolic representation of a function through pattern building, and looking at behaviors of functions through a variety of real-world contexts.
Go to Math: Misleading Graphs
(ORC #6522, grades 5-8) to find examples of misleading graphs that illustrate how information can be distorted and misrepresented.
Set your students loose on the U.S. Census Bureau
website or the Quick Facts
page. See what they can find out about your area or any other area of the country!
Mean and Median
is an interactive histogram applet. When the students input the data forming the histogram, all the displayed statistical measures including standard deviation are updated.
Another linked resource is an introductory statistics tutorial appropriate for middle school students or anyone beginning to study statistics. Find more interactive mathematics activities from the BBC at KS2 Bitesize for students ages 7-11 and at KS3 Bitesize for students ages 11-14.
Salaries: Math Challenge #29
addresses the question, Will women ever earn as much money as men? It is one of eighty challenging mathematics activities developed for middle school students by the National Council of Teachers of Mathematics.
Data Analysis, Statistics, and Probability
is an online college-level course for elementary and middle school teachers. It features practical examples designed to develop the learner's understanding and use of data. A glossary of statistical terms is included.
What Is a Survey?
is an online ten-chapter booklet written for the layperson. It provides everything you need to know about surveys, from survey methods to the meaning of margin of error. | http://www.ohiorc.org/orcon/statisticstoday/68.aspx | 13 |
57 | Nullification Crisis
The Nullification Crisis was a sectional crisis during the presidency of Andrew Jackson created by South Carolina's 1832 Ordinance of Nullification. This ordinance declared by the power of the State that the federal Tariffs of 1828 and 1832 were unconstitutional and therefore null and void within the sovereign boundaries of South Carolina. The controversial and highly protective Tariff of 1828 (known to its detractors as the "Tariff of Abominations") was enacted into law during the presidency of John Quincy Adams. The tariff was opposed in the South and parts of New England. Its opponents expected that the election of Jackson as President would result in the tariff being significantly reduced.
The nation had suffered an economic downturn throughout the 1820s, and South Carolina was particularly affected. Many South Carolina politicians blamed the change in fortunes on the national tariff policy that developed after the War of 1812 to promote American manufacturing over its European competition. By 1828 South Carolina state politics increasingly organized around the tariff issue. When the Jackson administration failed to take any actions to address their concerns, the most radical faction in the state began to advocate that the state itself declare the tariff null and void within South Carolina. In Washington, an open split on the issue occurred between Jackson and Vice President John C. Calhoun, the most effective proponent of the constitutional theory of state nullification.
On July 14, 1832, after Calhoun had resigned the Vice Presidency in order to run for the Senate where he could more effectively defend nullification, Jackson signed into law the Tariff of 1832. This compromise tariff received the support of most northerners and half of the southerners in Congress. The reductions were too little for South Carolina, and in November 1832 a state convention declared that the tariffs of both 1828 and 1832 were unconstitutional and unenforceable in South Carolina after February 1, 1833. Military preparations to resist anticipated federal enforcement were initiated by the state. In late February both a Force Bill, authorizing the President to use military forces against South Carolina, and a new negotiated tariff satisfactory to South Carolina were passed by Congress. The South Carolina convention reconvened and repealed its Nullification Ordinance on March 11, 1833.
The crisis was over, and both sides could find reasons to claim victory. The tariff rates were reduced and stayed low to the satisfaction of the South, but the states’ rights doctrine of nullification remained controversial. By the 1850s, the expansion of slavery into the western territories and the threat of the Slave Power had become the central issues in the nation.
Since the Nullification Crisis, the doctrine of states' rights has been asserted again by opponents of the Fugitive Slave Act of 1850, proponents of California's Specific Contract Act of 1863 (which nullified the Legal Tender Act of 1862), opponents of Federal acts prohibiting the sale and possession of marijuana in the first decade of the 21st century, and opponents of the implementation of laws and regulations pertaining to firearms from the late 1900s up to 2013.
Background (1787 - 1816)
The historian Richard E. Ellis wrote:
“By creating a national government with the authority to act directly upon individuals, by denying to the states many of the prerogatives that they formerly had, and by leaving open to the central government the possibility of claiming for itself many powers not explicitly assigned to it, the Constitution and Bill of Rights as finally ratified substantially increased the strength of the central government at the expense of the states.”
The extent of this change and the problem of the actual distribution of powers between state and the federal governments would be a matter of political and ideological discussion up to the Civil War and beyond. In the early 1790s the debate centered on Alexander Hamilton's nationalistic financial program versus Jefferson's democratic and agrarian program, a conflict that led to the formation of two opposing national political parties. Later in the decade the Alien and Sedition Acts led to the states' rights position being articulated in the Kentucky and Virginia Resolutions. The Kentucky Resolutions, written by Thomas Jefferson, contained the following, which has often been cited as a justification for both nullification and secession:
“… that in cases of an abuse of the delegated powers, the members of the general government, being chosen by the people, a change by the people would be the constitutional remedy; but, where powers are assumed which have not been delegated, a nullification of the act is the rightful remedy: that every State has a natural right in cases not within the compact, (casus non fœderis) to nullify of their own authority all assumptions of power by others within their limits: that without this right, they would be under the dominion, absolute and unlimited, of whosoever might exercise this right of judgment for them: that nevertheless, this commonwealth, from motives of regard and respect for its co-States, has wished to communicate with them on the subject: that with them alone it is proper to communicate, they alone being parties to the compact, and solely authorized to judge in the last resort of the powers exercised under it… .”
The Virginia Resolutions, written by James Madison, hold a similar argument:
“The resolutions, having taken this view of the Federal compact, proceed to infer that, in cases of a deliberate, palpable, and dangerous exercise of other powers, not granted by the said compact, the States, who are parties thereto, have the right, and are in duty bound to interpose to arrest the evil, and for maintaining, within their respective limits, the authorities, rights, and liberties appertaining to them. ...The Constitution of the United States was formed by the sanction of the States, given by each in its sovereign capacity. It adds to the stability and dignity, as well as to the authority of the Constitution, that it rests on this solid foundation. The States, then, being parties to the constitutional compact, and in their sovereign capacity, it follows of necessity that there can be no tribunal above their authority to decide, in the last resort, whether the compact made by them be violated; and, consequently, as parties to it, they must themselves decide, in the last resort, such questions as may be of sufficient magnitude to require their interposition.”
Historians differ over the extent to which either resolution advocated the doctrine of nullification. Historian Lance Banning wrote, “The legislators of Kentucky (or more likely, John Breckinridge, the Kentucky legislator who sponsored the resolution) deleted Jefferson's suggestion that the rightful remedy for federal usurpations was a "nullification" of such acts by each state acting on its own to prevent their operation within its respective borders. Rather than suggesting individual, although concerted, measures of this sort, Kentucky was content to ask its sisters to unite in declarations that the acts were "void and of no force", and in "requesting their repeal" at the succeeding session of the Congress.” The key sentence, and the word "nullification", appeared in supplementary Resolutions passed by Kentucky in 1799.
Madison's judgment is clearer. He was chairman of a committee of the Virginia Legislature which issued a book-length Report on the Resolutions of 1798, published in 1800 after they had been decried by several states. This asserted that the state did not claim legal force. "The declarations in such cases are expressions of opinion, unaccompanied by other effect than what they may produce upon opinion, by exciting reflection. The opinions of the judiciary, on the other hand, are carried into immediate effect by force." If the states collectively agreed in their declarations, there were several methods by which it might prevail, from persuading Congress to repeal the unconstitutional law, to calling a constitutional convention, as two-thirds of the states may. When, at the time of the Nullification Crisis, he was presented with the Kentucky resolutions of 1799, he argued that the resolutions themselves were not Jefferson's words, and that Jefferson meant this not as a constitutional but as a revolutionary right.
Madison biographer Ralph Ketcham wrote:
“Though Madison agreed entirely with the specific condemnation of the Alien and Sedition Acts, with the concept of the limited delegated power of the general government, and even with the proposition that laws contrary to the Constitution were illegal, he drew back from the declaration that each state legislature had the power to act within its borders against the authority of the general government to oppose laws the legislature deemed unconstitutional.”
Historian Sean Wilentz explains the widespread opposition to these resolutions:
“Several states followed Maryland's House of Delegates in rejecting the idea that any state could, by legislative action, even claim that a federal law was unconstitutional, and suggested that any effort to do so was treasonous. A few northern states, including Massachusetts, denied the powers claimed by Kentucky and Virginia and insisted that the Sedition law was perfectly constitutional .... Ten state legislatures with heavy Federalist majorities from around the country censured Kentucky and Virginia for usurping powers that supposedly belonged to the federal judiciary. Northern Republicans supported the resolutions' objections to the alien and sedition acts, but opposed the idea of state review of federal laws. Southern Republicans outside Virginia and Kentucky were eloquently silent about the matter, and no southern legislature heeded the call to battle.”
The election of 1800 was a turning point in national politics as the Federalists were replaced by the Democratic-Republican Party led by Thomas Jefferson and James Madison, the authors of the Kentucky and Virginia Resolutions. But, the four presidential terms spanning the period from 1800 to 1817 “did little to advance the cause of states’ rights and much to weaken it.” Over Jefferson’s opposition, the power of the federal judiciary, led by Federalist Chief Justice John Marshall, increased. Jefferson expanded federal powers with the acquisition of the Louisiana Territory and his use of a national embargo designed to prevent involvement in a European war. Madison in 1809 used national troops to enforce a Supreme Court decision in Pennsylvania, appointed an “extreme nationalist” in Joseph Story to the Supreme Court, signed the bill creating the Second Bank of the United States, and called for a constitutional amendment to promote internal improvements.
Opposition to the War of 1812 was centered in New England. Delegates to a convention in Hartford, Connecticut met in December 1814 to consider a New England response to Madison’s war policy. The debate allowed many radicals to argue the cause of states’ rights and state sovereignty. In the end, moderate voices dominated and the final product was not secession or nullification, but a series of proposed constitutional amendments. Identifying the South’s domination of the government as the cause of much of their problems, the proposed amendments included “the repeal of the three-fifths clause, a requirement that two-thirds of both houses of Congress agree before any new state could be admitted to the Union, limits on the length of embargoes, and the outlawing of the election of a president from the same state to successive terms, clearly aimed at the Virginians.” The war was over before the proposals were submitted to President Madison.
After the conclusion of the War of 1812 Sean Wilentz notes:
“Madison’s speech [his 1815 annual message to Congress] affirmed that the war had reinforced the evolution of mainstream Republicanism, moving it further away from its original and localist assumptions. The war’s immense strain on the treasury led to new calls from nationalist Republicans for a national bank. The difficulties in moving and supplying troops exposed the wretchedness of the country’s transportation links, and the need for extensive new roads and canals. A boom in American manufacturing during the prolonged cessation of trade with Britain created an entirely new class of enterprisers, most of them tied politically to the Republicans, who might not survive without tariff protection. More broadly, the war reinforced feelings of national identity and connection.”
This spirit of nationalism was linked to the tremendous growth and economic prosperity of this post war era. However in 1819 the nation suffered its first financial panic and the 1820s turned out to be a decade of political turmoil that again led to fierce debates over competing views of the exact nature of American federalism. The “extreme democratic and agrarian rhetoric” that had been so effective in 1798 led to renewed attacks on the “numerous market-oriented enterprises, particularly banks, corporations, creditors, and absentee landholders”.
Tariffs (1816-1828)
The Tariff of 1816 had some protective features, and it received support throughout the nation, including that of John C. Calhoun and fellow South Carolinian William Lowndes. The first explicitly protective tariff linked to a specific program of internal improvements was the Tariff of 1824. Sponsored by Henry Clay, this tariff provided a general level of protection at 35% ad valorem (compared to 25% with the 1816 act) and hiked duties on iron, woolens, cotton, hemp, and wool and cotton bagging. The bill barely passed the federal House of Representatives by a vote of 107 to 102. The Middle states and Northwest supported the bill, the South and Southwest opposed it, and New England split its vote with a majority opposing it. In the Senate the bill, with the support of Tennessee Senator Andrew Jackson, passed by four votes, and President James Monroe, the Virginia heir to the Jefferson-Madison control of the White House, signed the bill on March 25, 1824. Daniel Webster of Massachusetts led the New England opposition to this tariff.
Protest against the prospect and the constitutionality of higher tariffs began in 1826 and 1827 with William Branch Giles, who had the Virginia legislature pass resolutions denying the power of Congress to pass protective tariffs, citing the Virginia Resolutions of 1798 and James Madison's 1800 defense of them. Madison denied both the appeal to nullification and the unconstitutionality; he had always held that the power to regulate commerce included protection. Jefferson had, at the end of his life, written against protective tariffs.
The Tariff of 1828 was largely the work of Martin Van Buren (although Silas Wright Jr. of New York prepared the main provisions) and was partly a political ploy to elect Andrew Jackson president. Van Buren calculated that the South would vote for Jackson regardless of the issues so he ignored their interests in drafting the bill. New England, he thought, was just as likely to support the incumbent John Quincy Adams, so the bill levied heavy taxes on raw materials consumed by New England such as hemp, flax, molasses, iron and sail duck. With an additional tariff on iron to satisfy Pennsylvania interests, Van Buren expected the tariff to help deliver Pennsylvania, New York, Missouri, Ohio, and Kentucky to Jackson. Over opposition from the South and some from New England, the tariff was passed with the full support of many Jackson supporters in Congress and signed by President Adams in early 1828.
As expected, Jackson and his running mate John Calhoun carried the entire South with overwhelming numbers in all the states but Louisiana where Adams drew 47% of the vote in a losing effort. However many Southerners became dissatisfied as Jackson, in his first two annual messages to Congress, failed to launch a strong attack on the tariff. Historian William J. Cooper Jr. writes:
“The most doctrinaire ideologues of the Old Republican group [supporters of the Jefferson and Madison position in the late 1790s] first found Jackson wanting. These purists identified the tariff of 1828, the hated Tariff of Abominations, as the most heinous manifestation of the nationalist policy they abhorred. That protective tariff violated their constitutional theory, for, as they interpreted the document, it gave no permission for a protective tariff. Moreover, they saw protection as benefiting the North and hurting the South.”
South Carolina Background (1819-1828)
South Carolina had been adversely affected by the national economic decline of the 1820s. During this decade, the population decreased by 56,000 whites and 30,000 slaves, out of a total free and slave population of 580,000. The whites left for better places; they took slaves with them or sold them to traders moving slaves to the Deep South for sale.
Historian Richard E. Ellis describes the situation:
“Throughout the colonial and early national periods, South Carolina had sustained substantial economic growth and prosperity. This had created an extremely wealthy and extravagant low country aristocracy whose fortunes were based first on the cultivation of rice and indigo, and then on cotton. Then the state was devastated by the Panic of 1819. The depression that followed was more severe than in almost any other state of the Union. Moreover, competition from the newer cotton producing areas along the Gulf Coast, blessed with fertile lands that produced a higher crop-yield per acre, made recovery painfully slow. To make matters worse, in large areas of South Carolina slaves vastly outnumbered whites, and there existed both considerable fear of slave rebellion and a growing sensitivity to even the smallest criticism of “the peculiar institution.””
State leaders, led by states’ rights advocates like William Smith and Thomas Cooper, blamed most of the state’s economic problems on the Tariff of 1816 and national internal improvement projects. Soil erosion and competition from the New Southwest were also very significant reasons for the state’s declining fortunes. George McDuffie was a particularly effective speaker for the anti-tariff forces, and he popularized the Forty Bale theory. McDuffie argued that the 40% tariff on cotton finished goods meant that “the manufacturer actually invades your barns, and plunders you of 40 out of every 100 bales that you produce.” Mathematically incorrect, this argument still struck a nerve with his constituency. Nationalists such as Calhoun were forced by the increasing power of such leaders to retreat from their previous positions and adopt, in the words of Ellis, "an even more extreme version of the states' rights doctrine" in order to maintain political significance within South Carolina.
South Carolina’s first effort at nullification occurred in 1822. Its planters believed that free black sailors had assisted Denmark Vesey in his planned slave rebellion. South Carolina passed a Negro Seamen Act, which required that all black foreign seamen be imprisoned while their ships were docked in Charleston. Britain strongly objected, especially as it was recruiting more Africans as sailors. What was worse, if the captains did not pay the fees to cover the cost of jailing, South Carolina would sell the sailors into slavery. Other southern states also passed laws against free black sailors.
Supreme Court Justice William Johnson, in his capacity as a circuit judge, declared the South Carolina law unconstitutional since it violated United States treaties with Great Britain. The South Carolina Senate announced that the judge’s ruling was invalid and that the Act would be enforced. The federal government did not attempt to carry out Johnson's decision.
Route to nullification in South Carolina (1828-1832)
Historian Avery Craven argues that, for the most part, the debate from 1828-1832 was a local South Carolina affair. The state's leaders were not united and the sides were roughly equal. The western part of the state and a faction in Charleston, led by Joel Poinsett, would remain loyal to Jackson almost to the end. Only in small part was the conflict between “a National North against a States’-right South”.
After the final vote on the Tariff of 1828, the South Carolina congressional delegation held two caucuses, the second at the home of Senator Robert Y. Hayne. They were rebuffed in their efforts to coordinate a united Southern response and focused on how their state representatives would react. While many agreed with George McDuffie that tariff policy could lead to secession at some future date, they all agreed that as much as possible, the issue should be kept out of the upcoming presidential election. Calhoun, while not at this meeting, served as a moderating influence. He felt that the first step in reducing the tariff was to defeat Adams and his supporters in the upcoming election. William C. Preston, on behalf of the South Carolina legislature, asked Calhoun to prepare a report on the tariff situation. Calhoun readily accepted this challenge and in a few weeks time had a 35,000-word draft of what would become his “Exposition and Protest”.
Calhoun’s “Exposition” was completed late in 1828. He argued that the tariff of 1828 was unconstitutional because it favored manufacturing over commerce and agriculture. He thought that the tariff power could only be used to generate revenue, not to provide protection from foreign competition for American industries. He believed that the people of a state or several states, acting in a democratically elected convention, had the retained power to veto any act of the federal government which violated the Constitution. This veto, the core of the doctrine of nullification, was explained by Calhoun in the Exposition:
“If it be conceded, as it must be by every one who is the least conversant with our institutions, that the sovereign powers delegated are divided between the General and State Governments, and that the latter hold their portion by the same tenure as the former, it would seem impossible to deny to the States the right of deciding on the infractions of their powers, and the proper remedy to be applied for their correction. The right of judging, in such cases, is an essential attribute of sovereignty, of which the States cannot be divested without losing their sovereignty itself, and being reduced to a subordinate corporate condition. In fact, to divide power, and to give to one of the parties the exclusive right of judging of the portion allotted to each, is, in reality, not to divide it at all; and to reserve such exclusive right to the General Government (it matters not by what department to be exercised), is to convert it, in fact, into a great consolidated government, with unlimited powers, and to divest the States, in reality, of all their rights. It is impossible to understand the force of terms, and to deny so plain a conclusion.”
The report also detailed the specific southern grievances over the tariff that led to the current dissatisfaction. Fearful that “hotheads” such as McDuffie might force the legislature into taking some drastic action against the federal government, historian John Niven describes Calhoun’s political purpose in the document:
“All through that hot and humid summer, emotions among the vociferous planter population had been worked up to a near-frenzy of excitement. The whole tenor of the argument built up in the “Exposition” was aimed to present the case in a cool, considered manner that would dampen any drastic moves yet would set in motion the machinery for repeal of the tariff act. It would also warn other sections of the Union against any future legislation that an increasingly self-conscious South might consider punitive, especially on the subject of slavery.”
The report was submitted to the state legislature which had 5,000 copies printed and distributed. Calhoun, who still had designs on succeeding Jackson as president, was not identified as the author but word on this soon leaked out. The legislature took no action on the report at that time.
In the summer of 1828 Robert Barnwell Rhett, soon to be considered the most radical of the South Carolinians, entered the fray over the tariff. As a state representative, Rhett called for the governor to convene a special session of the legislature. An outstanding orator, Rhett appealed to his constituents to resist the majority in Congress. Rhett addressed the danger of doing nothing:
“But if you are doubtful of yourselves – if you are not prepared to follow up your principles wherever they may lead, to their very last consequence – if you love life better than honor, – prefer ease to perilous liberty and glory; awake not! Stir not! – Impotent resistance will add vengeance to your ruin. Live in smiling peace with your insatiable Oppressors, and die with the noble consolation that your submissive patience will survive triumphant your beggary and despair.”
Rhett’s rhetoric about revolution and war was too radical in the summer of 1828 but, with the election of Jackson assured, James Hamilton Jr. on October 28 in the Colleton County Courthouse in Walterborough “launched the formal nullification campaign.” Renouncing his former nationalism, Hamilton warned the people that, “Your task-master must soon become a tyrant, from the very abuses and corruption of the system, without the bowels of compassion, or a jot of human sympathy.” He called for implementation of Mr. Jefferson’s “rightful remedy” of nullification. Hamilton sent a copy of the speech directly to President-elect Jackson. But, despite a statewide campaign by Hamilton and McDuffie, a proposal to call a nullification convention in 1829 was defeated by the South Carolina legislature meeting at the end of 1828. State leaders such as Calhoun, Hayne, Smith, and William Drayton were all able to remain publicly non-committal or opposed to nullification for the next couple of years.
The division in the state between radicals and conservatives continued throughout 1829 and 1830. After the failure of a state project to arrange financing of a railroad within the state to promote internal trade, the state petitioned Congress to invest $250,000 in the company trying to build the railroad. After Congress tabled the measure, the debate in South Carolina resumed between those who wanted state investment and those who wanted to work to get Congress' support. The debate demonstrated that a significant minority of the state did have an interest in Clay’s American System. The effect of the Webster-Hayne debate was to energize the radicals, and some moderates started to move in their direction.
The state election campaign of 1830 focused on the tariff issue and the need for a state convention. On the defensive, radicals underplayed the intent of the convention as pro-nullification. When voters were presented with races where an unpledged convention was the issue, the radicals generally won. When conservatives effectively characterized the race as being about nullification, the radicals lost. The October election was narrowly carried by the radicals, although the blurring of the issues left them without any specific mandate. In South Carolina, the governor was selected by the legislature, which selected James Hamilton, the leader of the radical movement, as governor and fellow radical Henry L. Pinckney as speaker of the South Carolina House. For the open Senate seat, the legislature chose the more radical Stephen Miller over William Smith.
With radicals in leading positions, in 1831, they began to capture momentum. State politics became sharply divided along Nullifier and Unionist lines. Still, the margin in the legislature fell short of the two-thirds majority needed for a convention. Many of the radicals felt that convincing Calhoun of the futility of his plans for the presidency would lead him into their ranks. Calhoun meanwhile had concluded that Martin Van Buren was clearly establishing himself as Jackson’s heir apparent. At Hamilton’s prompting, George McDuffie made a three-hour speech in Charleston demanding nullification of the tariff at any cost. In the state, the success of McDuffie’s speech seemed to open up the possibilities of both military confrontation with the federal government and civil war within the state. With silence no longer an acceptable alternative, Calhoun looked for the opportunity to take control of the anti-tariff faction in the state; by June he was preparing what would be known as his Fort Hill Address.
Published on July 26, 1831, the address repeated and expanded the positions Calhoun had made in the “Exposition”. While the logic of much of the speech was consistent with the states’ rights position of most Jacksonians, and even Daniel Webster remarked that it “was the ablest and most plausible, and therefore the most dangerous vindication of that particular form of Revolution”, the speech still placed Calhoun clearly in the nullifier camp. Within South Carolina, his gestures at moderation in the speech were drowned out as planters received word of the Nat Turner insurrection in Virginia. Calhoun was not alone in finding a connection between the abolition movement and the sectional aspects of the tariff issue. It confirmed for Calhoun what he had written in a September 11, 1830 letter:
“I consider the tariff act as the occasion, rather than the real cause of the present unhappy state of things. The truth can no longer be disguised, that the peculiar domestick [sic] institution of the Southern States and the consequent direction which that and her soil have given to her industry, has placed them in regard to taxation and appropriations in opposite relation to the majority of the Union, against the danger of which, if there be no protective power in the reserved rights of the states they must in the end be forced to rebel, or, submit to have their paramount interests sacrificed, their domestic institutions subordinated by Colonization and other schemes, and themselves and children reduced to wretchedness.”
From this point, the nullifiers accelerated their organization and rhetoric. In July 1831 the States Rights and Free Trade Association was formed in Charleston and expanded throughout the state. Unlike state political organizations in the past, which were led by the South Carolina planter aristocracy, this group appealed to all segments of the population, including non-slaveholder farmers, small slaveholders, and the Charleston non-agricultural class. Governor Hamilton was instrumental in seeing that the association, which was both a political and a social organization, expanded throughout the state. In the winter of 1831 and spring of 1832, the governor held conventions and rallies throughout the state to mobilize the nullification movement. The conservatives were unable to match the radicals in either organization or leadership.
The state elections of 1832 were “charged with tension and bespattered with violence,” and “polite debates often degenerated into frontier brawls.” Unlike the previous year’s election, the choice was clear between nullifiers and unionists. The nullifiers won and on October 20, 1832, Governor Hamilton called the legislature into a special session to consider a convention. The legislative vote was 96-25 in the House and 31-13 in the Senate.
In November 1832 the Nullification Convention met. The convention declared that the tariffs of 1828 and 1832 were unconstitutional and unenforceable within the state of South Carolina after February 1, 1833. They said that attempts to use force to collect the taxes would lead to the state’s secession. Robert Hayne, who followed Hamilton as governor in 1833, established a 2,000-man group of mounted minutemen and 25,000 infantry who would march to Charleston in the event of a military conflict. These troops were to be armed with $100,000 in arms purchased in the North.
The enabling legislation passed by the legislature was carefully constructed to avoid clashes if at all possible and to create an aura of legality in the process. To avoid conflicts with Unionists, it allowed importers to pay the tariff if they so desired. Other merchants could pay the tariff by obtaining a paper tariff bond from the customs officer. They would then refuse to pay the bond when due, and if the customs official seized the goods, the merchant would file for a writ of replevin to recover the goods in state court. Customs officials who refused to return the goods (by placing them under the protection of federal troops) would be civilly liable for twice the value of the goods. To insure that state officials and judges supported the law, a "test oath" would be required for all new state officials, binding them to support the ordinance of nullification.
Governor Hayne in his inaugural address announced South Carolina's position:
“If the sacred soil of Carolina should be polluted by the footsteps of an invader, or be stained with the blood of her citizens, shed in defense, I trust in Almighty God that no son of hers … who has been nourished at her bosom … will be found raising a parricidal arm against our common mother. And even should she stand ALONE in this great struggle for constitutional liberty … that there will not be found, in the wider limits of the state, one recreant son who will not fly to the rescue, and be ready to lay down his life in her defense.”
Washington, D.C. (1828-1832)
When President Jackson took office in March 1829 he was well aware of the turmoil created by the “Tariff of Abominations”. While he may have abandoned some of his earlier beliefs that had allowed him to vote for the Tariff of 1824, he still felt protectionism was justified for products essential to military preparedness and did not believe that the current tariff should be reduced until the national debt was fully paid off. He addressed the issue in his inaugural address and his first three messages to Congress, but offered no specific relief. In December 1831, with the proponents of nullification in South Carolina gaining momentum, Jackson was recommending “the exercise of that spirit of concession and conciliation which has distinguished the friends of our Union in all great emergencies.” However on the constitutional issue of nullification, despite his strong beliefs in states’ rights, Jackson did not waver.
Calhoun’s “Exposition and Protest” did start a national debate over the doctrine of nullification. The leading proponents of the nationalistic view included Daniel Webster, Supreme Court Justice Joseph Story, Judge William Alexander Duer, John Quincy Adams, Nathaniel Chipman, and Nathan Dane. These people rejected the compact theory advanced by Calhoun, claiming that the Constitution was the product of the people, not the states. According to the nationalist position, the Supreme Court had the final say on the constitutionality of legislation, the national union was perpetual and had supreme authority over individual states. The nullifiers, on the other hand, asserted that the central government was not to be the ultimate arbiter of its own power, and that the states, as the contracting entities, could judge for themselves what was or was not constitutional. While Calhoun’s “Exposition” claimed that nullification was based on the reasoning behind the Kentucky and Virginia Resolutions, an aging James Madison in an August 28, 1830 letter to Edward Everett, intended for publication, disagreed. Madison wrote, denying that any individual state could alter the compact:
“Can more be necessary to demonstrate the inadmissibility of such a doctrine than that it puts it in the power of the smallest fraction over 1/4 of the U. S. — that is, of 7 States out of 24 — to give the law and even the Constn. to 17 States, each of the 17 having as parties to the Constn. an equal right with each of the 7 to expound it & to insist on the exposition. That the 7 might, in particular instances be right and the 17 wrong, is more than possible. But to establish a positive & permanent rule giving such a power to such a minority over such a majority, would overturn the first principle of free Govt. and in practice necessarily overturn the Govt. itself.”
Part of the South’s strategy to force repeal of the tariff was to arrange an alliance with the West. Under the plan, the South would support the West’s demand for free lands in the public domain if the West would support repeal of the tariff. With this purpose Robert Hayne took the floor of the Senate in early 1830, thus beginning “the most celebrated debate, in the Senate’s history.” Daniel Webster’s response shifted the debate, subsequently styled the Webster-Hayne debates, from the specific issue of western lands to a general debate on the very nature of the United States. Webster's position differed from Madison's: Webster asserted that the people of the United States acted as one aggregate body, while Madison held that the people of the several states had acted collectively. John Rowan spoke against Webster on that issue, and Madison wrote, congratulating Webster but explaining his own position. The debate presented the fullest articulation of the differences over nullification, and 40,000 copies of Webster’s response, which concluded with “liberty and Union, now and forever, one and inseparable”, were distributed nationwide.
Many people expected Jackson, given his states’ rights views, to side with Hayne. However, once the debate shifted to secession and nullification, Jackson sided with Webster. On April 13, 1830, at the traditional Democratic Party celebration honoring Thomas Jefferson’s birthday, Jackson chose to make his position clear. In a battle of toasts, Hayne proposed, “The Union of the States, and the Sovereignty of the States.” Jackson’s response, when his turn came, was, “Our Federal Union: It must be preserved.” To those attending, the effect was dramatic. Calhoun would respond with his own toast, in a play on Webster’s closing remarks in the earlier debate, “The Union. Next to our liberty, the most dear.” Finally Martin Van Buren would offer, “Mutual forbearance and reciprocal concession. Through their agency the Union was established. The patriotic spirit from which they emanated will forever sustain it.”
Van Buren wrote in his autobiography of Jackson’s toast, “The veil was rent – the incantations of the night were exposed to the light of day.” Thomas Hart Benton, in his memoirs, stated that the toast “electrified the country.” Jackson would have the final words a few days later when a visitor from South Carolina asked if Jackson had any message he wanted relayed to his friends back in the state. Jackson’s reply was:
“Yes I have; please give my compliments to my friends in your State and say to them, that if a single drop of blood shall be shed there in opposition to the laws of the United States, I will hang the first man I can lay my hand on engaged in such treasonable conduct, upon the first tree I can reach.”
Issues other than the tariff were still being decided. In May 1830 Jackson vetoed an important (especially to Kentucky and Henry Clay) internal improvements program in the Maysville Road Bill and then followed this with additional vetoes of other such projects shortly before Congress adjourned at the end of May. Clay would use these vetoes to launch his presidential campaign. In 1831 the re-chartering of the Bank of the United States, with Clay and Jackson on opposite sides, reopened a long-simmering problem. This issue was featured at the December 1831 National Republican convention in Baltimore, which nominated Henry Clay for president, and the proposal to re-charter was formally introduced into Congress on January 6, 1832. The Calhoun-Jackson split moved to center stage when Calhoun, as vice-president presiding over the Senate, cast the tie-breaking vote to deny Martin Van Buren the post of minister to England. Van Buren was subsequently selected as Jackson’s running mate at the 1832 Democratic National Convention held in May.
In February 1832 Henry Clay, back in the Senate after an absence of two decades, delivered a three-day speech calling for a new tariff schedule and an expansion of his American System. In an effort to reach out to John Calhoun and other southerners, Clay’s proposal provided for a ten million dollar revenue reduction based on the amount of budget surplus he anticipated for the coming year. Significant protection was still part of the plan, as the reduction primarily came on those imports not in competition with domestic producers. Jackson proposed an alternative that reduced overall tariffs to 28%. John Quincy Adams, now in the House of Representatives, used his Committee on Manufactures to produce a compromise bill that, in its final form, reduced revenues by five million dollars, lowered duties on non-competitive products, and retained high tariffs on woolens, iron, and cotton products. In the course of the political maneuvering, George McDuffie’s Ways and Means Committee, the normal originator of such bills, prepared a bill with drastic reductions across the board. McDuffie’s bill went nowhere. Jackson signed the Tariff of 1832 on July 14, 1832, a few days after he vetoed the Bank of the United States re-charter bill. Congress adjourned after it failed to override Jackson’s veto.
With Congress in adjournment, Jackson anxiously watched events in South Carolina. The nullifiers found no significant compromise in the Tariff of 1832 and acted accordingly (see the above section). Jackson heard rumors of efforts to subvert members of the army and navy in Charleston and he ordered the secretaries of the army and navy to begin rotating troops and officers based on their loyalty. He ordered General Winfield Scott to prepare for military operations and ordered a naval squadron in Norfolk to prepare to go to Charleston. Jackson kept lines of communication open with unionists like Joel Poinsett, William Drayton, and James L. Petigru and sent George Breathitt, brother of the Kentucky governor, to independently obtain political and military intelligence. After the Unionists’ defeat at the polls in October, Petigru advised Jackson that he should “Be prepared to hear very shortly of a State Convention and an act of Nullification.” On October 19, 1832 Jackson wrote to his Secretary of War, “The attempt will be made to surprise the Forts and garrisons by the militia, and must be guarded against with vestal vigilance and any attempt by force repelled with prompt and exemplary punishment.” By mid-November Jackson’s reelection was assured.
On December 3, 1832 Jackson sent his fourth annual message to Congress. The message “was stridently states’ rights and agrarian in its tone and thrust” and he disavowed protection as anything other than a temporary expedient. His intent regarding nullification, as communicated to Van Buren, was “to pass it barely in review, as a mere buble [sic], view the existing laws as competent to check and put it down.” He hoped to create a “moral force” that would transcend political parties and sections. The paragraph in the message that addressed nullification was:
“It is my painful duty to state that in one quarter of the United States opposition to the revenue laws has arisen to a height which threatens to thwart their execution, if not to endanger the integrity of the Union. What ever obstructions may be thrown in the way of the judicial authorities of the General Government, it is hoped they will be able peaceably to overcome them by the prudence of their own officers and the patriotism of the people. But should this reasonable reliance on the moderation and good sense of all portions of our fellow citizens be disappointed, it is believed that the laws themselves are fully adequate to the suppression of such attempts as may be immediately made. Should the exigency arise rendering the execution of the existing laws impracticable from any cause what ever, prompt notice of it will be given to Congress, with a suggestion of such views and measures as may be deemed necessary to meet it.”
On December 10 Jackson issued the Proclamation to the People of South Carolina, in which he characterized the positions of the nullifiers as "impractical absurdity" and "a metaphysical subtlety, in pursuit of an impractical theory." He provided this concise statement of his belief:
“I consider, then, the power to annul a law of the United States, assumed by one State, incompatible with the existence of the Union, contradicted expressly by the letter of the Constitution, unauthorized by its spirit, inconsistent with every principle on which it was founded, and destructive of the great object for which it was formed.”
The language used by Jackson, combined with the reports coming out of South Carolina, raised the spectre of military confrontation for many on both sides of the issue. A group of Democrats, led by Van Buren and Thomas Hart Benton among others, saw the only solution to the crisis in a substantial reduction of the tariff.
Negotiation and Confrontation (1833)
In apparent contradiction of his previous claim that the tariff could be enforced with existing laws, on January 16 Jackson sent his Force Bill Message to Congress. Custom houses in Beaufort and Georgetown would be closed and replaced by ships located at each port. In Charleston the custom house would be moved to either Castle Pinckney or Fort Moultrie in Charleston harbor. Direct payment rather than bonds would be required, federal jails would be established for violators whom the state refused to arrest, and all cases arising under the state’s nullification act could be removed to the United States Circuit Court. In the most controversial part, the militia acts of 1795 and 1807 would be revised to permit the enforcement of the customs laws by both the militia and the regular United States military. Attempts were made in South Carolina to shift the debate away from nullification by focusing instead on the proposed enforcement.
The Force Bill went to the Senate Judiciary Committee, chaired by Pennsylvania protectionist William Wilkins and supported by members Daniel Webster and Theodore Frelinghuysen of New Jersey; as reported, the bill gave Jackson everything he asked for. On January 28 the Senate defeated, by a vote of 30 to 15, a motion to postpone debate on the bill. All but two of the votes to delay were from the lower South, and only three from this section voted against the motion. This did not signal any increased support for nullification but did signify doubts about enforcement. In order to draw more votes, proposals were made to limit the duration of the coercive powers and restrict the use of force to suppressing, rather than preventing, civil disorder. In the House the Judiciary Committee, in a 4-3 vote, rejected Jackson’s request to use force. By the time Calhoun made a major speech on February 15 strongly opposing it, the Force Bill was temporarily stalled.
On the tariff issue, the drafting of a compromise tariff was assigned in December to the House Ways and Means Committee, now headed by Gulian C. Verplanck. Debate on the committee’s product on the House floor began in January 1833. The Verplanck tariff proposed reductions back to the 1816 levels over the course of the next two years while maintaining the basic principle of protectionism. The anti-Jackson protectionists saw this as an economic disaster that did not allow the Tariff of 1832 to even be tested and "an undignified truckling to the menaces and blustering of South Carolina." Northern Democrats did not oppose it in principle but still demanded protection for the varying interests of their own constituents. Those sympathetic to the nullifiers wanted a specific abandonment of the principle of protectionism and were willing to offer a longer transition period as a bargaining point. It was clear that the Verplanck tariff was not going to be implemented.
In South Carolina, efforts were being made to avoid an unnecessary confrontation. Governor Hayne ordered the 25,000 troops he had raised to train at home rather than gather in Charleston. At a mass meeting in Charleston on January 21, it was decided to postpone the February 1 deadline for implementing nullification while Congress worked on a compromise tariff. At the same time a commissioner from Virginia, Benjamin Watkins Leigh, arrived in Charleston bearing resolutions that criticized both Jackson and the nullifiers and offered his state as a mediator.
Henry Clay had not taken his defeat in the presidential election well and was unsure of what position he could take in the tariff negotiations. His long-term concern was that Jackson was ultimately determined to kill protectionism along with the American System. In February, after consulting with manufacturers and sugar interests in Louisiana who favored protection for the sugar industry, Clay started to work on a specific compromise plan. As a starting point, he accepted the nullifiers' offer of a transition period but extended it from seven and a half years to nine years with a final target of a 20% ad valorem rate. After first securing the support of his protectionist base, Clay, through an intermediary, broached the subject with Calhoun. Calhoun was receptive, and after a private meeting with Clay at Clay’s boardinghouse, negotiations proceeded.
Clay introduced the negotiated tariff bill on February 12, and it was immediately referred to a select committee consisting of Clay as chairman, Felix Grundy of Tennessee, George M. Dallas of Pennsylvania, William Cabell Rives of Virginia, Webster, John M. Clayton of Delaware, and Calhoun. On February 21 the committee reported a bill to the floor of the Senate which was largely the original bill proposed by Clay. The Tariff of 1832 would continue except that all rates above 20% would be reduced by one tenth every two years, with the final reductions back to 20% coming in 1842. Protectionism as a principle was not abandoned, and provisions were made for raising the tariff if national interests demanded it.
Although not specifically linked by any negotiated agreement, it became clear that the Force Bill and Compromise Tariff of 1833 were inextricably linked. In his February 25 speech ending the debate on the tariff, Clay captured the spirit of the voices for compromise by condemning Jackson's Proclamation to South Carolina as inflammatory, admitting the same problem with the Force Bill but indicating its necessity, and praising the Compromise Tariff as the final measure to restore balance, promote the rule of law, and avoid the "sacked cities," "desolated fields," and "smoking ruins" that he said would be the product of the failure to reach a final accord. The House passed the Compromise Tariff by 119-85 and the Force Bill by 149-48. In the Senate the tariff passed 29-16 and the Force Bill by 32-1, with many of its opponents walking out rather than voting for it.
Calhoun rushed to Charleston with the news of the final compromises. The Nullification Convention met again on March 11. It repealed the November Nullification Ordinance and also, "in a purely symbolic gesture", nullified the Force Bill. While the nullifiers claimed victory on the tariff issue, even though they had made concessions, the verdict was very different on nullification. The majority had, in the end, ruled, and this boded ill for the South and its minority position on slavery. Rhett summed this up at the convention on March 13. Warning that, "A people, owning slaves, are mad, or worse than mad, who do not hold their destinies in their own hands," he continued:
“Every stride of this Government, over your rights, brings it nearer and nearer to your peculiar policy. …The whole world are in arms against your institutions … Let Gentlemen not be deceived. It is not the Tariff – not Internal Improvement – nor yet the Force bill, which constitutes the great evil against which we are contending. … These are but the forms in which the despotic nature of the government is evinced – but it is the despotism which constitutes the evil: and until this Government is made a limited Government … there is no liberty – no security for the South.”
People reflected on the meaning of the nullification crisis and its outcome for the country. On May 1, 1833 Jackson wrote, "the tariff was only a pretext, and disunion and southern confederacy the real object. The next pretext will be the negro, or slavery question."
The final resolution of the crisis and Jackson’s leadership had appeal throughout the North and South. Robert Remini, the historian and Jackson biographer, described the opposition that nullification drew from traditionally states’ rights Southern states:
The Alabama legislature, for example, pronounced the doctrine “unsound in theory and dangerous in practice.” Georgia said it was “mischievous,” “rash and revolutionary.” Mississippi lawmakers chided the South Carolinians for acting with “reckless precipitancy.”
Forrest McDonald, describing the split over nullification among proponents of states’ rights, wrote, “The doctrine of states’ rights, as embraced by most Americans, was not concerned exclusively, or even primarily with state resistance to federal authority.” But, by the end of the nullification crisis, many southerners started to question whether the Jacksonian Democrats still represented Southern interests. The historian William J. Cooper notes that, “Numerous southerners had begun to perceive it [the Jacksonian Democratic Party] as a spear aimed at the South rather than a shield defending the South.”
In the political vacuum created by this alienation, the southern wing of the Whig Party was formed. The party was a coalition of interests united by the common thread of opposition to Andrew Jackson and, more specifically, his “definition of federal and executive power.” The party included former National Republicans with an “urban, commercial, and nationalist outlook” as well as former nullifiers. Emphasizing that “they were more southern than the Democrats,” the party grew within the South by going “after the abolition issue with unabashed vigor and glee.” With both parties arguing who could best defend southern institutions, the nuances of the differences between free soil and abolitionism, which became an issue in the late 1840s with the Mexican War and territorial expansion, never became part of the political dialogue. This failure increased the volatility of the slavery issues.
Richard Ellis argues that the end of the crisis signified the beginning of a new era. Within the states’ rights movement, the traditional desire for simply “a weak, inactive, and frugal government” was challenged. Ellis states that “in the years leading up to the Civil War the nullifiers and their pro-slavery allies used the doctrine of states’ rights and state sovereignty in such a way as to try to expand the powers of the federal government so that it could more effectively protect the peculiar institution.” By the 1850s, states’ rights had become a call for state equality under the Constitution.
Madison reacted to this incipient tendency by writing two paragraphs of "Advice to My Country," found among his papers. It said that the Union "should be cherished and perpetuated. Let the open enemy to it be regarded as a Pandora with her box opened; and the disguised one, as the Serpent creeping with his deadly wiles into paradise." Richard Rush published this "Advice" in 1850, by which time Southern spirit was so high that it was denounced as a forgery.
The first test for the South over the slavery issue began during the final congressional session of 1835. In what became known as the Gag Rule Debates, abolitionists flooded Congress with petitions to end slavery and the slave trade in Washington, D.C. The debate was reopened each session as Southerners, led by South Carolinians Henry L. Pinckney and James Henry Hammond, prevented the petitions from even being officially received by Congress. Led by John Quincy Adams, the slavery debate remained on the national stage until late 1844 when Congress lifted all restrictions on processing the petitions.
Describing the legacy of the crisis, Sean Wilentz writes:
“The battle between Jacksonian democratic nationalists, northern and southern, and nullifier sectionalists would resound through the politics of slavery and antislavery for decades to come. Jackson’s victory, ironically, would help accelerate the emergence of southern pro-slavery as a coherent and articulate political force, which would help solidify northern antislavery opinion, inside as well as outside Jackson’s party. Those developments would accelerate the emergence of two fundamentally incompatible democracies, one in the slave South, the other in the free North.”
For South Carolina, the legacy of the crisis involved both the divisions within the state during the crisis and the apparent isolation of the state as the crisis was resolved. By 1860, when South Carolina became the first state to secede, the state was more internally united than any other southern state. Historian Charles Edward Cauthen writes:
“Probably to a greater extent than in any other Southern state South Carolina had been prepared by her leaders over a period of thirty years for the issues of 1860. Indoctrination in the principles of state sovereignty, education in the necessity of maintaining Southern institutions, warnings of the dangers of control of the federal government by a section hostile to its interests – in a word, the education of the masses in the principles and necessity of secession under certain circumstances – had been carried on with a skill and success hardly inferior to the masterly propaganda of the abolitionists themselves. It was this education, this propaganda, by South Carolina leaders which made secession the almost spontaneous movement that it was.”
See also
- Origins of the American Civil War
- American System (economic plan)
- American School (economics)
- Alexander Hamilton
- Friedrich List
- Nullification Convention
- Remini, Andrew Jackson, v2 pp. 136-137. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Freehling, The Road to Disunion, pg. 255. Craven pg. 60. Ellis pg. 7
- Craven pg.65. Niven pg. 135-137. Freehling, Prelude to Civil War pg 143
- Niven p. 192. Calhoun replaced Robert Y. Hayne as senator so that Hayne could follow James Hamilton as governor. Niven writes, "There is no doubt that these moves were part of a well-thought-out plan whereby Hayne would restrain the hotheads in the state legislature and Calhoun would defend his brainchild, nullification, in Washington against administration stalwarts and the likes of Daniel Webster, the new apostle of northern nationalism."
- Howe p. 410. In the Senate only Virginia and South Carolina voted against the 1832 tariff. Howe writes, "Most southerners saw the measure as a significant amelioration of their grievance and were now content to back Jackson for reelection rather than pursue the more drastic remedy such as the one South Carolina was touting."
- Freehling, Prelude to Civil War pg. 1-3. Freehling writes, “In Charleston Governor Robert Y. Hayne ... tried to form an army which could hope to challenge the forces of ‘Old Hickory.’ Hayne recruited a brigade of mounted minutemen, 2,000 strong, which could swoop down on Charleston the moment fighting broke out, and a volunteer army of 25,000 men which could march on foot to save the beleaguered city. In the North Governor Hayne’s agents bought over $100,000 worth of arms; in Charleston Hamilton readied his volunteers for an assault on the federal forts.”
- Wilentz pg. 388
- Woods pg. 78
- Tuttle, California Digest 26 pg. 47
- Ellis pg. 4
- McDonald pg. vii. McDonald wrote, “Of all the problems that beset the United States during the century from the Declaration of Independence to the end of Reconstruction, the most pervasive concerned disagreements about the nature of the Union and the line to be drawn between the authority of the general government and that of the several states. At times the issue bubbled silently and unseen between the surface of public consciousness; at times it exploded: now and again the balance between general and local authority seemed to be settled in one direction or another, only to be upset anew and to move back toward the opposite position, but the contention never went away.”
- Ellis pg. 1-2.
- For full text of the resolutions, see Kentucky Resolutions of 1798 and Kentucky Resolutions of 1799.
- James Madison, Virginia Resolutions of 1798
- Banning pg. 388
- Brant, p. 297, 629
- Brant, pp. 298.
- Brant, p.629
- Ketchum pg. 396
- Wilentz pg. 80.
- Ellis p.5. Madison called for the constitutional amendment because he believed much of the American System was unconstitutional. Historian Richard Buel Jr. notes that in preparing for the worst from the Hartford Convention, the Madison administration made preparation to intervene militarily in case of New England secession. Troops from the Canadian border were moved near Albany so that they could move into either Massachusetts or Connecticut if necessary. New England troops were also returned to their recruitment areas in order to serve as a focus for loyalists. Buel pg.220-221
- McDonald pg. 69-70
- Wilentz pg.166
- Wilentz pg. 181
- Ellis pg. 6. Wilentz pg. 182.
- Freehling, Prelude to Civil War pg. 92-93
- Wilentz pg. 243. Economic historian Frank Taussig notes “The act of 1816, which is generally said to mark the beginning of a distinctly protective policy in this country, belongs rather to the earlier series of acts, beginning with that of 1789, than to the group of acts of 1824, 1828, and 1832. Its highest permanent rate of duty was twenty per cent., an increase over the previous rates which is chiefly accounted for by the heavy interest charge on the debt incurred during the war. But after the crash of 1819, a movement in favor of protection set in, which was backed by a strong popular feeling such as had been absent in the earlier years.” http://teachingamericanhistory.org/library/index.asp?document=1136
- Remini, Henry Clay pg. 232. Freehling, The Road to Disunion, pg. 257.
- McDonald pg. 95
- Brant, p. 622
- Remini, Andrew Jackson, v2 pp. 136-137. McDonald presents a slightly different rationale. He stated that the bill would “adversely affect New England woolen manufacturers, ship builders, and shipowners” and Van Buren calculated that New England and the South would unite to defeat the bill, allowing Jacksonians to have it both ways – in the North they could claim they tried but failed to pass a needed tariff and in the South they could claim that they had thwarted an effort to increase import duties. McDonald pg. 94-95
- Cooper pg. 11-12.
- Freehling, The Road to Disunion, pg. 255. Historian Avery Craven wrote, “Historians have generally ignored the fact that the South Carolina statesmen, in the so-called Nullification controversy, were struggling against a practical situation. They have conjured up a great struggle between nationalism and States’ rights and described these men as theorists reveling in constitutional refinements for the mere sake of logic. Yet here was a clear case of commercial and agricultural depression.” Craven pg. 60
- Ellis pg. 7. Freehling notes that divisions over nullification in the state generally corresponded to the extent that the section suffered economically. The exception was the “Low country rice and luxury cotton planters” who supported nullification despite their ability to survive the economic depression. This section had the highest percentage of slave population. Freehling, Prelude to Civil War, pg. 25.
- Cauthen pg. 1
- Ellis pg. 7. Freehling, Road to Disunion, pg. 256
- Gerald Horne, Negro Comrades of the Crown: African Americans and the British Empire Fight the U.S. Before Emancipation, New York University (NYU) Press, 2012, pp. 97-98
- Freehling, Road to Disunion, p. 254
- Craven pg.65.
- Niven pg. 135-137. Freehling, Prelude to Civil War pg 143.
- South Carolina Exposition and Protest
- Niven pg. 158-162
- Niven pg. 161
- Niven pg. 163-164
- Walther pg. 123. Craven pg. 63-64.
- Freehling, Prelude to Civil War pg. 149
- Freehling, Prelude to Civil War pg. 152-155, 173-175. A two-thirds vote of each house of the legislature was required to convene a state convention.
- Freehling, Prelude to Civil War pg. 177-186
- Freehling, Prelude to Civil War, pg. 205-213
- Freehling, Prelude to Civil War, pg. 213-218
- Peterson pg. 189-192. Niven pg. 174-181. Calhoun wrote of McDuffie’s speech, “I think it every way imprudent and have so written Hamilton … I see clearly it brings matters to a crisis, and that I must meet it promptly and manfully.” Freehling in his works frequently refers to the radicals as “Calhounites” even before 1831. This is because the radicals, rallying around Calhoun’s “Exposition,” were linked ideologically, if not yet practically, with Calhoun.
- Niven pg. 181-184
- Ellis pg. 193. Freehling, Prelude to Civil War, pg. 257.
- Freehling pg. 224-239
- Freehling, Prelude to Civil War pg. 252-260
- Freehling, Prelude to Civil War pg. 1-3.
- Ellis pg. 97-98
- Remini, Andrew Jackson, v. 3 pg. 14
- Ellis pg. 41-43
- Ellis p. 9
- Ellis pg. 9
- Brant, p.627.
- Ellis pg. 10. Ellis wrote, "But the nullifiers' attempt to legitimize their controversial doctrine by claiming it was a logical extension of the principles embodied in the Kentucky and Virginia Resolutions upset him. In a private letter he deliberately wrote for publication, Madison denied many of the assertions of the nullifiers and lashed out in particular at South Carolina's claim that if a state nullified an act of the federal government it could only be overruled by an amendment to the Constitution." Full text of the letter is available at http://www.constitution.org/jm/18300828_everett.htm.
- Brant, pp. 626-7. Webster never asserted the consolidating position again.
- McDonald pg.105-106
- Remini, Andrew Jackson, v.2 pg. 233-235.
- Remini, Andrew Jackson, v.2 pg. 233-237.
- Remini, Andrew Jackson, v.2 pg. 255-256 Peterson pg. 196-197.
- Remini, Andrew Jackson, v.2 pg. 343-348
- Remini, Andrew Jackson, v.2 pg. 347-355
- Remini, Andrew Jackson, v.2 pg. 358-373. Peterson pg. 203-212
- Remini, Andrew Jackson, v.2 pg. 382-389
- Ellis pg. 82
- Remini, Andrew Jackson, v. 3 pg. 9-11. Full text of his message available at http://www.thisnation.com/library/sotu/1832aj.html
- Ellis pg 83-84. Full document available at: http://www.yale.edu/lawweb/avalon/presiden/proclamations/jack01.htm
- Ellis pg. 93-95
- Ellis pg. 160-165. Peterson pg. 222-224. Peterson differs with Ellis in arguing that passage of the Force Bill “was never in doubt.”
- Ellis pg. 99-100. Peterson pg. 217.
- Wilentz pg. 384-385.
- Peterson pg. 217-226
- Peterson pg. 226-228
- Peterson pg. 229-232
- Freehling, Prelude to Civil War, pg. 295-297
- Freehling, Prelude to Civil War, pg. 297. Wilentz pg. 388
- Jon Meacham (2009), American Lion: Andrew Jackson in the White House, New York: Random House, p. 247; Correspondence of Andrew Jackson, Vol. V, p. 72.
- Remini, Andrew Jackson, v3. pg. 42.
- McDonald pg. 110
- Cooper pg. 53-65
- Ellis pg. 198
- Brant p. 646; Rush produced a copy in Mrs. Madison's hand; the original also survives. The contemporary letter to Edward Coles (Brant, p. 639) makes plain that the enemy in question is the nullifier.
- Freehling, Prelude to Civil War pg. 346-356. McDonald (pg 121-122) saw states’ rights in the period from 1833-1847 as almost totally successful in creating a “virtually nonfunctional” federal government. This did not insure political harmony, as “the national political arena became the center of heated controversy concerning the newly raised issue of slavery, a controversy that reached the flash point during the debates about the annexation of the Republic of Texas” pg. 121-122
- Cauthen pg. 32
- Brant, Irving: The Fourth President: A Life of James Madison Bobbs Merrill, 1970.
- Buel, Richard Jr. America on the Brink: How the Political Struggle Over the War of 1812 Almost Destroyed the Young Republic. (2005) ISBN 1-4039-6238-3
- Cauthen, Charles Edward. South Carolina Goes to War. (1950) ISBN 1-57003-560-1
- Cooper, William J. Jr. The South and the Politics of Slavery 1828-1856 (1978) ISBN 0-8071-0385-3
- Craven, Avery. The Coming of the Civil War (1942) ISBN 0-226-11894-0
- Ellis, Richard E. The Union at Risk: Jacksonian Democracy, States' Rights, and the Nullification Crisis (1987)
- Freehling, William W. The Road to Disunion: Secessionists at Bay, 1776-1854 (1991), Vol. 1
- Freehling, William W. Prelude to Civil War: The Nullification Crisis in South Carolina 1816-1836. (1965) ISBN 0-19-507681-8
- Howe, Daniel Walker. What Hath God Wrought: The Transformation of America, 1815-1848. (2007) ISBN 978-0-19-507894-7
- McDonald, Forrest. States’ Rights and the Union: Imperium in Imperio 1776-1876 (2000) ISBN 0-7006-1040-5
- Niven, John. John C. Calhoun and the Price of Union (1988) ISBN 0-8071-1451-0
- Peterson, Merrill D. The Great Triumvirate: Webster, Clay, and Calhoun. (1987) ISBN 0-19-503877-0
- Remini, Robert V. Andrew Jackson and the Course of American Freedom, 1822-1832,v2 (1981) ISBN 0-06-014844-6
- Remini, Robert V. Andrew Jackson and the Course of American Democracy, 1833-1845, v3 (1984) ISBN 0-06-015279-6
- Remini, Robert V. Henry Clay: Statesman for the Union (1991) ISBN 0-393-31088-4
- Tuttle, Charles A. (Court Reporter) California Digest: A Digest of the Reports of the Supreme Court of California, Volume 26 (1906)
- Walther, Eric C. The Fire-Eaters (1992) ISBN 0-8071-1731-5
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln. (2005) ISBN 0-393-05820-4
- Woods, Thomas E. Jr. Nullification (2010) ISBN 978-1-59698-149-2
Further reading
- Barnwell, John. Love of Order: South Carolina's First Secession Crisis (1982)
- Capers, Gerald M. John C. Calhoun, Opportunist: A Reappraisal (1960)
- Coit, Margaret L. John C. Calhoun: American Portrait (1950)
- Houston, David Franklin (1896). A Critical Study of Nullification in South Carolina. Longmans, Green, and Co.
- Latner, Richard B. "The Nullification Crisis and Republican Subversion," Journal of Southern History 43 (1977): 18-38, in JSTOR
- McCurry, Stephanie. Masters of Small Worlds.New York: Oxford UP, 1993.
- Pease, Jane H. and William H. Pease, "The Economics and Politics of Charleston's Nullification Crisis", Journal of Southern History 47 (1981): 335-62, in JSTOR
- Ratcliffe, Donald. "The Nullification Crisis, Southern Discontents, and the American Political Process", American Nineteenth Century History. Vol 1: 2 (2000) pp. 1–30
- Wiltse, Charles. John C. Calhoun, nullifier, 1829-1839 (1949)
- South Carolina Exposition and Protest, by Calhoun, 1828.
- The Fort Hill Address: On the Relations of the States and the Federal Government, by Calhoun, July 1831.
- South Carolina Ordinance of Nullification, November 24, 1832.
- President Jackson's Proclamation to South Carolina, December 10, 1832.
- Primary Documents in American History: Nullification Proclamation (Library of Congress)
- President Jackson's Message to the Senate and House Regarding South Carolina's Nullification Ordinance, January 16, 1833
- Nullification Revisited: An article examining the constitutionality of nullification (from a favorable aspect, and with regard to both recent and historical events). | http://en.wikipedia.org/wiki/Nullification_Crisis | 13 |
105 | A globular cluster is a spherical collection of stars that orbits a galactic core as a satellite. Globular clusters are very tightly bound by gravity, which gives them their spherical shapes and relatively high stellar densities toward their centers. The name of this category of star cluster is derived from the Latin globulus—a small sphere. A globular cluster is sometimes known more simply as a globular.
Globular clusters, which are found in the halo of a galaxy, contain considerably more stars and are much older than the less dense galactic, or open clusters, which are found in the disk. Globular clusters are fairly common; there are about 150 to 158 currently known globular clusters in the Milky Way, with perhaps 10 to 20 more still undiscovered. Large galaxies can have more: Andromeda, for instance, may have as many as 500. Some giant elliptical galaxies, particularly those at the centers of galaxy clusters, such as M87, have as many as 13,000 globular clusters. These globular clusters orbit the galaxy out to large radii, 40 kiloparsecs (approximately 131,000 light-years) or more.
Every galaxy of sufficient mass in the Local Group has an associated group of globular clusters, and almost every large galaxy surveyed has been found to possess a system of globular clusters. The Sagittarius Dwarf and Canis Major Dwarf galaxies appear to be in the process of donating their associated globular clusters (such as Palomar 12) to the Milky Way. This demonstrates how many of this galaxy's globular clusters might have been acquired in the past.
Although it appears that globular clusters contain some of the first stars to be produced in the galaxy, their origins and their role in galactic evolution are still unclear. It does appear clear that globular clusters are significantly different from dwarf elliptical galaxies and were formed as part of the star formation of the parent galaxy rather than as a separate galaxy. However, recent conjectures by astronomers suggest that globular clusters and dwarf spheroidals may not be clearly separate and distinct types of objects.
Observation history
| Cluster name | Discovered by | Year |
| --- | --- | --- |
| ω Cen | Edmond Halley | 1677 |
| M71 | Philippe Loys de Chéseaux | 1745 |
| M4 | Philippe Loys de Chéseaux | 1746 |
The first globular cluster discovered was M22, found in 1665 by Abraham Ihle, a German amateur astronomer. However, given the small aperture of early telescopes, individual stars within a globular cluster were not resolved until Charles Messier observed M4. The first eight globular clusters discovered are shown in the table. Subsequently, Abbé Lacaille would list NGC 104, NGC 4833, M55, M69, and NGC 6397 in his 1751–52 catalogue. The M before a number refers to the catalogue of Charles Messier, while NGC is from the New General Catalogue by John Dreyer.
William Herschel began a survey program in 1782 using larger telescopes and was able to resolve the stars in all 33 of the known globular clusters. In addition he found 37 new clusters. In his second catalog of deep-sky objects, published in 1789, Herschel became the first to use the name globular cluster to describe these objects.
The number of globular clusters discovered continued to increase, reaching 83 in 1915, 93 in 1930 and 97 by 1947. A total of 152 globular clusters have now been discovered in the Milky Way galaxy, out of an estimated total of 180 ± 20. These additional, undiscovered globular clusters are believed to be hidden behind the gas and dust of the Milky Way.
Beginning in 1914, Harlow Shapley began a series of studies of globular clusters, published in about 40 scientific papers. He examined the RR Lyrae variables in the clusters (which he assumed were cepheid variables) and would use their period–luminosity relationship for distance estimates. Later, it was found that RR Lyrae variables are fainter than cepheid variables, which caused Shapley to overestimate the distance to the clusters.
Of the globular clusters within our Milky Way, the majority are found in the vicinity of the galactic core, and the large majority lie on the side of the celestial sky centered on the core. In 1918 this strongly asymmetrical distribution was used by Harlow Shapley to make a determination of the overall dimensions of the galaxy. By assuming a roughly spherical distribution of globular clusters around the galaxy's center, he used the positions of the clusters to estimate the position of the sun relative to the galactic center. While his distance estimate was significantly in error, it did demonstrate that the dimensions of the galaxy were much greater than had been previously thought. His error arose because dust in the Milky Way diminished the amount of light from a globular cluster that reached the Earth, making it appear farther away. Shapley's estimate was, however, within the same order of magnitude as the currently accepted value.
Shapley's measurements also indicated that the Sun was relatively far from the center of the galaxy, contrary to what had previously been inferred from the apparently nearly even distribution of ordinary stars. In reality, ordinary stars lie within the galaxy's disk and are thus often obscured by gas and dust, whereas globular clusters lie outside the disk and can be seen at much further distances.
Shapley was subsequently assisted in his studies of clusters by Henrietta Swope and Helen Battles Sawyer (later Hogg). In 1927–29, Harlow Shapley and Helen Sawyer began categorizing clusters according to the degree of concentration the system has toward the core. The most concentrated clusters were identified as Class I, with successively diminishing concentrations ranging to Class XII. This became known as the Shapley–Sawyer Concentration Class. (It is sometimes given with numbers [Class 1–12] rather than Roman numerals.)
At present, the formation of globular clusters remains a poorly understood phenomenon, and it remains uncertain whether the stars in a globular cluster form in a single generation, or are spawned across multiple generations over a period of several hundred million years. In many globular clusters, most of the stars are at approximately the same stage in stellar evolution, suggesting that they formed at about the same time. However, the star formation history varies from cluster to cluster, with some clusters showing distinct populations of stars. An example of this is the globular clusters in the Large Magellanic Cloud (LMC) that exhibit a bimodal population. During their youth, these LMC clusters may have encountered giant molecular clouds that triggered a second round of star formation. This star-forming period is relatively brief, compared to the age of many globular clusters.
Observations of globular clusters show that these stellar formations arise primarily in regions of efficient star formation, and where the interstellar medium is at a higher density than in normal star-forming regions. Globular cluster formation is prevalent in starburst regions and in interacting galaxies. Research indicates a correlation between the mass of a central supermassive black hole (SMBH) and the extent of the globular cluster systems of elliptical and lenticular galaxies. The mass of the SMBH in such a galaxy is often close to the combined mass of the galaxy's globular clusters.
No known globular clusters display active star formation, which is consistent with the view that globular clusters are typically the oldest objects in the Galaxy, and were among the first collections of stars to form. Very large regions of star formation known as super star clusters, such as Westerlund 1 in the Milky Way, may be the precursors of globular clusters.
Globular clusters are generally composed of hundreds of thousands of old, metal-poor stars. The type of stars found in a globular cluster are similar to those in the bulge of a spiral galaxy but confined to a volume of only a few million cubic parsecs. They are free of gas and dust and it is presumed that all of the gas and dust was long ago turned into stars.
Globular clusters can contain a high density of stars; on average about 0.4 stars per cubic parsec, increasing to 100 or 1000 stars per cubic parsec in the core of the cluster. The typical distance between stars in a globular cluster is about 1 light year, but at its core, the separation is comparable to the size of the Solar System (100 to 1000 times closer than stars near the Solar System).
However, they are not thought to be favorable locations for the survival of planetary systems. Planetary orbits are dynamically unstable within the cores of dense clusters because of the perturbations of passing stars. A planet orbiting at 1 astronomical unit around a star that is within the core of a dense cluster such as 47 Tucanae would only survive on the order of 10^8 years. There is a planetary system orbiting a pulsar (PSR B1620−26) that belongs to the globular cluster M4, but these planets likely formed after the event that created the pulsar.
Some globular clusters, like Omega Centauri in our Milky Way and G1 in M31, are extraordinarily massive, with several million solar masses and multiple stellar populations. Both can be regarded as evidence that supermassive globular clusters are in fact the cores of dwarf galaxies that are consumed by the larger galaxies. About a quarter of the globular cluster population in the Milky Way may have been accreted along with their host dwarf galaxy.
Several globular clusters (like M15) have extremely massive cores which may harbor black holes, although simulations suggest that a less massive black hole or central concentration of neutron stars or massive white dwarfs explain observations equally well.
Metallic content
Globular clusters normally consist of Population II stars, which have a low proportion of elements other than hydrogen and helium when compared to Population I stars such as the Sun. Astronomers refer to these heavier elements as metals and to the proportions of these elements as the metallicity. These elements are produced by stellar nucleosynthesis and then are recycled into the interstellar medium, where they enter the next generation of stars. Hence the proportion of metals can be an indication of the age of a star, with older stars typically having a lower metallicity.
The Dutch astronomer Pieter Oosterhoff noticed that there appear to be two populations of globular clusters, which became known as Oosterhoff groups. The second group has a slightly longer period of RR Lyrae variable stars. Both groups have weak lines of metallic elements. But the lines in the stars of Oosterhoff type I (OoI) clusters are not quite as weak as those in type II (OoII). Hence type I are referred to as "metal-rich" while type II are "metal-poor".
These two populations have been observed in many galaxies, especially massive elliptical galaxies. Both groups are nearly as old as the universe itself and are of similar ages, but differ in their metal abundances. Many scenarios have been suggested to explain these subpopulations, including violent gas-rich galaxy mergers, the accretion of dwarf galaxies, and multiple phases of star formation in a single galaxy. In our Milky Way, the metal-poor clusters are associated with the halo and the metal-rich clusters with the bulge.
In the Milky Way it has been discovered that the large majority of the low metallicity clusters are aligned along a plane in the outer part of the galaxy's halo. This result argues in favor of the view that type II clusters in the galaxy were captured from a satellite galaxy, rather than being the oldest members of the Milky Way's globular cluster system as had been previously thought. The difference between the two cluster types would then be explained by a time delay between when the two galaxies formed their cluster systems.
Exotic components
Globular clusters have a very high star density, and therefore close interactions and near-collisions of stars occur relatively often. Due to these chance encounters, some exotic classes of stars, such as blue stragglers, millisecond pulsars and low-mass X-ray binaries, are much more common in globular clusters. A blue straggler is formed from the merger of two stars, possibly as a result of an encounter with a binary system. The resulting star has a higher temperature than comparable stars in the cluster with the same luminosity, and thus differs from the main sequence stars formed at the beginning of the cluster.
Astronomers have searched for black holes within globular clusters since the 1970s. The resolution requirements for this task, however, are exacting, and it is only with the Hubble Space Telescope that the first confirmed discoveries have been made. In independent programs, a 4,000 solar mass intermediate-mass black hole has been suggested to exist based on HST observations in the globular cluster M15 and a 20,000 solar mass black hole in the Mayall II cluster in the Andromeda Galaxy. Both X-ray and radio emissions from Mayall II appear to be consistent with an intermediate-mass black hole.
These are of particular interest because they are the first black holes discovered that were intermediate in mass between the conventional stellar-mass black hole and the supermassive black holes discovered at the cores of galaxies. The mass of these intermediate mass black holes is proportional to the mass of the clusters, following a pattern previously discovered between supermassive black holes and their surrounding galaxies.
Claims of intermediate mass black holes have been met with some skepticism. The densest objects in globular clusters are expected to migrate to the cluster center due to mass segregation. These will be white dwarfs and neutron stars in an old stellar population like a globular cluster. As pointed out in two papers by Holger Baumgardt and collaborators, the mass-to-light ratio should rise sharply towards the center of the cluster, even without a black hole, in both M15 and Mayall II.
Color-magnitude diagram
The Hertzsprung-Russell diagram (HR-diagram) is a graph of a large sample of stars that plots their visual absolute magnitude against their color index. The color index, B−V, is the difference between the magnitude of the star in blue light, or B, and the magnitude in visual light (green-yellow), or V. Large positive values indicate a red star with a cool surface temperature, while negative values imply a blue star with a hotter surface.
When the stars near the Sun are plotted on an HR diagram, the result is a distribution of stars of various masses, ages, and compositions. Many of the stars lie relatively close to a sloping curve along which stars grow brighter as they grow hotter; these are known as main-sequence stars. However the diagram also typically includes stars that are in later stages of their evolution and have wandered away from this main-sequence curve.
As all the stars of a globular cluster are at approximately the same distance from us, their absolute magnitudes differ from their visual magnitude by about the same amount. The main-sequence stars in the globular cluster will fall along a line that is believed to be comparable to similar stars in the solar neighborhood. The accuracy of this assumption is confirmed by comparable results obtained by comparing the magnitudes of nearby short-period variables, such as RR Lyrae stars and cepheid variables, with those in the cluster.
By matching up these curves on the HR diagram the absolute magnitude of main-sequence stars in the cluster can also be determined. This in turn provides a distance estimate to the cluster, based on the visual magnitude of the stars. The difference between the apparent and absolute magnitude, the distance modulus, yields this estimate of the distance.
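The distance-modulus step described here reduces to a single relation, m − M = 5 log10(d / 10 pc). The Python sketch below inverts that relation; the magnitudes in the example are hypothetical illustrative values, not measurements of any particular cluster.

```python
import math

def distance_from_modulus(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs implied by the distance modulus m - M,
    using m - M = 5 * log10(d / 10 pc), i.e. d = 10 pc * 10**((m - M) / 5)."""
    modulus = apparent_mag - absolute_mag
    return 10.0 * 10.0 ** (modulus / 5.0)

# Hypothetical example: a cluster RR Lyrae star observed at m = 15.6,
# with an assumed absolute magnitude M = +0.6, implies d = 10,000 pc.
print(distance_from_modulus(15.6, 0.6))
```

Note that an error of one magnitude in the assumed absolute magnitude shifts the inferred distance by a factor of 10^0.2 ≈ 1.6, which is essentially why Shapley's Cepheid assumption led him to overestimate cluster distances.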
When the stars of a particular globular cluster are plotted on an HR diagram, in many cases nearly all of the stars fall upon a relatively well defined curve. This differs from the HR diagram of stars near the Sun, which lumps together stars of differing ages and origins. The shape of the curve for a globular cluster is characteristic of a grouping of stars that were formed at approximately the same time and from the same materials, differing only in their initial mass. As the position of each star in the HR diagram varies with age, the shape of the curve for a globular cluster can be used to measure the overall age of the star population.
The most massive main-sequence stars will also be the most luminous, and these will be the first to evolve into the giant star stage. As the cluster ages, stars of successively lower masses will also enter the giant star stage. Thus the age of a single population cluster can be measured by looking for the stars that are just beginning to enter the giant star stage. This forms a "knee" in the HR diagram, bending to the upper right from the main-sequence line. The absolute magnitude at this bend is directly a function of the age of the globular cluster, so an age scale can be plotted on an axis parallel to the magnitude.
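As a rough illustration of why the turnoff luminosity tracks age: main-sequence lifetime falls steeply with stellar mass, approximately t ≈ 10 Gyr × (M/M☉)^−2.5 (which follows from the crude mass–luminosity relation L ∝ M^3.5). The snippet below applies only this back-of-the-envelope scaling; real cluster ages come from full isochrone fits, and the masses chosen here are purely illustrative.

```python
def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Very rough main-sequence lifetime in Gyr for a star of the given mass
    (in solar masses), using t ~ 10 Gyr * (M / Msun)**-2.5.
    A crude scaling law, not a stellar-evolution model."""
    return 10.0 * mass_solar ** -2.5

# Stars still sitting at the turnoff of an old globular cluster have masses
# somewhat below the Sun's; the scaling shows why such clusters must be ~10 Gyr old.
for mass in (1.0, 0.9, 0.8):
    print(f"{mass} Msun -> {main_sequence_lifetime_gyr(mass):.1f} Gyr")
```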
In addition, globular clusters can be dated by looking at the temperatures of the coolest white dwarfs. Typical results for globular clusters are that they may be as old as 12.7 billion years. This is in contrast to open clusters which are only tens of millions of years old.
The ages of globular clusters place a lower bound on the age of the entire universe. This lower limit has been a significant constraint in cosmology. During the early 1990s, astronomers were faced with age estimates of globular clusters that appeared older than cosmological models would allow. However, better measurements of cosmological parameters through deep sky surveys and satellites such as COBE have resolved this issue, as have computer models of stellar evolution that use different treatments of mixing.
Evolutionary studies of globular clusters can also be used to determine changes due to the starting composition of the gas and dust that formed the cluster. That is, the evolutionary tracks change with changes in the abundance of heavy elements. The data obtained from studies of globular clusters are then used to study the evolution of the Milky Way as a whole.
In globular clusters a few stars known as blue stragglers are observed, apparently continuing the main sequence in the direction of brighter, bluer stars. The origin of these stars is still unclear, but most models suggest that these stars are the result of mass transfer in multiple star systems.
In contrast to open clusters, most globular clusters remain gravitationally bound for time periods comparable to the life spans of the majority of their stars. However, a possible exception is when strong tidal interactions with other large masses result in the dispersal of the stars.
After they are formed, the stars in the globular cluster begin to interact gravitationally with each other. As a result the velocity vectors of the stars are steadily modified, and the stars lose any history of their original velocity. The characteristic interval for this to occur is the relaxation time. This is related to the characteristic length of time a star needs to cross the cluster as well as the number of stellar masses in the system. The value of the relaxation time varies by cluster, but the mean value is on the order of 10^9 years.
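The relation alluded to here is commonly written t_relax ≈ [N / (8 ln N)] · t_cross, with the crossing time t_cross = R/v. The inputs in the sketch below (5 × 10^5 stars, a 5 pc radius, a 10 km/s velocity dispersion) are assumed, order-of-magnitude values rather than measurements of a specific cluster.

```python
import math

PC_IN_KM = 3.086e13          # kilometres per parsec
SECONDS_PER_YEAR = 3.156e7

def relaxation_time_years(n_stars: float, radius_pc: float, velocity_km_s: float) -> float:
    """Two-body relaxation time, t_relax ~ (N / (8 ln N)) * t_cross,
    with crossing time t_cross = R / v.  An order-of-magnitude estimate,
    not an N-body result."""
    crossing_time_s = radius_pc * PC_IN_KM / velocity_km_s
    crossing_time_yr = crossing_time_s / SECONDS_PER_YEAR
    return (n_stars / (8.0 * math.log(n_stars))) * crossing_time_yr

# The assumed illustrative inputs give a relaxation time of a few 1e9 years,
# consistent with the typical value quoted above.
print(f"{relaxation_time_years(5e5, 5.0, 10.0):.2e} years")
```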
Although globular clusters generally appear spherical in form, ellipticities can occur due to tidal interactions. Clusters within the Milky Way and the Andromeda Galaxy are typically oblate spheroids in shape, while those in the Large Magellanic Cloud are more elliptical.
Astronomers characterize the morphology of a globular cluster by means of standard radii. These are the core radius (rc), the half-light radius (rh) and the tidal radius (rt). The overall luminosity of the cluster steadily decreases with distance from the core, and the core radius is the distance at which the apparent surface luminosity has dropped by half. A comparable quantity is the half-light radius, or the distance from the core within which half the total luminosity from the cluster is received. This is typically larger than the core radius.
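These radius definitions can be made concrete with any analytic cluster profile. The sketch below uses a Plummer model purely because it has simple closed forms (observed clusters are more often fitted with King models); it locates the projected radius where the surface brightness drops to half its central value and the radius enclosing half of the projected light.

```python
import numpy as np

def plummer_surface_density(r, a, sigma0=1.0):
    # Projected surface density of a Plummer sphere with scale radius a.
    return sigma0 * (1.0 + (r / a) ** 2) ** -2

def plummer_enclosed_light(r, a):
    # Fraction of the total projected light inside projected radius r.
    return r ** 2 / (r ** 2 + a ** 2)

a = 1.0                                # scale radius in arbitrary units
r = np.linspace(0.0, 10.0 * a, 100_001)

# Core radius: surface brightness down to half its central value (~0.64 a).
r_core = r[np.argmin(np.abs(plummer_surface_density(r, a) - 0.5))]
# Half-light radius: half of the projected light enclosed (= a exactly).
r_half = r[np.argmin(np.abs(plummer_enclosed_light(r, a) - 0.5))]

print(round(r_core, 3), round(r_half, 3))
```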
Note that the half-light radius includes stars in the outer part of the cluster that happen to lie along the line of sight, so theorists will also use the half-mass radius (rm)—the radius from the core that contains half the total mass of the cluster. When the half-mass radius of a cluster is small relative to the overall size, it has a dense core. An example of this is Messier 3 (M3), which has an overall visible dimension of about 18 arc minutes, but a half-mass radius of only 1.12 arc minutes.
Almost all globular clusters have a half-light radius of less than 10 pc, although there are well-established globular clusters with very large radii (i.e. NGC 2419 (Rh = 18 pc) and Palomar 14 (Rh = 25 pc)).
Finally the tidal radius is the distance from the center of the globular cluster at which the external gravitation of the galaxy has more influence over the stars in the cluster than does the cluster itself. This is the distance at which the individual stars belonging to a cluster can be stripped away by the galaxy. The tidal radius of M3 is about 38 arc minutes.
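Turning the quoted angular radii of M3 into physical sizes needs only the small-angle relation r = d·θ plus a distance; the ~10 kpc used below is an assumed round number for illustration, not a precise measurement.

```python
import math

def angular_to_physical_pc(angle_arcmin: float, distance_pc: float) -> float:
    """Small-angle conversion from an angular radius in arcminutes to a
    physical radius in parsecs at the given distance."""
    angle_rad = math.radians(angle_arcmin / 60.0)
    return distance_pc * angle_rad

d_m3_pc = 10_000.0  # assumed distance to M3, roughly 10 kpc
print(angular_to_physical_pc(1.12, d_m3_pc))   # half-mass radius, ~3 pc
print(angular_to_physical_pc(38.0, d_m3_pc))   # tidal radius, ~110 pc
```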
Mass segregation, luminosity and core collapse
When the luminosity of a given globular cluster is measured as a function of distance from the core, the luminosity of most clusters in the Milky Way increases steadily as this distance decreases, up to a certain distance from the core, and then levels off. Typically this distance is about 1–2 parsecs from the core. However about 20% of the globular clusters have undergone a process termed "core collapse". In this type of cluster, the luminosity continues to increase steadily all the way to the core region. An example of a core-collapsed globular is M15.
Core-collapse is thought to occur when the more massive stars in a globular cluster encounter their less massive companions. Over time, dynamic processes cause individual stars to migrate from the center of the cluster to the outside. This results in a net loss of kinetic energy from the core region, leading the remaining stars grouped in the core region to occupy a more compact volume. When this gravothermal instability occurs, the central region of the cluster becomes densely crowded with stars and the surface brightness of the cluster forms a power-law cusp. (Note that a core collapse is not the only mechanism that can cause such a luminosity distribution; a massive black hole at the core can also result in a luminosity cusp.) Over a lengthy period of time this leads to a concentration of massive stars near the core, a phenomenon called mass segregation.
The dynamical heating effect of binary star systems works to prevent an initial core collapse of the cluster. When a star passes near a binary system, the orbit of the latter pair tends to contract, releasing energy. Only after the primordial supply of binaries is exhausted due to interactions can a deeper core collapse proceed. In contrast, the effect of tidal shocks as a globular cluster repeatedly passes through the plane of a spiral galaxy tends to significantly accelerate core collapse.
The different stages of core-collapse may be divided into three phases. During a globular cluster's adolescence, the process of core-collapse begins with stars near the core. However, the interactions between binary star systems prevent further collapse as the cluster approaches middle age. Finally, the central binaries are either disrupted or ejected, resulting in a tighter concentration at the core.
The interaction of stars in the collapsed core region causes tight binary systems to form. As other stars interact with these tight binaries, they increase the energy at the core, which causes the cluster to re-expand. As the mean time for a core collapse is typically less than the age of the galaxy, many of a galaxy's globular clusters may have passed through a core collapse stage, then re-expanded.
The Hubble Space Telescope has been used to provide convincing observational evidence of this stellar mass-sorting process in globular clusters. Heavier stars slow down and crowd at the cluster's core, while lighter stars pick up speed and tend to spend more time at the cluster's periphery. The globular star cluster 47 Tucanae, which is made up of about 1 million stars, is one of the densest globular clusters in the Southern Hemisphere. This cluster was subjected to an intensive photographic survey, which allowed astronomers to track the motion of its stars. Precise velocities were obtained for nearly 15,000 stars in this cluster.
A 2008 study by John Fregeau of 13 globular clusters in the Milky Way shows that three of them have an unusually large number of X-ray sources, or X-ray binaries, suggesting the clusters are middle-aged. Previously, these globular clusters had been classified as being in old age because they had very tight concentrations of stars in their centers, another test of age used by astronomers. The implication is that most globular clusters, including the other ten studied by Fregeau, are not in middle age as previously thought, but are actually in 'adolescence'.
The overall luminosities of the globular clusters within the Milky Way and the Andromeda Galaxy can be modeled by means of a Gaussian curve. This Gaussian can be represented by means of an average magnitude Mv and a variance σ². This distribution of globular cluster luminosities is called the Globular Cluster Luminosity Function (GCLF). (For the Milky Way, Mv = −7.20 ± 0.13, σ = 1.1 ± 0.1 magnitudes.) The GCLF has also been used as a "standard candle" for measuring the distance to other galaxies, under the assumption that the globular clusters in remote galaxies follow the same principles as they do in the Milky Way.
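As a sketch of the standard-candle idea (the numbers and the routine below are illustrative assumptions, not taken from the article): if the GCLF turnover is assumed to have absolute magnitude M_V ≈ −7.2, then an observed apparent turnover magnitude m_V yields a distance through the usual distance modulus m − M = 5 log₁₀(d_pc) − 5.

    # Standard-candle sketch with assumed inputs: convert the apparent GCLF
    # turnover magnitude of a remote galaxy into a distance in parsecs.
    def distance_parsecs(apparent_turnover, absolute_turnover=-7.2):
        return 10 ** ((apparent_turnover - absolute_turnover + 5.0) / 5.0)

    # e.g. a galaxy whose GCLF peaks at m_V = 24.0 (invented value)
    d = distance_parsecs(24.0)
    print("%.1f Mpc" % (d / 1e6))   # about 17 Mpc for these assumed numbers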
N-body simulations
Computing the interactions between the stars within a globular cluster requires solving what is termed the N-body problem. That is, each of the stars within the cluster continually interacts with the other N−1 stars, where N is the total number of stars in the cluster. The naive CPU computational "cost" for a dynamic simulation increases in proportion to N³, so the potential computing requirements to accurately simulate such a cluster can be enormous. An efficient method of mathematically simulating the N-body dynamics of a globular cluster is to subdivide it into small volumes and velocity ranges and to use probabilities to describe the locations of the stars. The motions are then described by means of a formula called the Fokker-Planck equation. This can be solved by a simplified form of the equation, or by running Monte Carlo simulations and using random values. However, the simulation becomes more difficult when the effects of binaries and the interaction with external gravitation forces (such as from the Milky Way galaxy) must also be included.
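To make the scaling concrete, the toy routine below (a sketch only, not code from any simulation package mentioned here) evaluates the pairwise gravitational accelerations by direct summation; a single pass over all pairs already costs of order N² operations, and a full simulation repeats such passes a great many times, which is why the total cost grows so steeply with N.

    # Toy direct-summation step: every star feels every other star, so one
    # force-evaluation pass is O(N^2). Units and softening are arbitrary here.
    def accelerations(positions, masses, G=1.0, softening=1e-3):
        n = len(positions)
        acc = [[0.0, 0.0, 0.0] for _ in range(n)]
        for i in range(n):
            xi, yi, zi = positions[i]
            for j in range(n):
                if i == j:
                    continue
                dx = positions[j][0] - xi
                dy = positions[j][1] - yi
                dz = positions[j][2] - zi
                r2 = dx * dx + dy * dy + dz * dz + softening ** 2
                f = G * masses[j] / (r2 ** 1.5)
                acc[i][0] += f * dx
                acc[i][1] += f * dy
                acc[i][2] += f * dz
        return acc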
The results of N-body simulations have shown that the stars can follow unusual paths through the cluster, often forming loops and often falling more directly toward the core than would a single star orbiting a central mass. In addition, due to interactions with other stars that result in an increase in velocity, some of the stars gain sufficient energy to escape the cluster. Over long periods of time this will result in a dissipation of the cluster, a process termed evaporation. The typical time scale for the evaporation of a globular cluster is 10¹⁰ years. In 2010 it became possible to directly compute, star by star, N-body simulations of a globular cluster over the course of its lifetime.
Binary stars form a significant portion of the total population of stellar systems, with up to half of all stars occurring in binary systems. Numerical simulations of globular clusters have demonstrated that binaries can hinder and even reverse the process of core collapse in globular clusters. When a star in a cluster has a gravitational encounter with a binary system, a possible result is that the binary becomes more tightly bound and kinetic energy is added to the solitary star. When the massive stars in the cluster are sped up by this process, it reduces the contraction at the core and limits core collapse.
Intermediate forms
The distinction between cluster types is not always clear-cut, and objects have been found that blur the lines between the categories. For example, BH 176 in the southern part of the Milky Way has properties of both an open and a globular cluster.
In 2005, astronomers discovered a completely new type of star cluster in the Andromeda Galaxy, which is, in several ways, very similar to globular clusters. The new-found clusters contain hundreds of thousands of stars, a similar number to that found in globular clusters. The clusters share other characteristics with globular clusters such as stellar populations and metallicity. What distinguishes them from the globular clusters is that they are much larger – several hundred light-years across – and hundreds of times less dense. The distances between the stars are, therefore, much greater within the newly discovered extended clusters. Parametrically, these clusters lie somewhere between a globular cluster and a dwarf spheroidal galaxy.
How these clusters are formed is not yet known, but their formation might well be related to that of globular clusters. Why M31 has such clusters, while the Milky Way does not, is not yet known. It is also unknown if any other galaxy contains these types of clusters, but it would be very unlikely that M31 is the sole galaxy with extended clusters.
Tidal encounters
When a globular cluster has a close encounter with a large mass, such as the core region of a galaxy, it undergoes a tidal interaction. The difference in the pull of gravity between the part of the cluster nearest the mass and the pull on the furthest part of the cluster results in a tidal force. A "tidal shock" occurs whenever the orbit of a cluster takes it through the plane of a galaxy.
As a result of a tidal shock, streams of stars can be pulled away from the cluster halo, leaving only the core part of the cluster. These tidal interaction effects create tails of stars that can extend up to several degrees of arc away from the cluster. These tails typically both precede and follow the cluster along its orbit. The tails can accumulate significant portions of the original mass of the cluster, and can form clumplike features.
The globular cluster Palomar 5, for example, is near the apogalactic point of its orbit after passing through the Milky Way. Streams of stars extend outward toward the front and rear of the orbital path of this cluster, stretching out to distances of 13,000 light-years. Tidal interactions have stripped away much of the mass from Palomar 5, and further interactions as it passes through the galactic core are expected to transform it into a long stream of stars orbiting the Milky Way halo.
Tidal interactions add kinetic energy into a globular cluster, dramatically increasing the evaporation rate and shrinking the size of the cluster. Not only does tidal shock strip off the outer stars from a globular cluster, but the increased evaporation accelerates the process of core collapse. The same physical mechanism may be at work in dwarf spheroidal galaxies such as the Sagittarius Dwarf, which appears to be undergoing tidal disruption due to its proximity to the Milky Way.
In 2000, the results of a search for giant planets in the globular cluster 47 Tucanae were announced. The lack of any successful discoveries suggests that the abundance of elements (other than hydrogen or helium) necessary to build these planets may need to be at least 40% of the abundance in the Sun. Terrestrial planets are built from heavier elements such as silicon, iron and magnesium. The very low abundance of these elements in globular clusters means that the member stars have a far lower likelihood of hosting Earth-mass planets, when compared to stars in the neighborhood of the Sun. Hence the halo region of the Milky Way galaxy, including globular cluster members, is unlikely to host habitable terrestrial planets.
In spite of the lower likelihood of giant planet formation, just such an object has been found in the globular cluster Messier 4. This planet was detected orbiting a pulsar in the binary star system PSR B1620-26. The eccentric and highly inclined orbit of the planet suggests it may have been formed around another star in the cluster, then later "exchanged" into its current arrangement. Close encounters between stars in a globular cluster are likely to disrupt planetary systems, some of which break loose to become free-floating planets. Even close-orbiting planets can become disrupted, potentially leading to orbital decay and an increase in orbital eccentricity and tidal effects.
See also
- Extragalactic Distance Scale
- Leonard-Merritt mass estimator
- List of globular clusters
- Plummer model
- Super star cluster
- "Hubble Images a Swarm of Ancient Stars". HubbleSite News Desk (Space Telescope Science Institute). 1999-07-01. Retrieved 2006-05-26.
- Harris, William E. (February 2003). "CATALOG OF PARAMETERS FOR MILKY WAY GLOBULAR CLUSTERS: THE DATABASE". Retrieved 2009-12-23.
- Frommert, Hartmut (August 2007). "Milky Way Globular Clusters". SEDS. Retrieved 2008-02-26.
- Ashman, Keith M.; Zepf, Stephen E. (1992). "The formation of globular clusters in merging and interacting galaxies". Astrophysical Journal, Part 1 384: 50–61. Bibcode:1992ApJ...384...50A. doi:10.1086/170850.
- Barmby, P.; Huchra, J. P. (2001). "M31 Globular Clusters in the Hubble Space Telescope Archive. I. Cluster Detection and Completeleness". The Astronomical Journal 122 (5): 2458–2468. arXiv:astro-ph/0107401. Bibcode:2001AJ....122.2458B. doi:10.1086/323457.
- McLaughlin, Dean E. ; Harris, William E.; Hanes, David A. (1994). "The spatial structure of the M87 globular cluster system". Astrophysical Journal 422 (2): 486–507. Bibcode:1994ApJ...422..486M. doi:10.1086/173744.
- Dauphole, B.; Geffert, M.; Colin, J.; Ducourant, C.; Odenkirchen, M.; Tucholke, H.-J.; Geffert; Colin; Ducourant; Odenkirchen; Tucholke (1996). "The kinematics of globular clusters, apocentric distances and a halo metallicity gradient". Astronomy and Astrophysics 313: 119–128. Bibcode:1996A&A...313..119D.
- Harris, William E. (1991). "Globular cluster systems in galaxies beyond the Local Group". Annual Review of Astronomy and Astrophysics 29 (1): 543–579. Bibcode:1991ARA&A..29..543H. doi:10.1146/annurev.aa.29.090191.002551.
- Dinescu, D. I.; Majewski, S. R.; Girard, T. M.; Cudworth, K. M. (2000). "The Absolute Proper Motion of Palomar 12: A Case for Tidal Capture from the Sagittarius Dwarf Spheroidal Galaxy". The Astronomical Journal 120 (4): 1892–1905. arXiv:astro-ph/0006314. Bibcode:2000astro.ph..6314D. doi:10.1086/301552.
- Lotz, Jennifer M.; Miller, Bryan W.; Ferguson, Henry C. (September 2004). "The Colors of Dwarf Elliptical Galaxy Globular Cluster Systems, Nuclei, and Stellar Halos". The Astrophysical Journal 613 (1): 262–278. arXiv:astro-ph/0406002. Bibcode:2004ApJ...613..262L. doi:10.1086/422871.
- van den Bergh, Sidney (November 2007). "Globular Clusters and Dwarf Spheroidal Galaxies". MNRAS (Letters), in press 385 (1): L20. arXiv:0711.4795. Bibcode:2008MNRAS.385L..20V. doi:10.1111/j.1745-3933.2008.00424.x.
- Sharp, N. A. "M22, NGC6656". REU program/NOAO/AURA/NSF. Retrieved 2006-08-16.
- Boyd, Richard N. (2008). An introduction to nuclear astrophysics. University of Chicago Press. p. 376. ISBN 0-226-06971-0.
- Ashman, Keith M.; Zepf, Stephen E. (1998). Globular cluster systems. Cambridge astrophysics series 30. Cambridge University Press. p. 2. ISBN 0-521-55057-2.
- Shapley, Harlow (1918). "Globular Clusters and the Structure of the Galactic System". Publications of the Astronomical Society of the Pacific 30 (173): 42+. Bibcode:1918PASP...30...42S. doi:10.1086/122686.
- Hogg, Helen Battles Sawyer (1965). "Harlow Shapley and Globular Clusters". Publications of the Astronomical Society of the Pacific 77 (458): 336–46. Bibcode:1965PASP...77..336S. doi:10.1086/128229.
- Piotto, G.; et al. (May 2007). "A Triple Main Sequence in the Globular Cluster NGC 2808". The Astrophysical Journal 661 (1): L53–L56. arXiv:astro-ph/0703767. Bibcode:2007ApJ...661L..53P. doi:10.1086/518503.
- Chaboyer, B. "Globular Cluster Age Dating". Astrophysical Ages and Times Scales, ASP Conference Series 245. pp. 162–172. Bibcode:2001ASPC..245..162C.
- Piotto, Giampaolo (June 2009). "Observations of multiple populations in star clusters". The Ages of Stars, Proceedings of the International Astronomical Union, IAU Symposium 258. pp. 233–244. Bibcode:2009IAUS..258..233P. doi:10.1017/S1743921309031883.
- Weaver, D.; Villard, R.; Christensen, L. L.; Piotto, G.; Bedin, L. (2007-05-02). "Hubble Finds Multiple Stellar 'Baby Booms' in a Globular Cluster". Hubble News Desk. Retrieved 2007-05-01.
- Elmegreen, B. G.; Efremov, Y. N. (1999). "A Universal Formation Mechanism for Open and Globular Clusters in Turbulent Gas". Astrophysical Journal 480 (2): 235. Bibcode:1997ApJ...480..235E. doi:10.1086/303966.
- Burkert, Andreas; Tremaine, Scott (April 1, 2010). "A correlation between central supermassive black holes and the globular cluster systems of early-type galaxies". arXiv:1004.0137 [astro-ph.CO]. "A possible explanation is that both large black-hole masses and large globular cluster populations are associated with recent major mergers.".
- "Young and Exotic Stellar Zoo: ESO's Telescopes Uncover Super Star Cluster in the Milky Way". ESO. 2005-03-22. Retrieved 2007-03-20.
- "ESA/Hubble Picture of the Week". Engulfed by Stars Near the Milky Way’s Heart. Retrieved 28 June 2011.
- Talpur, Jon (1997). "A Guide to Globular Clusters". Keele University. Retrieved 2007-04-25.
- University of Durham - Department of Physics - The Hertzsprung-Russell Diagram of a Globular Cluster
- ESO - eso0107 - Ashes from the Elder Brethren
- Sigurdsson, Steinn (1992). "Planets in globular clusters?". Astrophysical Journal 399 (1): L95–L97. Bibcode:1992ApJ...399L..95S. doi:10.1086/186615.
- Arzoumanian, Z.; Joshi, K.; Rasio, F. A.; Thorsett, S. E.; Joshi; Rasio; Thorsett (1999). "Orbital Parameters of the PSR B1620-26 Triple System". Proceedings of the 160th colloquium of the International Astronomical Union 105: 525. arXiv:astro-ph/9605141. Bibcode:1996astro.ph..5141A.
- Bekki, K.; Freeman, K. C. (December 2003). "Formation of ω Centauri from an ancient nucleated dwarf galaxy in the young Galactic disc". Monthly Notices of the Royal Astronomical Society 346 (2): L11–L15. arXiv:astro-ph/0310348. Bibcode:2003MNRAS.346L..11B. doi:10.1046/j.1365-2966.2003.07275.x.
- Forbes, Duncan A.; Bridges, Terry (January 25, 2010). "Accreted versus In Situ Milky Way Globular Clusters". arXiv:1001.4289 [astro-ph.GA].
- van der Marel, Roeland (2002-03-03). "Black Holes in Globular Clusters". Space Telescope Science Institute. Retrieved 2006-06-08.
- "Spot the Difference — Hubble spies another globular cluster, but with a secret". Picture of the Week. ESA/Hubble. Retrieved 5 October 2011.
- Green, Simon F.; Jones, Mark H.; Burnell, S. Jocelyn (2004). An introduction to the sun and stars. Cambridge University Press. p. 240. ISBN 0-521-54622-2.
- van Albada, T. S.; Baker, Norman (1973). "On the Two Oosterhoff Groups of Globular Clusters". Astrophysical Journal 185: 477–498. Bibcode:1973ApJ...185..477V. doi:10.1086/152434.
- Harris, W. E. (1976). "Spatial structure of the globular cluster system and the distance to the galactic center". Astronomical Journal 81: 1095–1116. Bibcode:1976AJ.....81.1095H. doi:10.1086/111991.
- Lee, Y. W.; Yoon, S. J. (2002). "On the Construction of the Heavens". An Aligned Stream of Low-Metallicity Clusters in the Halo of the Milky Way 297 (5581): 578–81. arXiv:astro-ph/0207607. Bibcode:2002Sci...297..578Y. doi:10.1126/science.1073090. PMID 12142530.
- Leonard, P. J. t. (1989). "Stellar collisions in globular clusters and the blue straggler problem". The Astrophysical Journal 98: 217. Bibcode:1989AJ.....98..217L. doi:10.1086/115138.
- Rubin, V. C.; Ford, W. K. J. (1999). "A Thousand Blazing Suns: The Inner Life of Globular Clusters". Mercury 28: 26. Bibcode:1999Mercu..28d..26M. Retrieved 2006-06-02.
- Savage, D.; Neal, N.; Villard, R.; Johnson, R.; Lebo, H. (2002-09-17). "Hubble Discovers Black Holes in Unexpected Places". HubbleSite (Space Telescope Science Institute). Retrieved 2006-05-25.
- Finley, Dave (2007-05-28). "Star Cluster Holds Midweight Black Hole, VLA Indicates". NRAO. Retrieved 2007-05-29.
- Baumgardt, Holger; Hut, Piet; Makino, Junichiro; McMillan, Steve; Portegies Zwart, Simon (2003). "On the Central Structure of M15". Astrophysical Journal Letters 582 (1): 21. arXiv:astro-ph/0210133. Bibcode:2003ApJ...582L..21B. doi:10.1086/367537.
- Baumgardt, Holger; Hut, Piet; Makino, Junichiro; McMillan, Steve; Portegies Zwart, Simon (2003). "A Dynamical Model for the Globular Cluster G1". Astrophysical Journal Letters 589 (1): 25. arXiv:astro-ph/0301469. Bibcode:2003ApJ...589L..25B. doi:10.1086/375802. Retrieved 2006-09-13.
- Shapley, H. (1917). "Studies based on the colors and magnitudes in stellar clusters. I,II,III". Astrophysical Journal 45: 118–141. Bibcode:1917ApJ....45..118S. doi:10.1086/142314.
- Martin, Schwarzschild (1958). Structure and Evolution of Stars. Princeton University Press. ISBN 0-486-61479-4.
- Sandage, A.R. (1957). "Observational Approach to Evolution. III. Semiempirical Evolution Tracks for M67 and M3". Astrophysical Journal 126: 326. Bibcode:1957ApJ...126..326S. doi:10.1086/146405.
- Hansen, B. M. S.; Brewer, J.; Fahlman, G. G.; Gibson, B. K.; Ibata, R.; Limongi, M.; Rich, R. M.; Richer, H. B.; Shara, M. M.; Stetson, P. B. (2002). "The White Dwarf Cooling Sequence of the Globular Cluster Messier 4". Astrophysical Journal Letters 574 (2): L155. arXiv:astro-ph/0205087. Bibcode:2002ApJ...574L.155H. doi:10.1086/342528.
- "Ashes from the Elder Brethren — UVES Observes Stellar Abundance Anomalies in Globular Clusters" (Press release). 2001-03-01. Retrieved 2006-05-26.
- Leonard, Peter J. T. (1989). "Stellar collisions in globular clusters and the blue straggler problem". The Astronomical Journal 98: 217–226. Bibcode:1989AJ.....98..217L. doi:10.1086/115138.
- "Appearances can be deceptive". ESO Picture of the Week. Retrieved 12 February 2013.
- Benacquista, Matthew J. (2006). "Globular cluster structure". Living Reviews in Relativity. Retrieved 2006-08-14.
- Staneva, A.; Spassova, N.; Golev, V. (1996). "The Ellipticities of Globular Clusters in the Andromeda Galaxy". Astronomy and Astrophysics Supplement 116 (3): 447–461. Bibcode:1996A&AS..116..447S. doi:10.1051/aas:1996127.
- Frenk, C. S.; White, S. D. M. (1980). "The ellipticities of Galactic and LMC globular clusters". Monthly Notices of the Royal Astronomical Society 286 (3): L39–L42. arXiv:astro-ph/9702024. Bibcode:1997astro.ph..2024G.
- Buonanno, R.; Corsi, C. E.; Buzzoni, A.; Cacciari, C.; Ferraro, F. R.; Fusi Pecci, F.; Corsi; Buzzoni; Cacciari; Ferraro; Fusi Pecci (1994). "The Stellar Population of the Globular Cluster M 3. I. Photographic Photometry of 10 000 Stars". Astronomy and Astrophysics 290: 69–103. Bibcode:1994A&A...290...69B.
- Djorgovski, S.; King, I. R. (1986). "A preliminary survey of collapsed cores in globular clusters". Astrophysical Journal 305: L61–L65. Bibcode:1986ApJ...305L..61D. doi:10.1086/184685.
- Ashman, Keith M.; Zepf, Stephen E. (1998). Globular cluster systems. Cambridge astrophysics series 30. Cambridge University Press. p. 29. ISBN 0-521-55057-2.
- Binney, James; Merrifield, Michael (1998). Galactic astronomy. Princeton series in astrophysics. Princeton University Press. p. 371. ISBN 0-691-02565-7.
- Vanbeveren, D. (2001). The influence of binaries on stellar population studies. Astrophysics and space science library 264. Springer. p. 397. ISBN 0-7923-7104-6.
- Spitzer, L., Jr. (June 2–4, 1986). "Dynamical Evolution of Globular Clusters". In P. Hut and S. McMillan. The Use of Supercomputers in Stellar Dynamics, Proceedings of a Workshop Held at the Institute for Advanced Study 267. Princeton, USA: Springer-Verlag, Berlin Heidelberg New York. p. 3. Bibcode:1986LNP...267....3S. doi:10.1007/BFb0116388.
- Gnedin, Oleg Y.; Lee, Hyung Mok; Ostriker, Jeremiah P. (September 1999). "Effects of Tidal Shocks on the Evolution of Globular Clusters". The Astrophysical Journal 522 (2): 935–949. arXiv:astro-ph/9806245. Bibcode:1999ApJ...522..935G. doi:10.1086/307659.
- Bahcall, John N.; Piran, Tsvi; Weinberg, Steven (2004). Dark matter in the universe (2nd ed.). World Scientific. p. 51. ISBN 981-238-841-9.
- "Stellar Sorting in Globular Cluster 47". Hubble News Desk. 2006-10-04. Retrieved 2006-10-24.
- Baldwin, Emily (2008-04-29). "Old globular clusters surprisingly young". Astronomy Now Online. Retrieved 2008-05-02.
- Secker, Jeff (1992). "A Statistical Investigation into the Shape of the Globular cluster Luminosity Distribution". Astronomical Journal 104 (4): 1472–1481. Bibcode:1992AJ....104.1472S. doi:10.1086/116332.
- Benacquista, Matthew J. (2002-02-20). "Relativistic Binaries in Globular Clusters: 5.1 N-body". Living Reviews in Relativity. Retrieved 2006-10-25.
- Hut, Piet; Makino, Jun. "Maya Open Lab". The Art of Computational Science. Retrieved 2012-03-26.
- Heggie, D. C.; Giersz, M.; Spurzem, R.; Takahashi, K. (1998). "Dynamical Simulations: Methods and Comparisons". In Johannes Andersen. Highlights of Astronomy Vol. 11A, as presented at the Joint Discussion 14 of the XXIIIrd General Assembly of the IAU, 1997. Kluwer Academic Publishers. p. 591. Bibcode:1997astro.ph.11191H.
- Benacquista, Matthew J. (2006). "Relativistic Binaries in Globular Clusters". Living Reviews in Relativity 9.
- J. Goodman and P. Hut, ed. (1985). Dynamics of Star Clusters (International Astronomical Union Symposia). Springer. ISBN 90-277-1963-2.
- Hasani Zonoozi, Akram; et al. (March 2011). "Direct N-body simulations of globular clusters – I. Palomar 14". Monthly Notices of the Royal Astronomical Society 411 (3): 1989–2001. arXiv:1010.2210. Bibcode:2011MNRAS.411.1989Z. doi:10.1111/j.1365-2966.2010.17831.x.
- Zhou, Yuan; Zhong, Xie Guang (June 1990). "The core evolution of a globular cluster containing massive black holes". Astrophysics and Space Science 168 (2): 233–241. Bibcode:1990Ap&SS.168..233Y. doi:10.1007/BF00636869.
- Pooley, Dave. "Globular Cluster Dynamics: the importance of close binaries in a real N-body system". UW-Madison. Retrieved 2008-12-11.
- "Globular Cluster M10". ESA/Hubble Picture of the Week. Retrieved 18 June 2012.
- Ortolani, S.; Bica, E.; Barbuy, B.; Bica; Barbuy (1995). "BH 176 and AM-2: globular or open clusters?". Astronomy and Astrophysics 300: 726. Bibcode:1995A&A...300..726O.
- Huxor, A. P.; Tanvir, N. R.; Irwin, M. J.; R. Ibata (2005). "A new population of extended, luminous, star clusters in the halo of M31". Monthly Notices of the Royal Astronomical Society 360 (3): 993–1006. arXiv:astro-ph/0412223. Bibcode:2005MNRAS.360.1007H. doi:10.1111/j.1365-2966.2005.09086.x.
- Lauchner, A.; Wilhelm, R.; Beers, T. C.; Allende Prieto, C. (December 2003). "A Search for Kinematic Evidence of Tidal Tails in Globular Clusters". American Astronomical Society Meeting 203, #112.26. American Astronomical Society. Bibcode:2003AAS...20311226L.
- Di Matteo, P.; Miocchi, P.; Capuzzo Dolcetta, R. (May 2004). "Formation and Evolution of Clumpy Tidal Tails in Globular Clusters". American Astronomical Society, DDA meeting #35, #03.03. American Astronomical Society. Bibcode:2004DDA....35.0303D.
- Staude, Jakob (2002-06-03). "Sky Survey Unveils Star Cluster Shredded By The Milky Way". Image of the Week. Sloan Digital Sky Survey. Retrieved 2006-06-02.
- Kravtsov, V. V. (2001). "Globular Clusters and Dwarf Spheroidal Galaxies of the Outer Galactic Halo: on the Putative Scenario of their Formation" (PDF). Astronomical and Astrophysical Transactions 20 (1): 89–92. Bibcode:2001A&AT...20...89K. doi:10.1080/10556790108208191. Retrieved 2010-03-02.
- Gonzalez, Guillermo; Brownlee, Donald; Ward, Peter (July 2001). "The Galactic Habitable Zone: Galactic Chemical Evolution". Icarus 152 (1): 185–200. arXiv:astro-ph/0103165. Bibcode:2001Icar..152..185G. doi:10.1006/icar.2001.6617
- Sigurdsson, S. et al. (2008). Planets Around Pulsars in Globular Clusters. In Fischer, D.; Rasio, F. A.; Thorsett, S. E. et al. "Extreme Solar Systems, ASP Conference Series, proceedings of the conference held 25-29 June, 2007, at Santorini Island, Greece". Extreme Solar Systems 398: 119. Bibcode:2008ASPC..398..119S
- Spurzem, R. et al. (May 2009). "Dynamics of Planetary Systems in Star Clusters". The Astrophysical Journal 697 (1): 458–482. arXiv:astro-ph/0612757. Bibcode:2009ApJ...697..458S. doi:10.1088/0004-637X/697/1/458
General resources
- NASA Astrophysics Data System has a collection of past articles, from all major astrophysics journals and many conference proceedings.
- SCYON is a newsletter dedicated to star clusters.
- MODEST is a loose collaboration of scientists working on star clusters.
- Binney, James; Tremaine, Scott (1987). Galactic Dynamics (First ed.). Princeton, New Jersey: Princeton University Press. ISBN 0-691-08444-0.
- Heggie, Douglas; Hut, Piet (2003). The Gravitational Million-Body Problem: A Multidisciplinary Approach to Star Cluster Dynamics. Cambridge University Press. ISBN 0-521-77486-1.
- Spitzer, Lyman (1987). Dynamical Evolution of Globular Clusters. Princeton, New Jersey: Princeton University Press. ISBN 0-691-08460-2.
Review articles
- Elson, Rebecca; Hut, Piet; Inagaki, Shogo (1987). "Dynamical evolution of globular clusters". Annual Review of Astronomy and Astrophysics 25: 565. Bibcode:1987ARA&A..25..565E.
- Meylan, G.; Heggie, D. C. (1997). "Internal dynamics of globular clusters". The Astronomy and Astrophysics Review 8: 1. Bibcode:1997A&ARv...8....1M.
- Globular Clusters, SEDS Messier pages
- Milky Way Globular Clusters
- Catalogue of Milky Way Globular Cluster Parameters by William E. Harris, McMaster University, Ontario, Canada
- A galactic globular cluster database by Marco Castellani, Rome Astronomical Observatory, Italy
- Key stars have different birthdays article describes how stars in globular clusters are born in several bursts, rather than all at once
- Globular Clusters Blog News, papers and preprints on Galactic Globular Clusters
- Globular Clusters Group on CiteULike
- Clickable Messier Object table including globular clusters | http://en.wikipedia.org/wiki/Globular_cluster | 13 |
Students will learn about angles by using protractors.
Main Curriculum Tie:
Mathematics Grade 4
Geometric measurement: understand concepts of angle and measure angles.
5. Recognize angles as geometric shapes that are formed wherever two rays share a common endpoint, and understand concepts of angle measurement:
- Chalkboard protractor
- 10-15 different art prints
For each group:
- Transparency sheets
- Wax pencil
For each student:
Background For Teachers:
Prior to this activity, students should have an understanding of right
angle, acute angle, obtuse angle, and vertex.
The size of an angle depends on the opening between the two sides of
the angle. Angles are measured in units referred to as degrees and labeled
with the ˚ symbol.
The size of an angle can be described in relation to a complete circle
(360˚), 1/2 of a circle (180˚), or 1/4 of a circle (90˚).
It is important to teach students how to extend rays of an angle when
using a protractor. This not only helps them measure but, helps in the
construction of angles. It is also important to make sure students
understand how to use the interior and exterior numbers on a protractor.
When selecting art prints, try to select a wide variety. See Art
Print Suggestions for ideas.
Twizzler Pull Aparts or Wikki Stix can be used in place of paper and
pencil when constructing angles.
This lesson usually takes about three days to teach and assess.
Intended Learning Outcomes:
1. Demonstrate a positive learning attitude toward mathematics.
2. Become mathematical problem solvers.
Invitation to Learn
Have each student spread out his/her fingers and look at his/her hand.
Use the following questions to promote discussion and thinking about
angles, “Do you think people with larger hands have larger angles
between their fingers?” Take student predictions and explain that at the
end of the lesson we will use our class as a sampling to answer the
question, "Can you use your fingers to make a 90˚ angle?" (thumb and index finger)
Developing "angle sense"
Use foam or fraction circles to help students visualize angle size in
relation to a 360˚ circle. Start with a whole circle, 1/2s, 1/4s, and 1/8s.
Add along the way (e.g., 1/4 = 90˚ so it will take four 1/4 pieces to equal
the 360˚ whole).
- Give each student an angle
wheel to help further develop “angle
sense.” Before moving on, make sure they understand that angles
are measured in degrees.
- Allow students to experiment with their wheels,
asking them to
look for patterns. For example, the larger the angle the greater the
measurement in degrees.
- Place students in pairs. Partners take turns displaying an angle on the
wheel while the other partner estimates the measurement of the angle.
- Display sets of angles on the
chalkboard and ask, “Would the
angle wheel be an effective measurement tool to measure these angles?"
- Introduce the protractor using the chalkboard protractor. Be
sure to model several examples of measuring angles with a
protractor before handing out individual protractors to each
student. Explain interior and exterior numbers on the protractor. Show students
how to extend rays when necessary for easier
measuring. Does extending the rays of the angle change the
measurement of the angle? Practice measuring angles in
isolation before moving on to measuring angles in the prints.
(Math books include such angles and work well for practicing.)
- Place students
in pairs or groups of three. Using 10-15
different art prints, each pair/group measures an angle from
each piece of art. Record the measurement of an angle from
each print, including a description of the object it belongs to, in
math journals. Allow each group three to five minutes with each
print before passing the print to another group.
- Numbering each print with
a Post-it® note helps make sure
each group has a chance to work with all of the prints. It also
helps the students organize and record information about the
prints in their journals.
- Use transparency sheets to protect the prints.
Ray extension will be necessary for easier measuring of angles.
The students may only write on the transparencies with wax pencils.
- Instruct students to select their choice of prints to answer
the following questions in their math journals. They must choose a
different print to answer each question.
- How does the artist use angles
to create the overall feel in the
piece of art?
- How does the artist use angles to create depth and/or perspective?
- Which styles/types of art use sharper, more definite angles?
- Choose your favorite print. Did the artist use a variety of angles?
- How did the use of angles affect the feel of the piece?
- Using the information on
how to measure an existing angle, ask
students how they could use a protractor to create their own
angles. Model angle construction using the chalkboard protractor.
For guided practice, have a few students suggest angles for the
class to construct. The Angle Assessment worksheet may also be
used for guided practice.
- Students use what they have learned by spreading their fingers
apart and measuring the angles between them. This works best
when done in pairs. Instruct students to use their protractors,
pencil, and paper to neatly measure and construct the angles
between two of their fingers. Record and compare data to answer the question
posed at the beginning of the lesson.
- The following extension possibilities
work well for students who
need extra support:
- Use a clock manipulative as an example of angles to aid in
recognizing different angles in their surroundings.
- Start with a straight
line (180˚) and progressively create a new
angle every 10˚. This helps students see the correlation
between angle size and the protractor.
- The following extension possibilities
work well for students who
need extra challenges:
- Design a perfect circle using a protractor.
- Use a protractor to design
runs for a ski resort. How will the
angles of the more difficult runs compare to the beginner runs?
- Use a protractor to design ramps for a skate park. How does
the degree of difficulty relate to the measurement of the angles?
- Research the relationship between landslides, glaciers,
erosion, and slope angles.
- Give students a protractor
transparency to take home and
demonstrate their new skills to their families.
- Send family members on an
angle scavenger hunt. For example,
find something in the house that has an angle measurement
between 120˚ and 140˚.
- Have each student choose three to five of the angles
they measured and recorded from the art prints. Using the
measurements, students construct the angles and include them in
their own piece of hand drawn abstract art. Students could also
create a piece that uses angles to create the illusion of depth or perspective
(refer to M.C. Escher print Ascending and
Descending). The use of protractors must be incorporated in their work.
- Use protractors to complete the Angle Assessment worksheet.
Hartshorn, R. & Boren, S. (1990). Experiential learning of mathematics.
ERIC Digest #ED321967
This lesson includes the angle wheel because it serves as a concrete
representation of angles. Research suggests that incorporating the use of
manipulatives in mathematics instruction is "useful in the transition from
concrete to abstract taught in steps, semi-concrete to semi-abstract.” In
this lesson, the angle wheel serves as the concrete and is introduced
before moving on to the abstract—measuring of angles on paper.
Gresham, G., Sloan T., and Vinson B. (1997). Reducing mathematics anxiety
in fourth grade
at-risk students. Retrieved January 2, 2005, from Athens State College, School of
Education Web site: http://www.Athens.edu/vinsobm/research_4.html.
Research also suggests that the use of mathematical manipulatives
reduces the level of math anxiety in high risk students.
Lou, Y., Abrami, P. C., Spence, J. C., Paulsen, C., Chambers, B., & d'Apollonia, S.
(1996). Within-class grouping: A meta-analysis. Review of Educational
Research, 66(4), 423-458.
The activities in this lesson plan were designed to be completed in
pairs or groups of three. Research on cooperative learning and group
work indicates that “small teams of three to four members” are more
effective than larger groups. This lesson was created with the intention of
maximizing the benefits of cooperative learning.
In arithmetic, long division is a standard division algorithm suitable for dividing simple or complex multidigit numbers that is simple enough to perform by hand. It breaks down a division problem into a series of easier steps. As in all division problems, one number, called the dividend, is divided by another, called the divisor, producing a result called the quotient. It enables computations involving arbitrarily large numbers to be performed by following a series of simple steps. The abbreviated form of long division is called short division, which is almost always used instead of long division when the divisor has only one digit.
Place in education
Inexpensive calculators and computers have become the most common way to solve division problems, eliminating a traditional mathematical exercise, and decreasing the educational opportunity to show how to do so by paper and pencil techniques. (Internally, those devices use one of a variety of division algorithms). In the United States, long division has been especially targeted for de-emphasis, or even elimination from the school curriculum, by reform mathematics, though traditionally introduced in the 4th or 5th grades.
The process is begun by dividing the left-most digit of the dividend by the divisor. The quotient (rounded down to an integer) becomes the first digit of the result, and the remainder is calculated (this step is notated as a subtraction). This remainder carries forward when the process is repeated on the following digit of the dividend (notated as 'bringing down' the next digit to the remainder). When all digits have been processed and no remainder is left, the process is complete.
An example is shown below, representing the division of 500 by 4 (with a result of 125).
   125     (Explanations)
 4)500
   4       (4 × 1 = 4)
   10      (5 - 4 = 1)
    8      (4 × 2 = 8)
    20     (10 - 8 = 2)
    20     (4 × 5 = 20)
     0     (20 - 20 = 0)
In the above example, the first step is to find the shortest sequence of digits starting from the left end of the dividend, 500, that the divisor 4 goes into at least once; this shortest sequence in this example is simply the first digit, 5. The largest number that the divisor 4 can be multiplied by without exceeding 5 is 1, so the digit 1 is put above the 5 to start constructing the quotient. Next, the 1 is multiplied by the divisor 4, to obtain the largest whole number (4 in this case) that is a multiple of the divisor 4 without exceeding the 5; this product of 1 times 4 is 4, so 4 is placed underneath the 5. Next the 4 under the 5 is subtracted from the 5 to get the remainder, 1, which is placed under the 4 under the 5. This remainder 1 is necessarily smaller than the divisor 4. Next the first as-yet unused digit in the dividend, in this case the first digit 0 after the 5, is copied directly underneath itself and next to the remainder 1, to form the number 10. At this point the process is repeated enough times to reach a stopping point: The largest number by which the divisor 4 can be multiplied without exceeding 10 is 2, so 2 is written above the 0 that is next to the 5 – that is, directly above the last digit in the 10. Then the latest entry to the quotient, 2, is multiplied by the divisor 4 to get 8, which is the largest multiple of 4 that does not exceed 10; so 8 is written below 10, and the subtraction 10 minus 8 is performed to get the remainder 2, which is placed below the 8. This remainder 2 is necessarily smaller than the divisor 4. The next digit of the dividend (the last 0 in 500) is copied directly below itself and next to the remainder 2, to form 20. Then the largest number by which the divisor 4 can be multiplied without exceeding 20 is ascertained; this number is 5, so 5 is placed above the last dividend digit that was brought down (i.e., above the rightmost 0 in 500). Then this new quotient digit 5 is multiplied by the divisor 4 to get 20, which is written at the bottom below the existing 20. Then 20 is subtracted from 20, yielding 0, which is written below the 20. We know we are done now because two things are true: there are no more digits to bring down from the dividend, and the last subtraction result was 0.
If the last remainder when we ran out of dividend digits had been something other than 0, there would have been two possible courses of action. (1) We could just stop there and say that the dividend divided by the divisor is the quotient written at the top with the remainder written at the bottom; equivalently we could write the answer as the quotient followed by a fraction that is the remainder divided by the divisor. Or, (2) we could extend the dividend by writing it as, say, 500.000... and continue the process (using a decimal point in the quotient directly above the decimal point in the dividend), in order to get a decimal answer, as in the following example.
    31.75
 4)127.00
   124
     3 0      (0 is written because 4 does not go into 3, using whole numbers.)
     30       (0 is added in order to make 3 divisible by 4; the 0 is accounted for by adding a decimal point in the quotient.)
     28       (7 × 4 = 28)
      20      (an additional zero is brought down)
      20      (5 × 4 = 20)
       0
In this example, the decimal part of the result is calculated by continuing the process beyond the units digit, "bringing down" zeros as being the decimal part of the dividend.
This example also illustrates that, at the beginning of the process, a step that produces a zero can be omitted. Since the first digit 1 is less than the divisor 4, the first step is instead performed on the first two digits 12. Similarly, if the divisor were 13, one would perform the first step on 127 rather than 12 or 1.
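The procedure lends itself directly to a short program. The sketch below is one possible rendering (it is not part of the original article); it performs the same digit-by-digit steps on whole numbers and returns the quotient and the remainder.

    # Digit-by-digit long division of whole numbers, mirroring the steps above:
    # bring down the next digit, find the largest quotient digit, subtract, repeat.
    def long_division(dividend, divisor):
        quotient_digits = []
        remainder = 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)   # "bring down" the next digit
            q = remainder // divisor                  # largest digit that fits
            quotient_digits.append(str(q))
            remainder -= q * divisor                  # subtract the partial product
        return int("".join(quotient_digits)), remainder

    print(long_division(500, 4))       # (125, 0)
    print(long_division(1260257, 37))  # (34061, 0)
    print(long_division(127, 4))       # (31, 3)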
Basic procedure for long division by longhand
- When dividing two numbers, for example, n divided by m, n is the dividend and m is the divisor; the answer is the quotient.
- Find the location of all decimal points in the dividend and divisor.
- If necessary, simplify the long division problem by moving the decimal points of the divisor and dividend by the same number of decimal places to the right (or to the left), so that the decimal point of the divisor is to the right of its last digit.
- When doing long division, keep the numbers lined up straight from top to bottom under the tableau.
- After each step, be sure the remainder for that step is less than the divisor. If it is not, there are three possible problems: the multiplication is wrong, the subtraction is wrong, or a greater quotient is needed.
- In the end, the remainder, r, is added to the growing quotient as a fraction, r/m.
Example with multi-digit divisor
A divisor of any number of digits can be used. In this example, 37 is to be divided into 1260257. First the problem is set up as follows:
Digits of the number 1260257 are taken until a number greater than 37 occurs. So 1 and 12 are less than 37, but 126 is greater. Next, the greatest multiple of 37 less than 126 is computed. So 3 × 37 = 111 < 126, but 4 × 37 > 126. This is written underneath the 126 and the multiple of 37 is written on the top where the solution will appear:
      3
 37)1260257
    111
Note carefully which columns these digits are written into - the 3 is put in the same column as the 6 in the dividend 1260257.
The 111 is then subtracted from the above line, ignoring all digits to the right:
      3
 37)1260257
    111
     15
Now digits are copied down from the dividend and appended to the result of 15 until a number greater than 37 is obtained. 150 is greater so only the 0 is copied:
      3
 37)1260257
    111
     150
The process repeats: the greatest multiple of 37 less than 150 is subtracted. This is 148 = 4 × 37, so a 4 is added to the solution line. Then the result of the subtraction is extended by digits taken from the dividend:
      34
 37)1260257
    111
     150
     148
       225
Notice that two digits had to be used to extend 2, as 22 < 37.
This is repeated until 37 divides the last line exactly:
      34061
 37)1260257
    111
     150
     148
       225
       222
         37
         37
          0
Mixed mode long division
          m -   yd -  ft -  in
          1 -  634 -   1 -   9  r. 15"
   37 )  50 -  600 -   0 -   0
         37  22880    66   348
         ==  =====    ==   ===
         13  23480    66   348
      17600    222    37   333
       5280    128    29    15
      22880    111   348
               170
               148
                22
                66
Each of the four columns is worked in turn. Starting with the miles: 50/37 = 1 remainder 13. No further division is possible, so perform a long multiplication by 1,760 to convert miles to yards, the result is 22,880 yards. Carry this to the top of the yards column and add it to the 600 yards in the dividend giving 23,480. Long division of 23,480 / 37 now proceeds as normal yielding 634 with remainder 22. The remainder is multiplied by 3 to get feet and carried up to the feet column. Long division of the feet gives 1 remainder 29 which is then multiplied by twelve to get 348 inches. Long division continues with the final remainder of 15 inches being shown on the result line.
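The same column-by-column idea can be expressed programmatically. The sketch below is only an illustration of the procedure described above (it is not from the article); the unit quantities and conversion factors are passed in explicitly.

    # Sketch: divide the largest unit first, convert each remainder into the
    # next smaller unit, add the next column of the dividend, and divide again.
    def divide_mixed(quantities, factors, divisor):
        """quantities: amounts per unit, largest unit first; factors: conversion
        factor from each unit to the next smaller one (one fewer than quantities)."""
        result = []
        carry = 0
        remainder = 0
        for i, q in enumerate(quantities):
            value = carry + q
            result.append(value // divisor)
            remainder = value % divisor
            carry = remainder * factors[i] if i < len(factors) else 0
        return result, remainder

    # 50 miles 600 yards 0 feet 0 inches divided by 37
    # (1760 yd per mile, 3 ft per yd, 12 in per ft)
    print(divide_mixed([50, 600, 0, 0], [1760, 3, 12], 37))
    # -> ([1, 634, 1, 9], 15)  i.e. 1 mi 634 yd 1 ft 9 in, remainder 15 in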
Non-decimal radix
The same method and layout are used for binary, octal and hexadecimal. An address range of 0xf412df divided into 0x12 parts is:
        0d8f45 r. 5
  12 )  f412df
        ea
         a1
         90
         112
         10e
           4d
           48
            5f
            5a
             5
Binary is of course trivial because each digit in the result can only be 1 or 0:
          1110 r. 11
1101) 10111001
       1101
       10100
        1101
        1110
         1101
           11
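The digit-by-digit routine carries over to any base once the digits are read and written in that base. The following sketch (an illustration only, assuming a lowercase hexadecimal digit set for bases up to 16) reproduces the two examples above.

    # Long division in an arbitrary base: digits are processed most-significant
    # first, exactly as on paper, and the remainder is kept between steps.
    DIGITS = "0123456789abcdef"

    def long_division_base(dividend_digits, divisor, base):
        quotient, remainder = [], 0
        for d in dividend_digits:
            remainder = remainder * base + DIGITS.index(d)
            quotient.append(DIGITS[remainder // divisor])
            remainder %= divisor
        return "".join(quotient).lstrip("0") or "0", remainder

    print(long_division_base("f412df", 0x12, 16))     # ('d8f45', 5)
    print(long_division_base("10111001", 0b1101, 2))  # ('1110', 3); 3 is 0b11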
Interpretation of decimal results
When the quotient is not an integer and the division process is extended beyond the decimal point, one of two things can happen. (1) The process can terminate, which means that a remainder of 0 is reached; or (2) a remainder could be reached that is identical to a previous remainder that occurred after the decimal points were written. In the latter case, continuing the process would be pointless, because from that point onward the same sequence of digits would appear in the quotient over and over. So a bar is drawn over the repeating sequence to indicate that it repeats forever.
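This observation about remainders gives a simple way to compute the repeating part: stop as soon as a remainder reappears. The sketch below is illustrative only (not from the article) and marks the repetend in parentheses rather than with an overbar.

    # The decimal part repeats as soon as a remainder reappears, so remembering
    # remainders already seen locates the repeating block.
    def decimal_expansion(numerator, denominator):
        integer_part, remainder = divmod(numerator, denominator)
        digits, seen = [], {}
        while remainder and remainder not in seen:
            seen[remainder] = len(digits)       # position where this remainder appeared
            remainder *= 10
            digits.append(str(remainder // denominator))
            remainder %= denominator
        if not remainder:                       # terminating decimal
            return "%d.%s" % (integer_part, "".join(digits) or "0")
        start = seen[remainder]                 # repetition begins here
        head, cycle = "".join(digits[:start]), "".join(digits[start:])
        return "%d.%s(%s)" % (integer_part, head, cycle)   # (...) marks the repetend

    print(decimal_expansion(127, 4))   # 31.75
    print(decimal_expansion(1, 7))     # 0.(142857)
    print(decimal_expansion(1, 6))     # 0.1(6)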
Notation in non-English-speaking countries
China, Japan and India use the same notation as English-speakers. Elsewhere, the same general principles are used, but the figures are often arranged differently.
Latin America
In Latin America (except Mexico, Colombia, Venezuela and Brazil), the calculation is almost exactly the same, but is written down differently as shown below with the same two examples used above. Usually the quotient is written under a bar drawn under the divisor. A long vertical line is sometimes drawn to the right of the calculations.
500 ÷ 4 = 125     (Explanations)
  4               (4 × 1 = 4)
 10               (5 - 4 = 1)
  8               (4 × 2 = 8)
 20               (10 - 8 = 2)
 20               (4 × 5 = 20)
  0               (20 - 20 = 0)
127 ÷ 4 = 31.75
124
  3 0     (0 is written because 4 does not go into 3, using whole numbers.)
  30      (a 0 is added in order to make 3 divisible by 4; the 0 is accounted for by adding a decimal point in the quotient)
  28      (7 × 4 = 28)
   20     (an additional zero is added)
   20     (5 × 4 = 20)
    0
In Mexico, the US notation is used, except that only the result of the subtraction is annotated and the calculation is done mentally, as shown below:
  125     (Explanations)
4)500
  10      (5 - 4 = 1)
   20     (10 - 8 = 2)
    0     (20 - 20 = 0)
In Spain, Italy, France, Portugal, Romania, Turkey, Greece, Belgium, and Russia, the divisor is to the right of the dividend, and separated by a vertical bar. The division also proceeds in columns, but the quotient (result) is written below the divisor and separated from it by a horizontal line.
 127 |4
−124 |31,75
   3
  − 0
  30
 −28
  20
 −20
   0
In France, a long vertical bar separates the dividend and subsequent subtractions from the quotient and divisor, as in the example below of 6359 divided by 17, which is 374 with a remainder of 1.
Decimal numbers are not divided directly, the dividend and divisor are multiplied by a power of ten so that the division involves two whole numbers. Therefore, if one were dividing 12,7 by 0,4 (commas being used instead of decimal points), the dividend and divisor would first be changed to 127 and 4, and then the division would proceed as above.
In Germany, the notation of a normal equation is used for dividend, divisor and quotient (cf. first section of Latin American countries above, where it's done virtually the same way):
 127 : 4 = 31,75
−124
   3
  −0
  30
 −28
  20
 −20
   0
In the Netherlands, the following notation is used:
12 / 135 \ 11,25
     132
       3 0
       30
       24
        60
        60
         0
Rational numbers
Long division of integers can easily be extended to include non-integer dividends, as long as they are rational. This is because every rational number has a recurring decimal expansion. The procedure can also be extended to include divisors which have a finite or terminating decimal expansion (i.e. decimal fractions). In this case the procedure involves multiplying the divisor and dividend by the appropriate power of ten so that the new divisor is an integer – taking advantage of the fact that a ÷ b = (ca) ÷ (cb) – and then proceeding as above.
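A small sketch of that scaling step is shown below; it is illustrative only, and the use of Python's decimal module is an implementation choice rather than anything prescribed by the article.

    # Shift both numbers by the same power of ten until the divisor is a whole
    # number (a ÷ b = (ca) ÷ (cb)), then divide as usual.
    from decimal import Decimal

    def scale_to_integer_divisor(dividend, divisor):
        a, b = Decimal(str(dividend)), Decimal(str(divisor))
        shift = max(0, -b.as_tuple().exponent)    # decimal places in the divisor
        factor = 10 ** shift
        return a * factor, int(b * factor)

    print(scale_to_integer_divisor(12.7, 0.4))    # (Decimal('127.0'), 4)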
See also
- Arbitrary-precision arithmetic
- Egyptian multiplication and division
- Elementary arithmetic
- Fourier division
- Polynomial long division
- Shifting nth root algorithm – for finding square root or any nth root of a number
- Short division
- Implementing Multiple-Precision Arithmetic, Part 2. Algorithm for long division and C++ implementation at sputsoft.com
- Long Division Algorithm
- Long Division and Euclid’s Lemma
An antibody (Ab), also known as an immunoglobulin (Ig), is a large Y-shaped protein produced by B-cells that is used by the immune system to identify and neutralize foreign objects such as bacteria and viruses. The antibody recognizes a unique part of the foreign target, called an antigen. Each tip of the "Y" of an antibody contains a paratope (a structure analogous to a lock) that is specific for one particular epitope (similarly analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize its target directly (for example, by blocking a part of a microbe that is essential for its invasion and survival). The production of antibodies is the main function of the humoral immune system.
Antibodies are secreted by a type of white blood cell called a plasma cell and are carried in the blood serum. Antibodies can occur in two physical forms, a soluble form that is secreted from the cell, and a membrane-bound form that is attached to the surface of a B cell and is referred to as the B cell receptor (BCR). The BCR is only found on the surface of B cells and facilitates the activation of these cells and their subsequent differentiation into either antibody factories called plasma cells, or memory B cells that will survive in the body and remember that same antigen so the B cells can respond faster upon future exposure. In most cases, interaction of the B cell with a T helper cell is necessary to produce full activation of the B cell and, therefore, antibody generation following antigen binding. Soluble antibodies are released into the blood and tissue fluids, as well as many secretions, where they continue to survey for invading microorganisms.
Antibodies are glycoproteins belonging to the immunoglobulin superfamily; the terms antibody and immunoglobulin are often used interchangeably. Antibodies are typically made of basic structural units—each with two large heavy chains and two small light chains. There are several different types of antibody heavy chains, and several different kinds of antibodies, which are grouped into different isotypes based on which heavy chain they possess. Five different antibody isotypes are known in mammals, which perform different roles, and help direct the appropriate immune response for each different type of foreign object they encounter.
Though the general structure of all antibodies is very similar, a small region at the tip of the protein is extremely variable, allowing millions of antibodies with slightly different tip structures, or antigen binding sites, to exist. This region is known as the hypervariable region. Each of these variants can bind to a different target, known as an antigen. This enormous diversity of antibodies allows the immune system to recognize an equally wide variety of antigens. The large and diverse population of antibodies is generated by random combinations of a set of gene segments that encode different antigen binding sites (or paratopes), followed by random mutations in this area of the antibody gene, which create further diversity. Antibody genes also re-organize in a process called class switching that changes the base of the heavy chain to another, creating a different isotype of the antibody that retains the antigen specific variable region. This allows a single antibody to be used by several different parts of the immune system.
Surface immunoglobulin (Ig) is attached to the membrane of the effector B cells by its transmembrane region, while antibodies are the secreted form of Ig and lack the transmembrane region, so that antibodies can be secreted into the bloodstream and body cavities. As a result, surface Ig and antibodies are identical except for the transmembrane regions. Therefore, they are considered two forms of antibodies: the soluble form and the membrane-bound form (Parham 21-22).
The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences.
| Isotype | Subclasses | Description |
|---------|------------|-------------|
| IgA | 2 | Found in mucosal areas, such as the gut, respiratory tract and urogenital tract, and prevents colonization by pathogens. Also found in saliva, tears, and breast milk. |
| IgD | 1 | Functions mainly as an antigen receptor on B cells that have not been exposed to antigens. It has been shown to activate basophils and mast cells to produce antimicrobial factors. |
| IgE | 1 | Binds to allergens and triggers histamine release from mast cells and basophils, and is involved in allergy. Also protects against parasitic worms. |
| IgG | 4 | In its four forms, provides the majority of antibody-based immunity against invading pathogens. The only antibody capable of crossing the placenta to give passive immunity to fetus. |
| IgM | 1 | Expressed on the surface of B cells (monomer) and in a secreted form (pentamer) with very high avidity. Eliminates pathogens in the early stages of B cell mediated (humoral) immunity before there is sufficient IgG. |
Antibodies can come in different varieties known as isotypes or classes. In placental mammals there are five antibody isotypes known as IgA, IgD, IgE, IgG and IgM. They are each named with an "Ig" prefix that stands for immunoglobulin, another name for antibody, and differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table.
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, are known as naïve B cells and express only the IgM isotype in a cell surface bound form. B cells begin to express both IgM and IgD when they reach maturity—the co-expression of both these immunoglobulin isotypes renders the B cell 'mature' and ready to respond to antigen. B cell activation follows engagement of the cell bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA or IgG, that have defined roles in the immune system.
Antibodies are heavy (~150 kDa) globular plasma proteins. They have sugar chains added to some of their amino acid residues; in other words, antibodies are glycoproteins. The basic functional unit of each antibody is an immunoglobulin (Ig) monomer (containing only one Ig unit); secreted antibodies can also be dimeric with two Ig units (as with IgA), tetrameric with four Ig units (like teleost fish IgM), or pentameric with five Ig units (like mammalian IgM). The variable parts of an antibody are its V regions, and the constant part is its C region.
The Ig monomer is a "Y"-shaped molecule that consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds. Each chain is composed of structural domains called immunoglobulin domains. These domains contain about 70-110 amino acids and are classified into different categories (for example, variable or IgV, and constant or IgC) according to their size and function. They have a characteristic immunoglobulin fold in which two beta sheets create a “sandwich” shape, held together by interactions between conserved cysteines and other charged amino acids.
- For more details on this topic, see Immunoglobulin heavy chain.
There are five types of mammalian Ig heavy chain denoted by the Greek letters: α, δ, ε, γ, and μ. The type of heavy chain present defines the class of antibody; these chains are found in IgA, IgD, IgE, IgG, and IgM antibodies, respectively. Distinct heavy chains differ in size and composition; α and γ contain approximately 450 amino acids, while μ and ε have approximately 550 amino acids.
In birds, the major serum antibody, also found in yolk, is called IgY. It is quite different from mammalian IgG. However, in some older literature and even on some commercial life sciences product websites it is still called "IgG", which is incorrect and can be confusing.
Each heavy chain has two regions, the constant region and the variable region. The constant region is identical in all antibodies of the same isotype, but differs in antibodies of different isotypes. Heavy chains γ, α and δ have a constant region composed of three tandem (in a line) Ig domains, and a hinge region for added flexibility; heavy chains μ and ε have a constant region composed of four immunoglobulin domains. The variable region of the heavy chain differs in antibodies produced by different B cells, but is the same for all antibodies produced by a single B cell or B cell clone. The variable region of each heavy chain is approximately 110 amino acids long and is composed of a single Ig domain.
- For more details on this topic, see Immunoglobulin light chain.
In mammals there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). A light chain has two successive domains: one constant domain and one variable domain. The approximate length of a light chain is 211 to 217 amino acids. Each antibody contains two light chains that are always identical; only one type of light chain, κ or λ, is present per antibody in mammals. Other types of light chains, such as the iota (ι) chain, are found in lower vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei).
CDRs, Fv, Fab and Fc Regions
Some parts of an antibody have unique functions. The arms of the Y, for example, contain the sites that can bind two antigens (in general, identical) and, therefore, recognize specific foreign objects. This region of the antibody is called the Fab (fragment, antigen binding) region. It is composed of one constant and one variable domain from each heavy and light chain of the antibody. The paratope is shaped at the amino terminal end of the antibody monomer by the variable domains from the heavy and light chains. The variable domain is also referred to as the FV region and is the most important region for binding to antigens. More specifically, variable loops of β-strands, three each on the light (VL) and heavy (VH) chains are responsible for binding to the antigen. These loops are referred to as the complementarity determining regions (CDRs). The structures of these CDRs have been clustered and classified by Chothia et al. and more recently by North et al. In the framework of the immune network theory, CDRs are also called idiotypes. According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes.
The base of the Y plays a role in modulating immune cell activity. This region is called the Fc (Fragment, crystallizable) region, and is composed of two heavy chains that contribute two or three constant domains depending on the class of the antibody. Thus, the Fc region ensures that each antibody generates an appropriate immune response for a given antigen, by binding to a specific class of Fc receptors, and other immune molecules, such as complement proteins. By doing this, it mediates different physiological effects including recognition of opsonized particles, lysis of cells, and degranulation of mast cells, basophils and eosinophils.
Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures.
At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appears within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: they prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway.
Activation of complement
Antibodies that bind to surface antigens on, for example, a bacterium attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Secondly, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly.
Activation of effector cells
To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region.
Those cells which recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, and natural killer cells will release cytokines and cytotoxic molecules; this ultimately results in destruction of the invading microbe. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens.
Humans and higher primates also produce “natural antibodies” which are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue.
Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes.
The region (locus) of a chromosome that encodes an antibody is large and contains several distinct genes for each domain of the antibody—the locus containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large number of antibodies with a high degree of variability. This combination process is called V(D)J recombination and is discussed below.
- For more details on this topic, see V(D)J recombination.
Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. Interestingly, the rearrangement of several subgenes (e.g., the V2 family) for the lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells.
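To get a feel for the combinatorial arithmetic described above, the following minimal Python sketch multiplies out illustrative segment counts. The heavy-chain figure of about 65 V genes comes from the discussion above; the D, J and light-chain counts used here are rough, assumed values, and the calculation deliberately ignores junctional diversity and somatic hypermutation, which add several further orders of magnitude.

# Back-of-the-envelope sketch of combinatorial V(D)J diversity.
# The segment counts below are illustrative assumptions, not exact values
# from any particular genome annotation.

heavy_V, heavy_D, heavy_J = 65, 27, 6   # heavy chain: V, D and J segments
kappa_V, kappa_J = 40, 5                # kappa light chain: V and J segments
lambda_V, lambda_J = 30, 4              # lambda light chain: V and J segments

heavy_combinations = heavy_V * heavy_D * heavy_J
light_combinations = kappa_V * kappa_J + lambda_V * lambda_J

# Each B cell pairs one rearranged heavy chain with one rearranged light chain.
paired_combinations = heavy_combinations * light_combinations

print(f"heavy-chain rearrangements : {heavy_combinations:,}")
print(f"light-chain rearrangements : {light_combinations:,}")
print(f"paired combinations        : {paired_combinations:,}")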
After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus each B cell can produce antibodies containing only one kind of variable chain.
Somatic hypermutation and affinity maturation
Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains.
This serves to increase the diversity of the antibody pool and impacts the antibody’s antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells.
Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naïve B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function, therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment.
Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype.
A group of antibodies can be called monovalent (or specific) if they have affinity for the same epitope, or for the same antigen (but potentially different epitopes on the molecule), or for the same strain of microorganism (but potentially different antigens on or in it). In contrast, a group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of polyvalent IgG. In contrast, monoclonal antibodies are monovalent for the same epitope.
Disease diagnosis and therapy
Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected, or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed.

In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis.

Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune-mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and for antibody screening in antenatal women. Practically, several immunodiagnostic methods based on detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests.

Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies, in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual.
Rhesus factor, also known as Rhesus D (RhD) antigen, is an antigen found on red blood cells; individuals that are Rhesus-positive (Rh+) have this antigen on their red blood cells and individuals that are Rhesus-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn.
Rho(D) immune globulin antibodies are specific for human Rhesus D (RhD) antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rhesus-negative mother has a Rhesus-positive fetus. Treatment of a mother with Anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen in the mother's system from the fetus. Importantly, this occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rhesus antigens of the current or subsequent babies. Rho(D) Immune Globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself.
Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse for large quantities of antibody. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography.
In research, purified antibodies are used in many applications. They are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cell express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISPOT techniques.
The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen.
- See also: History of immunology
The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that "if two substances give rise to two different antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper.
The study of antibodies began in 1890 when Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. His idea prompted Paul Ehrlich to propose the side chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as “side chains”) on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction was the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization.
In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies were made of protein. The biochemical properties of antigen-antibody binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depended more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies.
Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein was the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA) and David S. Rowe and John L. Fahey identified IgD, and IgE was identified by Kimishige Ishizaka and Teruko Ishizaka as a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies.
- Antibody mimetic
- Anti-mitochondrial antibodies
- Anti-nuclear antibodies
- Gamma globulin
- Humoral immunity
- Immunosuppressive drug
- Intravenous immunoglobulin (IVIg)
- Magnetic immunoassay
- Monoclonal antibody
- Neutralizing antibody
- Secondary antibodies
- Single-domain antibody
- ↑ 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 Charles Janeway (2001). Immunobiology., 5th, Garland Publishing. (electronic full text via NCBI Bookshelf).
- ↑ Litman GW, Rast JP, Shamblott MJ, Haire RN, Hulst M, Roess W, Litman RT, Hinds-Frey KR, Zilch A, Amemiya CT (January 1993). Phylogenetic diversification of immunoglobulin genes and the antibody repertoire. Mol. Biol. Evol. 10 (1): 60–72.
- ↑ 3.0 3.1 3.2 3.3 3.4 3.5 Pier GB, Lyczak JB, Wetzler LM (2004). Immunology, Infection, and Immunity, ASM Press.
- ↑ Borghesi L, Milcarek C (2006). From B cell to plasma cell: regulation of V(D)J recombination and antibody secretion. Immunol. Res. 36 (1–3): 27–32.
- ↑ Parker D (1993). T cell-dependent B cell activation. Annu Rev Immunol 11 (1): 331–360.
- ↑ 6.0 6.1 6.2 6.3 Rhoades RA, Pflanzer RG (2002). Human Physiology, 4th, Thomson Learning.
- ↑ 7.0 7.1 7.2 7.3 Market E, Papavasiliou FN (October 2003). V(D)J recombination and the evolution of the adaptive immune system. PLoS Biol. 1 (1): E16.
- ↑ 8.0 8.1 Diaz M, Casali P (2002). Somatic immunoglobulin hypermutation. Curr Opin Immunol 14 (2): 235–240.
- ↑ Parker D (1993). T cell-dependent B cell activation. Annu. Rev. Immunol. 11 (1): 331–360.
- ↑ 10.0 10.1 10.2 Wintrobe, Maxwell Myer (2004). Wintrobe's clinical hematology, John G. Greer, John Foerster, John N Lukens, George M Rodgers, Frixos Paraskevas, 11, 453–456, Hagerstown, MD: Lippincott Williams & Wilkins.
- ↑ Tolar P, Sohn HW, Pierce SK (February 2008). Viewing the antigen-induced initiation of B-cell activation in living cells. Immunol. Rev. 221 (1): 64–76.
- ↑ Wintrobe, Maxwell Myer (2004). Wintrobe's clinical hematology, John G. Greer, John Foerster, John N Lukens, George M Rodgers, Frixos Paraskevas, 11, 453–456, Hagerstown, MD: Lippincott Williams & Wilkins.
- ↑ Underdown B, Schiff J (1986). Immunoglobulin A: strategic defense initiative at the mucosal surface. Annu Rev Immunol 4 (1): 389–417.
- ↑ 14.0 14.1 Geisberger R, Lamers M, Achatz G (2006). The riddle of the dual expression of IgM and IgD. Immunology 118 (4).
- ↑ Chen K, Xu W, Wilson M, He B, Miller NW, Bengtén E, Edholm ES, Santini PA, Rath P, Chiu A, Cattalini M, Litzman J, B Bussel J, Huang B, Meini A, Riesbeck K, Cunningham-Rundles C, Plebani A, Cerutti A (2009). Immunoglobulin D enhances immune surveillance by activating antimicrobial, proinflammatory and B cell-stimulating programs in basophils. Nature Immunology 10 (8): 889–898.
- ↑ 16.0 16.1 16.2 16.3 Woof J, Burton D (2004). Human antibody-Fc receptor interactions illuminated by crystal structures.. Nat Rev Immunol 4 (2): 89–99.
- ↑ Goding J (1978). Allotypes of IgM and IgD receptors in the mouse: a probe for lymphocyte differentiation. Contemp Top Immunobiol 8: 203–43.
- ↑ Mattu T, Pleass R, Willis A, Kilian M, Wormald M, Lellouch A, Rudd P, Woof J, Dwek R (1998). The glycosylation and structure of human serum IgA1, Fab, and Fc regions and the role of N-glycosylation on Fc alpha receptor interactions. J Biol Chem 273 (4): 2260–2272.
- ↑ Roux K (1999). Immunoglobulin structure and function as revealed by electron microscopy. Int Arch Allergy Immunol 120 (2): 85–99.
- ↑ Barclay A (2003). Membrane proteins with immunoglobulin-like domains - a master superfamily of interaction molecules. Semin Immunol 15 (4): 215–223.
- ↑ Putnam FW, Liu YS, Low TL (1979). Primary structure of a human IgA1 immunoglobulin. IV. Streptococcal IgA1 protease, digestion, Fab and Fc fragments, and the complete amino acid sequence of the alpha 1 heavy chain. J Biol Chem 254 (8): 2865–74.
- ↑ Al-Lazikani B, Lesk AM, Chothia C (1997). Standard conformations for the canonical structures of immunoglobulins. J Mol Biol 273 (4): 927–948.
- ↑ North B, Lehmann A, Dunbrack RL (2010). A new clustering of antibody CDR loop conformations. J Mol Biol 406 (2): 228–256.
- ↑ Heyman B (1996). Complement and Fc-receptors in regulation of the antibody response. Immunol Lett 54 (2–3): 195–199.
- ↑ Borghesi L, Milcarek C (2006). From B cell to plasma cell: regulation of V(D)J recombination and antibody secretion. Immunol Res 36 (1–3): 27–32.
- ↑ 26.0 26.1 Ravetch J, Bolland S (2001). IgG Fc receptors. Annu Rev Immunol 19 (1): 275–290.
- ↑ Rus H, Cudrici C, Niculescu F (2005). The role of the complement system in innate immunity. Immunol Res 33 (2): 103–112.
- ↑ Racaniello, Vincent. Natural antibody protects against viral infection. Virology Blog. URL accessed on 2010-01-22.
- ↑ Milland J, Sandrin MS (December 2006). ABO blood group and related antigens, natural antibodies and transplantation. Tissue Antigens 68 (6): 459–466.
- ↑ Mian I, Bradwell A, Olson A (1991). Structure, function and properties of antibody binding sites. J Mol Biol 217 (1): 133–151.
- ↑ Fanning LJ, Connor AM, Wu GE (1996). Development of the immunoglobulin repertoire. Clin. Immunol. Immunopathol. 79 (1): 1–14.
- ↑ 32.0 32.1 Nemazee D (2006). Receptor editing in lymphocyte development and central tolerance. Nat Rev Immunol 6 (10): 728–740.
- ↑ Parham, Peter (2005). The Immune System, 2nd ed., New York: Garland Science, pp. 47–62.
- ↑ PMID 22234685.
- ↑ Bergman Y, Cedar H (2004). A stepwise epigenetic process controls immunoglobulin allelic exclusion. Nat Rev Immunol 4 (10): 753–761.
- ↑ Honjo T, Habu S (1985). Origin of immune diversity: genetic variation and selection. Annu Rev Biochem 54 (1): 803–830.
- ↑ 37.0 37.1 Or-Guil M, Wittenbrink N, Weiser AA, Schuchhardt J (2007). Recirculation of germinal center B cells: a multilevel selection strategy for antibody maturation. Immunol. Rev. 216: 130–41.
- ↑ Neuberger M, Ehrenstein M, Rada C, Sale J, Batista F, Williams G, Milstein C (March 2000). Memory in the B-cell compartment: antibody affinity maturation. Philos Trans R Soc Lond B Biol Sci 355 (1395): 357–360.
- ↑ Stavnezer J, Amemiya CT (2004). Evolution of isotype switching. Semin. Immunol. 16 (4): 257–275.
- ↑ Durandy A (2003). Activation-induced cytidine deaminase: a dual role in class-switch recombination and somatic hypermutation. Eur. J. Immunol. 33 (8): 2069–2073.
- ↑ Casali P, Zan H (2004). Class switching and Myc translocation: how does DNA break?. Nat. Immunol. 5 (11): 1101–1103.
- ↑ Lieber MR, Yu K, Raghavan SC (2006). Roles of nonhomologous DNA end joining, V(D)J recombination, and class switch recombination in chromosomal translocations. DNA Repair (Amst.) 5 (9–10): 1234–1245.
- ↑ Page 22 in: (2007). Autoantibodies, Amsterdam; Boston: Elsevier.
- ↑ 44.0 44.1 Farlex dictionary > monovalent Citing: The American Heritage Science Dictionary, Copyright 2005
- ↑ 45.0 45.1 Farlex dictionary > polyvalent Citing: The American Heritage Medical Dictionary. 2004
- ↑ Animated depictions of how antibodies are used in ELISA assays. Cellular Technology Ltd.—Europe. URL accessed on 2007-05-08.
- ↑ Animated depictions of how antibodies are used in ELISPOT assays. Cellular Technology Ltd.—Europe. URL accessed on 2007-05-08.
- ↑ Stern P (2006). Current possibilities of turbidimetry and nephelometry. Klin Biochem Metab 14 (3): 146–151.
- ↑ 49.0 49.1 Dean, Laura (2005). "Chapter 4: Hemolytic disease of the newborn" Blood Groups and Red Cell Antigens, NCBI Bethesda (MD): National Library of Medicine (US),.
- ↑ Feldmann M, Maini R (2001). Anti-TNF alpha therapy of rheumatoid arthritis: what have we learned?. Annu Rev Immunol 19 (1): 163–196.
- ↑ Doggrell S (2003). Is natalizumab a breakthrough in the treatment of multiple sclerosis?. Expert Opin Pharmacother 4 (6): 999–1001.
- ↑ Krueger G, Langley R, Leonardi C, Yeilding N, Guzzo C, Wang Y, Dooley L, Lebwohl M (2007). A human interleukin-12/23 monoclonal antibody for the treatment of psoriasis. N Engl J Med 356 (6): 580–592.
- ↑ Plosker G, Figgitt D (2003). Rituximab: a review of its use in non-Hodgkin's lymphoma and chronic lymphocytic leukaemia. Drugs 63 (8): 803–843.
- ↑ Vogel C, Cobleigh M, Tripathy D, Gutheil J, Harris L, Fehrenbacher L, Slamon D, Murphy M, Novotny W, Burchmore M, Shak S, Stewart S (2001). First-line Herceptin monotherapy in metastatic breast cancer. Oncology Suppl 2 (Suppl. 2): 37–42.
- ↑ LeBien TW (1 July 2000). Fates of human B-cell precursors. Blood 96 (1): 9–23.
- ↑ Ghaffer A. Immunization. Immunology — Chapter 14. University of South Carolina School of Medicine. URL accessed on 2007-06-06.
- ↑ Urbaniak S, Greiss M (2000). RhD haemolytic disease of the fetus and the newborn. Blood Rev 14 (1): 44–61.
- ↑ 58.0 58.1 Fung Kee Fung K, Eason E, Crane J, Armson A, De La Ronde S, Farine D, Keenan-Lindsay L, Leduc L, Reid G, Aerde J, Wilson R, Davies G, Désilets V, Summers A, Wyatt P, Young D (2003). Prevention of Rh alloimmunization. J Obstet Gynaecol Can 25 (9): 765–73.
- ↑ Tini M, Jewell UR, Camenisch G, Chilov D, Gassmann M (2002). Generation and application of chicken egg-yolk antibodies. Comp. Biochem. Physiol., Part a Mol. Integr. Physiol. 131 (3): 569–574.
- ↑ Cole SP, Campling BG, Atlaw T, Kozbor D, Roder JC (1984). Human monoclonal antibodies. Mol. Cell. Biochem. 62 (2): 109–20.
- ↑ Kabir S (2002). Immunoglobulin purification by affinity chromatography using protein A mimetic ligands prepared by combinatorial chemical synthesis. Immunol Invest 31 (3–4): 263–278.
- ↑ 62.0 62.1 Brehm-Stecher B, Johnson E (2004). Single-cell microbiology: tools, technologies, and applications. Microbiol Mol Biol Rev 68 (3): 538–559.
- ↑ Williams N (2000). Immunoprecipitation procedures. Methods Cell Biol 62: 449–453.
- ↑ Kurien B, Scofield R (2006). Western blotting. Methods 38 (4): 283–293.
- ↑ Scanziani E (1998). Immunohistochemical staining of fixed tissues. Methods Mol Biol 104: 133–140.
- ↑ Reen DJ. (1994). Enzyme-linked immunosorbent assay (ELISA). Methods Mol Biol. 32: 461–466.
- ↑ Kalyuzhny AE (2005). Chemistry and biology of the ELISPOT assay. Methods Mol Biol. 302: 015–032.
- ↑ Whitelegg N.R.J., Rees A.R. (2000). WAM: an improved algorithm for modeling antibodies on the WEB. Protein Engineering 13 (12): 819–824.
- ↑ Marcatili P, Rosi A,Tramontano A (2008). PIGS: automatic prediction of antibody structures. Bioinformatics 24 (17): 1953–1954.
Prediction of Immunoglobulin Structure (PIGS)
- ↑ Sivasubramanian A, Sircar A, Chaudhury S, Gray J J (2009). Toward high-resolution homology modeling of antibody Fv regions and application to antibody–antigen docking. Proteins 74 (2): 497–514.
- ↑ 71.0 71.1 71.2 Lindenmann, Jean (1984). Origin of the Terms 'Antibody' and 'Antigen'. Scand. J. Immunol. 19 (4): 281–5.
- ↑ Padlan, Eduardo (February 1994). Anatomy of the antibody molecule. Mol. Immunol. 31 (3): 169–217.
- ↑ New Sculpture Portraying Human Antibody as Protective Angel Installed on Scripps Florida Campus. URL accessed on 2008-12-12.
- ↑ Protein sculpture inspired by Vitruvian Man. URL accessed on 2008-12-12.
- ↑ Emil von Behring — Biography. URL accessed on 2007-06-05.
- ↑ AGN (1931). The Late Baron Shibasaburo Kitasato. Canadian Medical Association Journal 25 (2).
- ↑ Winau F, Westphal O, Winau R (2004). Paul Ehrlich--in search of the magic bullet. Microbes Infect. 6 (8): 786–789.
- ↑ Silverstein AM (2003). Cellular versus humoral immunology: a century-long dispute. Nat. Immunol. 4 (5): 425–428.
- ↑ Van Epps HL (2006). Michael Heidelberger and the demystification of antibodies. J. Exp. Med. 203 (1).
- ↑ Marrack, JR (1938). Chemistry of antigens and antibodies, 2nd, London: His Majesty's Stationery Office.
- ↑ The Linus Pauling Papers: How Antibodies and Enzymes Work. URL accessed on 2007-06-05.
- ↑ Silverstein AM (2004). Labeled antigens and antibodies: the evolution of magic markers and magic bullets. Nat. Immunol. 5 (12): 1211–1217.
- ↑ Edelman GM, Gally JA (1962). The nature of Bence-Jones proteins. Chemical similarities to polypetide chains of myeloma globulins and normal gamma-globulins. J. Exp. Med. 116 (2): 207–227.
- ↑ Stevens FJ, Solomon A, Schiffer M (1991). Bence Jones proteins: a powerful tool for the fundamental study of protein chemistry and pathophysiology. Biochemistry 30 (28): 6803–6805.
- ↑ 85.0 85.1 Raju TN (1999). The Nobel chronicles. 1972: Gerald M Edelman (b 1929) and Rodney R Porter (1917-85). Lancet 354 (9183).
- ↑ Hochman J, Inbar D, Givol D (1973). An active antibody fragment (Fv) composed of the variable portions of heavy and light chains. Biochemistry 12 (6): 1130–1135.
- ↑ Tomasi TB (1992). The discovery of secretory IgA and the mucosal immune system. Immunol. Today 13 (10): 416–418.
- ↑ Preud'homme JL, Petit I, Barra A, Morel F, Lecron JC, Lelièvre E (2000). Structural and functional properties of membrane and secreted IgD. Mol. Immunol. 37 (15): 871–887.
- ↑ Johansson SG (2006). The discovery of immunoglobulin E. Allergy and asthma proceedings : the official journal of regional and state allergy societies 27 (2 Suppl 1): S3–6.
- ↑ Hozumi N, Tonegawa S (1976). Evidence for somatic rearrangement of immunoglobulin genes coding for variable and constant regions. Proc. Natl. Acad. Sci. U.S.A. 73 (10): 3628–3632.
- Mike's Immunoglobulin Structure/Function Page at University of Cambridge
- Antibodies as the PDB molecule of the month Discussion of the structure of antibodies at RCSB Protein Data Bank
- Microbiology and Immunology On-line Textbook at University of South Carolina
- A hundred years of antibody therapy History and applications of antibodies in the treatment of disease at University of Oxford
- How Lymphocytes Produce Antibody from Cells Alive!
- Antibody applications Fluorescent antibody image library, University of Birmingham
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | http://psychology.wikia.com/wiki/Antibodies | 13 |
54 | Download and purchase eBook first in Languages Series
Science in Ancient Artwork
The Great Pyramid: Measurements
Charles William Johnson
The measurements of the Great Pyramid have always been under debate. Depending on what, when, where, and how measurements were taken, conflicting figures have been produced for the length of the base, the slope, and the height of the Great Pyramid, as well as for its angle of inclination. Given that the Great Pyramid has not been in mint condition for millennia, it is understandable that such measurements have varied to the point of contradiction. The base measurement has been cited anywhere from 693 feet, through 756 feet, to 765 feet, depending upon which position is taken as the boundary of the pyramid's base. The pyramid's height is impossible to measure exactly, given that the capstone no longer exists. The pyramid's angle of inclination has been computed from a few stones that possibly reveal the original angle, although again the condition and size of those stones allow only calculations, not exact measurements.
Nonetheless, recently, measurements have been offered by the Director of the Giza Plateau Mapping Project, Mark Lehner, in a work entitled, The Complete Pyramids: Solving the Ancient Mysteries (Thames and Hudson, 1997). Let us examine the measurements offered in this scholarly work in the light of the numbers of the ancient reckoning system that we have been studying in the Earth/matriX: Science in Ancient Artwork series.
The base length of the Great Pyramid is calculated as 230.33 meters or 756 feet, according to professor Lehner, and its height once rose to 146.59 meters or 481 feet (Ibid., page 108). The angle of inclination or slope of its faces is 51° 50' 40", again according to that same author. Much debate has revolved around employing measurements in the metric system or the English system, or whether one should employ the numbers of the ancient cubit or even some other unit of measurement. According to our reasoning, it does not matter which particular system of measurement is chosen for the discussion, given the fact that the geometrical outlay of the Great Pyramid allows for accommodating any system of measurement. This obtains from a comprehension of the mathematics of the ancient reckoning system and the very nature of geometry itself involving angles of inclination and the trigonometric table.
As we view the numbers offered by professor Lehner, we cannot help but immediately note that the metric measurement of 230.33 reminds us of the maya long count number 2304 (fractal), the alautun of 23,040,000,000. Nor can we avoid observing the possibility of relating the 146.59 measurement in meters to the Sothic cycle number 1460 or 1461 years. In most of our previous studies, we have employed the 756 feet measurement for the base length of the Great Pyramid, along with the 481 feet height measurement. The possibility of analyzing the angle of inclination has been dealt with previously by us (Cfr., Earth/matriX, Extract No.7, Pyramids of Egypt: Precession Numbers and Degrees of Angles).
Given the previously cited measurements, the following may be observed. The 51° 50' 40" angle translates into the decimal system as 51.844444°, which when halved yields the number 25.922222. This particular number reminds us of the Platonic Cycle number of 25,920 years. Previously, we have discussed how the numbers of the angle of inclination of the pyramids of ancient kemi may have been reflecting just such a knowledge of the Precession. However, for computations in many of the ancient reckoning systems, it is widely held that the ancients avoided employing fractional numbers (with the exception of numbers regarding the reciprocal of seven). In this case, then, we might expect the angle of inclination to have been conceived (if not technologically achieved in the stone carving) as of 51.84°, or that of 51° 50' 24"0 (a difference of 0° 00' 16"0).
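For readers who wish to verify the arithmetic, the conversion between the sexagesimal and decimal forms of the angle is standard degree-minute-second arithmetic:

\[ 51^\circ 50' 40'' = 51 + \tfrac{50}{60} + \tfrac{40}{3600} = 51.844444\ldots^\circ, \qquad \tfrac{51.844444\ldots^\circ}{2} = 25.922222\ldots^\circ \]

\[ 51.84^\circ = 51^\circ + 0.84^\circ = 51^\circ + 50.4' = 51^\circ 50' 24'', \qquad 51^\circ 50' 40'' - 51^\circ 50' 24'' = 16'' \]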
However, before we discuss such a variable, let us comprehend the relationship of the numbers offered by the Giza Plateau Mapping Project. The first question that comes to mind is whether or not it is possible, according to the trigonometric table, to obtain a pyramidal structure of the measurements offered. The answer would appear to be negative. Consider the following:
In this case, the 146.4576827 appears to be more relational to a reciprocal of seven number: 146.4285714 with a lesser difference (146.4576827 - 146.4285714 = .0291113).
In other words, it is impossible, trigonometrically speaking, to have a pyramidal structure with the given measurements of base length 230.33 meters, height 146.59 meters and an angle of inclination of 51.844444 degrees according to the posits of the Pythagorean Theorem.
In other words, it is impossible, trigonometrically speaking, to have a pyramidal structure with the given measurements of 756 feet base length, 481 feet height, and an angle of inclination of the slope of 51.844444 degrees.
The measurements of the Great Pyramid undergo such scrutiny because it has become a case of deciding whether the Great Pyramid was built with such great precision by design or by happenstance. Although we are talking about an enormous, almost unimaginably monumental stone structure, at the same time we are considering the exact preciseness with which it appears to have been executed. Hence the reason for considering the exactness of the numbers of the measurements. The numbers of the ancient reckoning system may assist us in comprehending the variations of the numbers offered in the current measurements.
The measurements of the Giza Plateau Mapping Project, which are admittedly calculations, do not appear to conform to the historically significant numbers of the ancient reckoning system. However, if we allow for some adjustments of those numbers in terms of the ancient numbers, then the projected measurements and their adjustments appear to become more relevant.
If we allow for an angle of inclination of 51.84 degrees, the relationships of the numbers change. And we must remember that if the angle of inclination remains constant, it does not matter which set of measurements for the length/height of the pyramidal structure is employed; the proportions remain constant in terms of trigonometry.
If 51.844444 degrees, but 230.4 meters:
756 feet, but with 51.84 degree angle of inclination:
Depending upon whether one begins with the baseline or the height of the Great Pyramid, the numbers change correspondingly.
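The scale-independence asserted here can be checked numerically. The following short Python sketch (an illustrative check, not a reconstruction of the original computations) derives the height implied by a given base length and slope angle; the ratio of height to base depends only on the angle, so any unit of measurement yields a proportionally identical pyramid.

import math

def pyramid_height(base, slope_deg):
    """Height of a square pyramid with the given base length and face slope."""
    return (base / 2.0) * math.tan(math.radians(slope_deg))

for base, unit in [(230.33, "m"), (230.4, "m"), (756.0, "ft"), (500.0, "cubits")]:
    for slope in (51.844444, 51.84):
        h = pyramid_height(base, slope)
        print(f"base {base:>7} {unit:6} slope {slope:9.6f} deg -> height {h:10.4f} {unit}"
              f"  (height/base = {h / base:.6f})")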
There is another reason that one might want to consider the original angle of inclination designed into the Great Pyramid as having been 51.84 degrees. Consider the fact that if the baseline measurement of 756 feet is correct (the one that is most commonly accepted), then the following obtains:
We recognize in this fractal the Nineveh number of 195,955,200,000,000.
In the sine of the 51.84 angle, .786288432, distinct historically significant ancient numbers appear: .786 (288)(432), with 288 being a maya fractal and 432 being the Consecration number.
Notice, however, when the 756 and 481 numbers given by professor Lehner are viewed in relation to the 360c of the ancient reckoning system:
146.5904762 / .636243386 = 230.4000001
Hence, the baseline length of the Great Pyramid may be of any unit of measurement (230.33 meters; 230.4 meters; 756 feet; 500 cubits; etc.), with the angle of inclination remaining constant. Essentially, it does not matter even if one measures the baseline wrong; the angle pre-determines the proportional measurements of the triangle. This becomes even more evident when we employ the cited measurement of 500 cubits for the length of the base of the Great Pyramid:
An unsuspected relationship develops from this possibility:
But, if we subtract this total amount from another ancient Sothic number (1649.457812 fractal) distinguished in previous Earth/matriX essays (Cfr., Essay No.73), the appearance of a number relational to the maya companion number obtains (1366560):
Regarding any of the adjusted possibilities presented above, one could obviously obtain computations where the ancient historically significant numbers would become relational to one another.
Your comments and suggestions are greatly appreciated: | http://www.earthmatrix.com/great/pyramid.htm | 13 |
56 | I hope you are having fun with our AutoLISP exercises. Last time, we were introduced to use AutoLISP variables and asking users for input. This time we will learn about asking for more user input, then using the input in mathematical equation. The calculation result result will be used to draw an object.
Our challenge now is to create a program that can draw a regular polygon by defining number of sides and the area. Interesting isn’t it?
Writing Calculations in AutoLISP
Writing calculation in AutoLISP is quite different. But should not be difficult to understand.
If we want to calculate 2+5, then we write:
(+ 2 5)
Let us try it on the AutoCAD command line. Type the line above in AutoCAD. Yes, you can use AutoLISP commands in AutoCAD. You need to type it exactly as shown above.
Now let’s try another one. Type each line then press [enter].
(setq a 2)
(setq b 5)
(+ a b)
What we did is set the value of variable a to 2 and b to 5, then calculate a+b.
(setq c (+ a b))
This one means c = a + b.
Not so difficult, is it? Refer to the other calculation functions on the AfraLISP site here. Now let us see the polygon formula.
Drawing Polygon Method
How can we draw a polygon by defining the area? I did some searches and found this formula.
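The formula in question appears to be the standard expression for the area of a regular polygon in terms of its apothem a and its number of sides N; this reconstruction is inferred from the AutoLISP expression used later in this post:

\[ \text{Area} = N \, a^{2} \tan\!\left(\frac{\pi}{N}\right) \]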
source: Math Open Reference page.
If we know the area and number of sides, we can calculate the apothem. What is an apothem? It is the perpendicular distance from the center of the polygon to the midpoint of one of its sides, as shown in the image below.
How can we draw a polygon when we know the apothem and the number of sides? Using the POLYGON tool, of course. We create it using the circumscribed method.
Calculating Apothem Length
We need to change the formula to get the apothem like below:
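Rearranged for the apothem (again reconstructed to match the AutoLISP code below), the formula becomes:

\[ a = \sqrt{\frac{\text{Area}}{N \tan\left(\pi/N\right)}} \]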
Functions in Our Formula
We will use these functions in our formula.
- square root can be written as (sqrt a),
- multiplication as (* a b),
- division as (/ a b),
- cosine of an angle as (cos ang),
- sine of an angle as (sin ang).
With a, b, and ang are variables.
The bad news is that AutoLISP doesn't have a tan function. But there is a workaround: tangent is sin/cos, so tan(pi/N) can be written as:
(setq ang (/ pi N))
(/ (sin ang) (cos ang))
Pi is a built-in variable, so we will just use it.
Besides the sequence, you should be familiar with the commands. If you want to try writing the equation by yourself, go ahead; it is a good exercise to get familiar with the mathematical functions in AutoLISP. You can find the complete equation below.
The complete equation can now be written. I use variable a for the area and n for the number of sides. I also use apt to hold the result (the apothem length) and ang to hold (pi/n).
(setq ang (/ pi n))
(setq apt (sqrt (/ (/ a (/ (sin ang) (cos ang))) n)))
Or you can write it in a single line, which looks more confusing.
(setq apt (sqrt (/ (/ a (/ (sin(/ pi n)) (cos(/ pi n)))) n)))
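As a quick sanity check of this expression, here is a small sketch in Python rather than AutoLISP, purely to verify the math: the apothem computed this way does give back the requested area for a circumscribed regular polygon.

import math

def apothem(area, n):
    """Apothem of a regular n-sided polygon with the given area."""
    return math.sqrt(area / (n * math.tan(math.pi / n)))

n, target_area = 6, 100.0
a = apothem(target_area, n)

# Rebuild the area from the apothem: side = 2*a*tan(pi/n), area = 1/2 * perimeter * apothem
side = 2 * a * math.tan(math.pi / n)
rebuilt_area = 0.5 * (n * side) * a

print(f"apothem for a {n}-gon of area {target_area}: {a:.4f}")
print(f"area rebuilt from that apothem: {rebuilt_area:.4f}")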
Asking For User Input
We already use getpoint to ask user to pick a point. Or they can type the coordinate. Now we need three user input: number of sides, area, and center point.
- Number of sides is an integer. You don't accept 2.5 as a number of sides, do you? We can use GETINT to ask for an integer.
- Area can have decimal numbers, so it is a real number. We can use GETREAL for this purpose.
- You already used GETPOINT to ask for a point, right?
Let us try. On the AutoCAD command line, type (getint), then type an integer number. It should return the number you entered. Try again; this time type a decimal number, let's say 3.2. What will AutoCAD say?
Requires an integer value.
Using the right user-input function will also reduce the probability of errors.
Writing the Complete Code
Now we can write the complete code.
- You need to ask the user the number of polygon sides, the expected polygon area.
- Then you calculate the value.
- You need to ask one more input: the polygon center.
- Finally you can write the polygon command.
Now we have the complete data that we will put in our program. I strongly suggest you try writing the program first. You can check and compare with the code below later.
Below is the complete code I created. If you have problem, you can copy the code below.
; This LISP will create regular polygon by defining polygon area and number of sides
; Created by Edwin Prakoso
; Website: http://cad-notes.com
(defun c:pba (/ a n ang apt ptloc)
  (setq n (getint "\nNumber of Polygon Sides: "))
  (setq a (getreal "\nExpected Polygon Area: "))
  (setq ang (/ pi n))
  (setq apt (sqrt (/ (/ a (/ (sin ang) (cos ang))) n))) ;calculating apothem for circumscribed polygon
  (setq ptloc (getpoint "\nPick Location: "))
  (command "_POLYGON" n ptloc "C" apt)
)
How are you doing so far? I would like to know what you think after you have done the exercises.
Next, we are going to create an AutoLISP program to label coordinates. | http://www.cad-notes.com/2010/12/autolisp-exercise-create-regular-polygon-by-defining-area/ | 13
73 | Numbers are used for calculating and counting.
To count objects, we use numbers, i.e., 1, 2, 3, 4, …, 10, etc.
Related Links :
● Various Types of Numbers
● Operations On Whole Numbers
These are called counting numbers and are also called natural numbers.
When we add 0 to the set of counting numbers, we get whole numbers. So, 0, 1, 2, 3,….,10 are called whole numbers.
So, 1 is the smallest natural number and 0 is the smallest whole number. But there is no largest whole number or natural number because each number has its successor.
Every whole number is made up of one or more of the symbols 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These symbols are called digits or figures.
In Numbers we will learn more about the basic operations, the unitary method, multiples and factors, and more.
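As a small illustration of these ideas, the following Python sketch (an added example, with an arbitrarily chosen number) shows that the counting numbers start at 1, that the whole numbers simply include 0 as well, and that every whole number is written with the digits 0 to 9.

counting_numbers = list(range(1, 11))      # natural (counting) numbers: 1, 2, ..., 10
whole_numbers = [0] + counting_numbers     # whole numbers add 0 in front

number = 5207
digits = [int(d) for d in str(number)]     # the digits (figures) that make up 5207

print("counting numbers:", counting_numbers)
print("whole numbers:   ", whole_numbers)
print(f"{number} is made up of the digits {digits}")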
Addition Of Whole Numbers.
Subtraction Of Whole Numbers.
Multiplication Of Whole Numbers.
Properties Of Multiplication.
Division Of Whole Numbers.
Properties Of Division.
Representation of Integers on a Number Line.
Addition of Integers on a Number Line.
Rules to Add Integers.
Rules to Subtract Integers.
● Multiplication is Repeated Addition.
Multiplication of Fractional Number by a Whole Number.
Multiplication of a Fraction by Fraction.
Properties of Multiplication of Fractional Numbers.
Worksheet on Multiplication on Fraction.
Division of a Fraction by a Whole Number.
Division of a Fractional Number.
Division of a Whole Number by a Fraction.
Properties of Fractional Division.
Worksheet on Division of Fractions.
Simplification of Fractions.
Worksheet on Simplification of Fractions.
Word Problems on Fraction.
Worksheet on Word Problems on Fractions.
Decimal Place Value Chart.
Expanded form of Decimal Fractions.
Like Decimal Fractions.
Unlike Decimal Fraction.
Equivalent Decimal Fractions.
Changing Unlike to Like Decimal Fractions.
Comparison of Decimal Fractions.
Conversion of a Decimal Fraction into a Fractional Number.
Conversion of Fractions to Decimals Numbers.
Addition of Decimal Fractions.
Subtraction of Decimal Fractions.
Multiplication of a Decimal Numbers.
Multiplication of a Decimal by a Decimal.
Properties of Multiplication of Decimal Numbers.
Division of a Decimal by a Whole Number.
Division of Decimal Fractions
Division of Decimal Fractions by Multiples.
Division of a Decimal by a Decimal.
Division of a whole number by a Decimal.
Conversion of fraction to Decimal Fraction.
Simplification in Decimals.
Word Problems on Decimal.
● Rounding Numbers.
Round off to Nearest 10.
Round off to Nearest 100.
Round off to Nearest 1000.
Rounding off Decimal Fractions.
Correct to One Decimal Place.
Correct to Two Decimal Place.
Worksheet on Rounding off number.
● Multiples and Factors.
Repeated Prime Factors.
Highest Common Factor (H.C.F).
Examples on Highest Common Factor (H.C.F).
Greatest Common Factor (G.C.F).
Examples of Greatest Common Factor (G.C.F).
To find Highest Common Factor by using Prime Factorization
Examples to find Highest Common Factor by using Prime Factorization
To find Highest Common Factor by using Division Method.
Examples to find Highest Common Factor of two numbers by using Division Method.
To find the Highest Common Factor of three numbers by using Division Method.
● Divisibility Rules.
Divisible by 2.
Divisible by 3.
Divisible by 4.
Divisible by 5.
Divisible by 6.
Divisible by 7.
Divisible by 8.
Divisible by 10.
Divisible by 11.
To Convert a Percentage into a Fraction
To Convert a Fraction into a Percentage
To find the percent of a given number
To find what Per cent is one Number of another Number
To Calculate a Number when its Percentage is Known
● Profit and Loss.
Formulas of Profit and Loss.
To find Cost Price or Selling Price when Profit or Loss is given.
Worksheet on Profit and Loss.
● Simple Interest.
Word Problems on Simple Interest.
In Simple Interest when the Time is given in Months and Days.
To find Principal when Time Interest and Rate are given.
To find Rate when Principal Interest and Time are given.
To find Time when Principal Interest and Rate are given.
Worksheet on Simple Interest.
Converting the Temperature from Celsius to Fahrenheit.
Converting the Temperature from Fahrenheit to Celsius.
Worksheet on Temperature.
Worksheet on Average.
● Speed Distance and Time.
To find Speed when Distance and Time are given.
To find the Distance when Speed and Time are given.
To find Time when Distance and Speed are given.
Worksheet on Speed, Distance and Time.
● Unitary Method.
Worksheet on Measurement.
5th Grade Math Problems
A fraction (from Latin: fractus, "broken") represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters.
A common, vulgar, or simple fraction (for example 3/17) consists of an integer numerator, displayed above a line (or before a slash), and a non-zero integer denominator, displayed below (or after) that line. The numerator represents a number of equal parts and the denominator indicates how many of those parts make up a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole. The picture to the right illustrates 3/4 of a cake. Numerators and denominators are also used in fractions that are not simple, including compound fractions, complex fractions, and mixed numerals.
Fractional numbers can also be written without using explicit numerators or denominators, by using decimals, percent signs, or negative exponents (as in 0.01, 1%, and 10−2 respectively, all of which are equivalent to 1/100). An integer such as the number 7 can be thought of as having an implied denominator of one: 7 equals 7/1.
Other uses for fractions are to represent ratios and to represent division. Thus the fraction 3/4 is also used to represent the ratio 3:4 (the ratio of the part to the whole) and the division 3 ÷ 4 (three divided by four).
In mathematics the set of all numbers which can be expressed in the form a/b, where a and b are integers and b is not zero, is called the set of rational numbers and is represented by the symbol Q, which stands for quotient. The test for a number being a rational number is that it can be written in that form (i.e., as a common fraction). However, the word fraction is also used to describe mathematical expressions that are not rational numbers, for example algebraic fractions (quotients of algebraic expressions), and expressions that contain irrational numbers, such as √2/2 (see square root of 2) and π/4 (see proof that π is irrational).
Forms of fractions
Common, vulgar, or simple fractions
A common fraction (also known as a vulgar fraction or simple fraction) is a rational number written as a/b, or with the numerator set above the denominator on a horizontal line, where the integers a and b are called the numerator and the denominator, respectively. The numerator represents a number of equal parts, and the denominator, which cannot be zero, indicates how many of those parts make up a unit or a whole. In the examples 2/5 and 7/3, the slanting line is called a solidus or forward slash. When the same fractions are written with the numerator above the denominator, the horizontal line is called a vinculum or, informally, a "fraction bar".
Writing simple fractions
Scientific publishing distinguishes four ways to set fractions, together with guidelines on use:
- special fractions: fractions that are presented as a single character with a slanted bar, with roughly the same height and width as other characters in the text. Generally used for simple fractions, such as: ½, ⅓, ⅔, ¼, and ¾. Since the numerals are smaller, legibility can be an issue, especially for small-sized fonts. These are not used in modern mathematical notation, but in other contexts;
- case fractions: similar to special fractions, but with a horizontal bar, thus making them upright. An example would be 1/2 set with a horizontal bar, but rendered with the same height as other characters;
- shilling fractions: 1/2, so called because this notation was used for pre-decimal British currency (£sd), as in 2/6 for a half crown, meaning two shillings and six pence. While the notation "two shillings and six pence" did not represent a fraction, the forward slash is now used in fractions, especially for fractions inline with prose (rather than displayed), to avoid uneven lines. It is also used for fractions within fractions (complex fractions) or within exponents to increase legibility;
- built-up fractions: full-size stacked fractions of the kind used in displayed equations. This notation uses two or more lines of ordinary text, and results in a variation in spacing between lines when included within other text. While large and legible, these can be disruptive, particularly for simple fractions or within complex fractions.
A ratio is a relationship between two or more numbers that can be sometimes expressed as a fraction. Typically, a number of items are grouped and compared in a ratio, specifying numerically the relationship between each group. Ratios are expressed as "group 1 to group 2 ... to group n". For example, if a car lot had 12 vehicles of which
- 2 are white,
- 6 are red,
- 4 are yellow
The ratio of red to white to yellow cars is 6 to 2 to 4. The ratio of yellow cars to white cars is 4 to 2 and may be expressed as 4:2 or 2:1.
A ratio may be typically converted to a fraction when it is expressed as a ratio to the whole. In the above example, the ratio of yellow cars to the total cars in the lot is 4:12 or 1:3. We can convert these ratios to a fraction and say that 4/12 of the cars or 1/3 of the cars in the lot are yellow. Therefore, if a person randomly chose one car on the lot, then there is a one in three chance or probability that it would be yellow.
Proper and improper common fractions
Common fractions can be classified as either proper or improper. When the numerator and the denominator are both positive, the fraction is called proper if the numerator is less than the denominator, and improper otherwise. In general, a common fraction is said to be a proper fraction if the absolute value of the fraction is strictly less than one—that is, if the fraction is between -1 and 1 (but not equal to -1 or 1). It is said to be an improper fraction (U.S., British or Australian) or top-heavy fraction (British, occasionally North America) if the absolute value of the fraction is greater than or equal to 1. Examples of proper fractions are 2/3, -3/4, and 4/9; examples of improper fractions are 9/4, -4/3, and 8/3.
Mixed numbers
A mixed numeral (often called a mixed number, also called a mixed fraction) is the sum of a non-zero integer and a proper fraction. This sum is implied without the use of any visible operator such as "+". For example, in referring to two entire cakes and three quarters of another cake, the whole and fractional parts of the number are written next to each other: 2 3/4.
This is not to be confused with the algebra rule of implied multiplication. When two algebraic expressions are written next to each other, the operation of multiplication is said to be "understood"; there, writing a number next to an expression denotes a product, not a mixed number.
To avoid confusion between the two readings, the multiplication is often expressed explicitly with a multiplication sign or parentheses.
An improper fraction is another way to write a whole plus a part. A mixed number can be converted to an improper fraction as follows:
- Write the mixed number as a sum: 2 3/4 = 2 + 3/4.
- Convert the whole number to an improper fraction with the same denominator as the fractional part: 2 = 8/4.
- Add the fractions. The resulting sum is the improper fraction. In the example, 8/4 + 3/4 = 11/4.
Similarly, an improper fraction can be converted to a mixed number as follows:
- Divide the numerator by the denominator. In the example, 11/4, divide 11 by 4. 11 ÷ 4 = 2 with remainder 3.
- The quotient (without the remainder) becomes the whole number part of the mixed number. The remainder becomes the numerator of the fractional part. In the example, 2 is the whole number part and 3 is the numerator of the fractional part.
- The new denominator is the same as the denominator of the improper fraction. In the example, they are both 4. Thus 11/4 = 2 3/4.
Mixed numbers can also be negative; in that case the minus sign applies to the whole value (both the integer part and the fraction), for example −2 3/4 = −11/4.
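As a rough illustration of the two procedures above, here is a minimal Python sketch; the function names are illustrative, not from any standard library, and negative mixed numbers are ignored for simplicity.

# Convert a mixed number (whole, numerator, denominator) to an improper fraction.
def mixed_to_improper(whole, num, den):
    # 2 3/4  ->  (2*4 + 3)/4 = 11/4
    return whole * den + num, den

# Convert an improper fraction back to a mixed number by integer division.
def improper_to_mixed(num, den):
    whole, remainder = divmod(num, den)   # 11 // 4 = 2, remainder 3
    return whole, remainder, den

print(mixed_to_improper(2, 3, 4))   # (11, 4)
print(improper_to_mixed(11, 4))     # (2, 3, 4)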
Reciprocals and the "invisible denominator"
The reciprocal of a fraction is another fraction with the numerator and denominator reversed: the reciprocal of a/b, for instance, is b/a. The product of a fraction and its reciprocal is 1, hence the reciprocal is the multiplicative inverse of a fraction. Any integer can be written as a fraction with the number one as denominator. For example, 17 can be written as 17/1, where 1 is sometimes referred to as the invisible denominator. Therefore, every fraction or integer except for zero has a reciprocal. The reciprocal of 17 is 1/17.
Complex fractions
In a complex fraction, either the numerator, or the denominator, or both, is a fraction or a mixed number, corresponding to division of fractions. To reduce a complex fraction to a simple fraction, treat the longest fraction line as representing division; for example, a fraction with numerator 1/2 and denominator 1/3 reduces to 1/2 × 3/1 = 3/2.
If, in a complex fraction, there is no clear way to tell which fraction line takes precedence, then the expression is improperly formed, and meaningless.
Compound fractions
A compound fraction is a fraction of a fraction, or any number of fractions connected with the word of, corresponding to multiplication of fractions. To reduce a compound fraction to a simple fraction, just carry out the multiplication (see the section on multiplication). For example, two thirds of three quarters is a compound fraction, corresponding to 2/3 × 3/4 = 6/12 = 1/2. The terms compound fraction and complex fraction are closely related and sometimes one is used as a synonym for the other.
Decimal fractions and percentages
A decimal fraction is a fraction whose denominator is not given explicitly, but is understood to be an integer power of ten. Decimal fractions are commonly expressed using decimal notation in which the implied denominator is determined by the number of digits to the right of a decimal separator, the appearance of which (e.g., a period, a raised period (•), a comma) depends on the locale (for examples, see decimal separator). Thus for 0.75 the numerator is 75 and the implied denominator is 10 to the second power, viz. 100, because there are two digits to the right of the decimal separator. In decimal numbers greater than 1 (such as 3.75), the fractional part of the number is expressed by the digits to the right of the decimal (with a value of 0.75 in this case). 3.75 can be written either as an improper fraction, 375/100, or as a mixed number, 3 75/100 (which reduces to 3 3/4).
Decimal fractions can also be expressed using scientific notation with negative exponents, such as 6.023×10−7, which represents 0.0000006023. The 10−7 represents a denominator of 10 to the seventh power (10,000,000); dividing by 10,000,000 moves the decimal point 7 places to the left.
Decimal fractions with infinitely many digits to the right of the decimal separator represent an infinite series. For example, 1/3 = 0.333... represents the infinite series 3/10 + 3/100 + 3/1000 + ... .
Another kind of fraction is the percentage (Latin per centum meaning "per hundred", represented by the symbol %), in which the implied denominator is always 100. Thus, 51% means 51/100. Percentages greater than 100 or less than zero are treated in the same way, e.g. 311% equals 311/100, and -27% equals -27/100.
The related concept of permille or parts per thousand has an implied denominator of 1000, while the more general parts-per notation, as in 75 parts per million, means that the proportion is 75/1,000,000.
Whether common fractions or decimal fractions are used is often a matter of taste and context. Common fractions are used most often when the denominator is relatively small. By mental calculation, it is easier to multiply 16 by 3/16 than to do the same calculation using the fraction's decimal equivalent (0.1875). And it is more accurate to multiply 15 by 1/3, for example, than it is to multiply 15 by any decimal approximation of one third. Monetary values are commonly expressed as decimal fractions, for example $3.75. However, as noted above, in pre-decimal British currency, shillings and pence were often given the form (but not the meaning) of a fraction, as, for example 3/6 (read "three and six") meaning 3 shillings and 6 pence, and having no relationship to the fraction 3/6.
Special cases
- A unit fraction is a vulgar fraction with a numerator of 1, e.g. 1/2 or 1/3. Unit fractions can also be expressed using negative exponents, as in 2−1 which represents 1/2, and 2−2 which represents 1/(22) or 1/4.
- An Egyptian fraction is the sum of distinct positive unit fractions, for example 1/2 + 1/3. This definition derives from the fact that the ancient Egyptians expressed all fractions except 1/2, 2/3 and 3/4 in this manner. Every positive rational number can be expanded as an Egyptian fraction, and any positive rational number can be written as a sum of distinct unit fractions in infinitely many ways.
Arithmetic with fractions
Equivalent fractions
Multiplying the numerator and denominator of a fraction by the same (non-zero) number results in a fraction that is equivalent to the original fraction. This is true because for any non-zero number n, the fraction n/n equals 1. Therefore, multiplying by n/n is equivalent to multiplying by one, and any number multiplied by one has the same value as the original number. By way of an example, start with the fraction 1/2. When the numerator and denominator are both multiplied by 2, the result is 2/4, which has the same value (0.5) as 1/2. To picture this visually, imagine cutting a cake into four pieces; two of the pieces together (2/4) make up half the cake (1/2).
Dividing the numerator and denominator of a fraction by the same non-zero number will also yield an equivalent fraction. This is called reducing or simplifying the fraction. A simple fraction in which the numerator and denominator are coprime (that is, the only positive integer that goes into both the numerator and denominator evenly is 1) is said to be irreducible, in lowest terms, or in simplest terms. For example, 3/9 is not in lowest terms because both 3 and 9 can be exactly divided by 3. In contrast, 3/8 is in lowest terms—the only positive integer that goes into both 3 and 8 evenly is 1.
Using these rules, we can show, for example, that 1/2 = 2/4 = 4/8.
A common fraction can be reduced to lowest terms by dividing both the numerator and denominator by their greatest common divisor. For example, as the greatest common divisor of 63 and 462 is 21, the fraction 63/462 can be reduced to lowest terms by dividing the numerator and denominator by 21: 63/462 = 3/22.
The Euclidean algorithm gives a method for finding the greatest common divisor of any two positive integers.
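The reduction and the Euclidean algorithm can be sketched in a few lines of Python (math.gcd would do the same job; the helper names here are only illustrative):

def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b).
    while b:
        a, b = b, a % b
    return a

def reduce_fraction(num, den):
    g = gcd(num, den)
    return num // g, den // g

print(reduce_fraction(63, 462))   # (3, 22), dividing both by their gcd 21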
Comparing fractions
Comparing fractions with the same denominator only requires comparing the numerators.
- For example, 3/4 > 2/4, because 3 > 2.
If two positive fractions have the same numerator, then the fraction with the smaller denominator is the larger number. When a whole is divided into equal pieces, if fewer equal pieces are needed to make up the whole, then each piece must be larger. When two positive fractions have the same numerator, they represent the same number of parts, but in the fraction with the smaller denominator, the parts are larger.
One way to compare fractions with different numerators and denominators is to find a common denominator. To compare a/b and c/d, these are converted to ad/bd and cb/bd. Then bd is a common denominator and the numerators ad and bc can be compared.
- a/b ? c/d becomes ad/bd ? bc/bd
It is not necessary to determine the value of the common denominator to compare fractions. This short cut is known as "cross multiplying" – you can just compare ad and bc, without computing the denominator.
For example, to compare 5/18 and 4/17, multiply the top and bottom of each fraction by the denominator of the other fraction, to get a common denominator:
The denominators are now the same, but it is not necessary to calculate their value – only the numerators need to be compared. Since 5×17 (= 85) is greater than 4×18 (= 72), 5/18 > 4/17.
Also note that every negative number, including negative fractions, is less than zero, and every positive number, including positive fractions, is greater than zero, so every negative fraction is less than any positive fraction.
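Cross multiplication is easy to sketch in Python; the function below assumes positive denominators and is illustrative only:

def compare(a, b, c, d):
    # Compare a/b and c/d (b and d positive): a/b ? c/d  <=>  a*d ? b*c
    left, right = a * d, b * c
    if left > right:
        return ">"
    if left < right:
        return "<"
    return "="

print(compare(5, 18, 4, 17))   # '>' because 5*17 = 85 exceeds 4*18 = 72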
The first rule of addition is that only like quantities can be added; for example, various quantities of quarters. Unlike quantities, such as adding thirds to quarters, must first be converted to like quantities as described below: Imagine a pocket containing two quarters, and another pocket containing three quarters; in total, there are five quarters. Since four quarters is equivalent to one (dollar), this can be represented as follows: 2/4 + 3/4 = 5/4 = 1 1/4.
Adding unlike quantities
To add fractions containing unlike quantities (e.g. quarters and thirds), it is necessary to convert all amounts to like quantities. It is easy to work out the chosen type of fraction to convert to; simply multiply together the two denominators (bottom number) of each fraction.
For adding quarters to thirds, both types of fraction are converted to twelfths, thus: 1/4 + 1/3 = 3/12 + 4/12 = 7/12.
Consider adding a quantity measured in fifths to a quantity measured in thirds:
First, convert the fifths into fifteenths by multiplying both the numerator and denominator by three. Since 3/3 equals 1, multiplication by 3/3 does not change the value of the fraction.
Second, convert the thirds into fifteenths by multiplying both the numerator and denominator by five.
Both fractions now have the common denominator 15, so their numerators can simply be added.
This method can be expressed algebraically:
a/b + c/d = (ad + bc)/bd
And for expressions consisting of the addition of three fractions:
a/b + c/d + e/f = (adf + cbf + ebd)/bdf
This method always works, but sometimes there is a smaller denominator that can be used (a least common denominator). For example, to add fractions with denominators 4 and 12, the denominator 48 can be used (the product of 4 and 12), but the smaller denominator 12 may also be used, being the least common multiple of 4 and 12.
The process for subtracting fractions is, in essence, the same as that of adding them: find a common denominator, and change each fraction to an equivalent fraction with the chosen common denominator. The resulting fraction will have that denominator, and its numerator will be the result of subtracting the numerators of the original fractions. For instance, 2/3 − 1/4 = 8/12 − 3/12 = 5/12.
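A minimal Python sketch of addition and subtraction over a common denominator (the results are left unreduced; in practice Python's fractions.Fraction handles all of this):

def add(a, b, c, d):
    # a/b + c/d = (a*d + c*b) / (b*d); the result is not reduced here.
    return a * d + c * b, b * d

def subtract(a, b, c, d):
    # a/b - c/d = (a*d - c*b) / (b*d)
    return a * d - c * b, b * d

print(add(1, 4, 1, 3))        # (7, 12): quarters and thirds become twelfths
print(subtract(2, 3, 1, 4))   # (5, 12)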
Multiplying a fraction by another fraction
To multiply fractions, multiply the numerators and multiply the denominators. Thus: 2/3 × 3/4 = 6/12, which reduces to 1/2.
Why does this work? First, consider one third of one quarter. Using the example of a cake, if three small slices of equal size make up a quarter, and four quarters make up a whole, twelve of these small, equal slices make up a whole. Therefore a third of a quarter is a twelfth. Now consider the numerators. The first fraction, two thirds, is twice as large as one third. Since one third of a quarter is one twelfth, two thirds of a quarter is two twelfths. The second fraction, three quarters, is three times as large as one quarter, so two thirds of three quarters is three times as large as two thirds of one quarter. Thus two thirds times three quarters is six twelfths.
A short cut for multiplying fractions is called "cancellation". In effect, we reduce the answer to lowest terms during multiplication. For example, in 2/3 × 3/4 the common factors can be divided out before multiplying, leaving 1/1 × 1/2 = 1/2.
A two is a common factor in both the numerator of the left fraction and the denominator of the right and is divided out of both. Three is a common factor of the left denominator and right numerator and is divided out of both.
Multiplying a fraction by a whole number
Place the whole number over one and multiply: for example, 6 × 3/4 = 6/1 × 3/4 = 18/4, which reduces to 4 1/2.
This method works because the fraction 6/1 means six equal parts, each one of which is a whole.
Mixed numbers
When multiplying mixed numbers, it's best to convert the mixed number into an improper fraction. For example: 3 × 2 3/4 = 3 × 11/4 = 33/4 = 8 1/4.
In other words, 2 3/4 is the same as 11/4, making 11 quarters in total (because 2 cakes, each split into quarters, makes 8 quarters total) and 33 quarters is 8 1/4, since 8 cakes, each made of quarters, is 32 quarters in total.
To divide a fraction by a whole number, you may either divide the numerator by the number, if it goes evenly into the numerator, or multiply the denominator by the number. For example, dividing 10/3 by 5 gives 2/3 (dividing the numerator) and also 10/15 (multiplying the denominator), which reduces to 2/3. To divide a number by a fraction, multiply that number by the reciprocal of that fraction. Thus, (a/b) ÷ (c/d) = (a/b) × (d/c) = ad/bc.
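And a matching sketch for multiplication and division (again unreduced, with illustrative function names):

def multiply(a, b, c, d):
    # a/b * c/d = (a*c)/(b*d)
    return a * c, b * d

def divide(a, b, c, d):
    # Dividing by c/d is multiplying by its reciprocal d/c.
    return a * d, b * c

print(multiply(2, 3, 3, 4))   # (6, 12), i.e. 1/2 after reduction
print(divide(1, 2, 3, 4))     # (4, 6), i.e. 2/3 after reduction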
Converting between decimals and fractions
To change a common fraction to a decimal, divide the denominator into the numerator. Round the answer to the desired accuracy. For example, to change 1/4 to a decimal, divide 4 into 1.00, to obtain 0.25. To change 1/3 to a decimal, divide 3 into 1.0000..., and stop when the desired accuracy is obtained. Note that 1/4 can be written exactly with two decimal digits, while 1/3 cannot be written exactly with any finite number of decimal digits.
To change a decimal to a fraction, write in the denominator a 1 followed by as many zeroes as there are digits to the right of the decimal point, and write in the numerator all the digits in the original decimal, omitting the decimal point. Thus 12.3456 = 123456/10000.
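A short Python sketch of both directions, using the standard fractions module for the exact conversion; the sample values are arbitrary:

from fractions import Fraction

# Fraction -> decimal: divide the numerator by the denominator.
print(1 / 4)               # 0.25, exact with two decimal digits
print(round(1 / 3, 6))     # 0.333333, only an approximation

# Terminating decimal -> fraction: digits over a power of ten, then reduce.
print(Fraction("12.3456"))  # 123456/10000 reduced to 7716/625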
Converting repeating decimals to fractions
Decimal numbers, while arguably more useful to work with when performing calculations, sometimes lack the precision that common fractions have. Sometimes an infinite number of repeating decimals is required to convey the same kind of precision. Thus, it is often useful to convert repeating decimals into fractions.
The preferred way to indicate a repeating decimal is to place a bar over the digits that repeat; for example, 0.789 with a bar over the 789 means 0.789789789… For repeating patterns that begin immediately after the decimal point, dividing the repeating block by the same number of nines as it has digits will suffice. For example (repetition shown here with dots):
- 0.555… = 5/9
- 0.6262… = 62/99
- 0.264264… = 264/999
- 0.62916291… = 6291/9999
Where zeros precede the repeating pattern, the nines in the denominator are followed by the same number of zeros:
- 0.0555… = 5/90
- 0.000392392… = 392/999000
- 0.00121212… = 12/9900
In case a non-repeating set of decimals precedes the pattern (such as 0.1523987987…, where 987 repeats), we can write the number as the sum of the non-repeating and repeating parts, respectively:
- 0.1523 + 0.0000987987…
Then, convert the repeating part to a fraction:
- 0.1523 + 987/9990000
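The rule can be sketched as a small Python function; the function name and arguments are illustrative, and it assumes the repeating block starts right after any non-repeating prefix:

from fractions import Fraction

def repeating_to_fraction(prefix, block):
    # prefix: non-repeating digits after the point, e.g. "1523" (may be "")
    # block:  repeating block, e.g. "987", so the value is 0.1523987987...
    shift = 10 ** len(prefix)
    non_repeating = Fraction(int(prefix), shift) if prefix else Fraction(0)
    repeating = Fraction(int(block), (10 ** len(block) - 1) * shift)
    return non_repeating + repeating

print(repeating_to_fraction("", "789"))      # 263/333 (= 789/999)
print(repeating_to_fraction("1523", "987"))  # 0.1523987987... as an exact fraction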
Fractions in abstract mathematics
In addition to being of great practical importance, fractions are also studied by mathematicians, who check that the rules for fractions given above are consistent and reliable. Mathematicians define a fraction as an ordered pair (a, b) of integers a and b ≠ 0, for which the operations addition, subtraction, multiplication, and division are defined as follows:
- (a, b) + (c, d) = (ad + bc, bd)
- (a, b) − (c, d) = (ad − bc, bd)
- (a, b) × (c, d) = (ac, bd)
- (a, b) ÷ (c, d) = (ad, bc) (when c ≠ 0)
In addition, an equivalence relation is specified as follows: (a, b) ~ (c, d) if and only if ad = bc.
These definitions agree in every case with the definitions given above; only the notation is different.
More generally, a and b may be elements of any integral domain R, in which case a fraction is an element of the field of fractions of R. For example, when a and b are polynomials in one indeterminate, the field of fractions is the field of rational fractions (also known as the field of rational functions). When a and b are integers, the field of fractions is the field of rational numbers.
Algebraic fractions
If the numerator and the denominator are polynomials, the algebraic fraction is called a rational fraction (or rational expression). An irrational fraction is one that contains the variable under a fractional exponent or root.
The terminology used to describe algebraic fractions is similar to that used for ordinary fractions. For example, an algebraic fraction is in lowest terms if the only factors common to the numerator and the denominator are 1 and –1. An algebraic fraction whose numerator or denominator, or both, contain a fraction is called a complex fraction.
Rational numbers are the quotient field of integers. Rational expressions are the quotient field of the polynomials (over some integral domain). Since a coefficient is a polynomial of degree zero, a radical expression such as √2/2 is a rational fraction. Another example (over the reals) is π/2, the radian measure of a right angle.
The term partial fraction is used when decomposing rational expressions into sums. The goal is to write the rational expression as the sum of other rational expressions with denominators of lesser degree. For example, the rational expression 2x/(x² − 1) can be rewritten as the sum of two fractions: 1/(x + 1) + 1/(x − 1). This is useful in many areas such as integral calculus and differential equations.
Radical expressions
A fraction may also contain radicals in the numerator and/or the denominator. If the denominator contains radicals, it can be helpful to rationalize it (compare Simplified form of a radical expression), especially if further operations, such as adding or comparing that fraction to another, are to be carried out. It is also more convenient if division is to be done manually. When the denominator is a monomial square root, it can be rationalized by multiplying both the top and the bottom of the fraction by the denominator: a/√b = a√b/b.
The process of rationalization of binomial denominators involves multiplying the top and the bottom of a fraction by the conjugate of the denominator so that the denominator becomes a rational number. For example: 1/(√a + √b) = (√a − √b)/(a − b).
Even if this process results in the numerator being irrational, like in the examples above, the process may still facilitate subsequent manipulations by reducing the number of irrationals one has to work with in the denominator.
Pronunciation and spelling
When reading fractions, it is customary in English to pronounce the denominator using ordinal nomenclature, as in "fifths" for fractions with a 5 in the denominator. Thus, for 3/5, we would say "three fifths" and for 5/32, we would say "five thirty-seconds". This generally applies to whole number denominators greater than 2, though large denominators that are not powers of ten are often read using the cardinal number. Therefore, 1/123 might be read "one one hundred twenty-third" but is often read "one over one hundred twenty-three". In contrast, because one million is a power of ten, 1/1,000,000 is commonly read "one-millionth" or "one one-millionth".
The denominators 1, 2, and 4 are special cases. The fraction 3/1 may be read "three wholes". The fraction 3/2 is usually read "three-halves", but never "three seconds". The fraction 3/4 may be read either "three fourths" or "three-quarters". Furthermore, since most fractions are used grammatically as adjectives of a noun, the fractional modifier is hyphenated. This is evident in standard prose in which one might write about "every two-tenths of a mile", "the quarter-mile run", or the Three-Fifths Compromise. When the fraction's numerator is one, then the word "one" may be omitted, such as "every tenth of a second" or "during the final quarter of the year".
The earliest fractions were reciprocals of integers: ancient symbols representing one part of two, one part of three, one part of four, and so on. The Egyptians used Egyptian fractions ca. 1000 BC. About 4,000 years ago Egyptians divided with fractions using slightly different methods. They used least common multiples with unit fractions. Their methods gave the same answer as modern methods. The Egyptians also had a different notation for dyadic fractions in the Akhmim Wooden Tablet and several Rhind Mathematical Papyrus problems.
The Greeks used unit fractions and, later, continued fractions. Followers of the Greek philosopher Pythagoras, ca. 530 BC, discovered that the square root of two cannot be expressed as a fraction. In about 150 BC, Jain mathematicians in India wrote the "Sthananga Sutra", which contains work on the theory of numbers, arithmetical operations, and operations with fractions.
The method of putting one number below the other and computing fractions first appeared in Aryabhatta's work around 499 CE. In Sanskrit literature, fractions, or rational numbers were always expressed by an integer followed by a fraction. When the integer is written on a line, the fraction is placed below it and is itself written on two lines, the numerator called amsa part on the first line, the denominator called cheda “divisor” on the second below. If the fraction is written without any particular additional sign, one understands that it is added to the integer above it. If it is marked by a small circle or a cross (the shape of the “plus” sign in the West) placed on its right, one understands that it is subtracted from the integer. For example, Bhaskara I writes
- ६ १ २
- १ १ १०
- ४ ५ ९
- 6 1 2
- 1 1 1०
- 4 5 9
to denote 6+1/4, 1+1/5, and 2–1/9
Al-Hassār, a Muslim mathematician from Fez, Morocco specializing in Islamic inheritance jurisprudence during the 12th century, first mentions the use of a fractional bar, where numerators and denominators are separated by a horizontal bar. In his discussion he writes, "... for example, if you are told to write three-fifths and a third of a fifth, write thus, ." This same fractional notation appears soon after in the work of Leonardo Fibonacci in the 13th century.
"The introduction of decimal fractions as a common computational practice can be dated back to the Flemish pamphlet De Thiende, published at Leyden in 1585, together with a French translation, La Disme, by the Flemish mathematician Simon Stevin (1548-1620), then settled in the Northern Netherlands. It is true that decimal fractions were used by the Chinese many centuries before Stevin and that the Persian astronomer Al-Kāshī used both decimal and sexagesimal fractions with great ease in his Key to arithmetic (Samarkand, early fifteenth century)."
While the Persian mathematician Jamshīd al-Kāshī claimed to have discovered decimal fractions himself in the 15th century, J. Lennart Berggren notes that he was mistaken, as decimal fractions were first used five centuries before him by the Baghdadi mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.
Pedagogical tools
In primary schools, fractions have been demonstrated through Cuisenaire rods, fraction bars, fraction strips, fraction circles, paper (for folding or cutting), pattern blocks, pie-shaped pieces, plastic rectangles, grid paper, dot paper, geoboards, counters and computer software.
How the parabola is used for focusing waves, studying motion, describing orbits, surveying and making bridges, and how to draw one
Parabola is, of course, from Greek, and refers to a particular plane curve. The word παραβολή (parabolē) means a "comparison," literally "a throwing beside." The accent is on the last syllable. It is the same word as parable, but Pythagoras gave it a technical mathematical meaning as the fundamental operation in his method of "application of areas," a geometrical substitute for what we would now call algebra. When the curves known as conic sections were described by Apollonius, the classification of the curves rested on a certain comparison of areas. With the parabola, the two areas were equal, so the curve was so-named. With the hyperbola, an area was an excess, ὑπερβολή (hyperbolē), while with the ellipse, it was a lack, ἔλλειψις (elleipsis). One of the areas was what we would now call y2, while the other was 2px. Thus, for a parabola, y2 = 2px, which is the equation of the curve. For details, see Sir Thomas Heath, A History of Greek Mathematics (New York: Dover, 1981), vol. II, pp 134-138. Apollonius of Perga is famous for his masterful study of the conic sections, including these definitions, but their basic properties were already known by Euclid, around 300 BC, who published a book on them. As we shall see, the parabola is second only to the circle in usefulness.
The wonderful curves known as the conic sections--circle, ellipse, parabola and hyperbola--are most conveniently studied in geometry as sections of a cone, in general an oblique cone, with a circular base. By "section" is meant the curve of intersection of a plane with the surface of the cone. If the plane is parallel to a generator of the cone, the curve will be a parabola. This definition is otherwise of very little practical use, so we need not consider it further. Except for enjoyment, we now use algebra instead of geometry to study conic sections. In analytic geometry, the conic sections are described by the general quadric, Ax2 + Bxy + Cy2 + Dx + Ey + F = 0. The discriminant Δ = B2 - 4AC is an invariant under rotation of the coordinates. If Δ < 0, the curve is an ellipse; if Δ > 0, the curve is a hyperbola; if Δ = 0, the curve is a parabola. Degenerate conic sections that are parallel or intersecting straight lines are also described by this equation.
To draw a parabola, a good method is shown at the right. The span AB and the height VD are given. Extend VD to C, making VC = VD, and draw CA and CB, which will be the tangents to the parabola at points A and B. Now divide CA and CB into any number of equal segments (8 in the diagram) by any of the methods familiar to draftsmen. Connecting points as shown will draw additional tangents to the parabola, which makes the curve easy to draw freehand or with a French curve. In fact, the parabola is quite evident to the eye in the diagram. Methods of drawing curves that give tangents are always more useful than those that give points only. When actually drawing a parabola, it is only necessary to draw short segments of the tangents. The theory of this method depends on some obscure properties of tangents to a parabola, so it is best to regard it as a revealed wonder. More methods of drawing parabolas are presented below.
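The construction can be imitated numerically. The sketch below is plain Python with coordinates, the number of divisions, and the lerp helper chosen for illustration; it assumes the usual correspondence in which the point a fraction t of the way from A to C is joined to the point the same fraction of the way from C to B, which gives a tangent line whose point at the same fraction t lies on the parabola.

def lerp(p, q, t):
    # Point dividing segment pq in ratio t : (1 - t), measured from p.
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

# Example data: span AB of 8 units, height VD of 2 units, so C is 4 units above AB.
A, B = (-4.0, 0.0), (4.0, 0.0)
C = (0.0, 4.0)

n = 8                                  # number of equal divisions, as in the figure
for i in range(1, n):
    t = i / n
    p_on_CA = lerp(A, C, t)            # i-th division point going from A toward C
    p_on_CB = lerp(C, B, t)            # i-th division point going from C toward B
    # The segment joining these two points is tangent to the parabola;
    # the point dividing it in the same ratio t lies on the curve itself.
    on_curve = lerp(p_on_CA, p_on_CB, t)
    print(p_on_CA, p_on_CB, on_curve)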
If you take a straight line, which is called the directrix, and some point not on the line, which is called the focus, then a parabola is the locus of the points that are equidistant from the focus and the directrix. This is called the focal property of the parabola. The distance from the directrix is, of course, the perpendicular distance. This definition was not used by Apollonius, who did not even use the concept of the focus, which in Latin means fireplace or hearth. The focusing property of concave mirrors was well-known in antiquity, and they were used for kindling fires. The line through the focus perpendicular to the directrix is the axis of the parabola.
All of the conic sections can be defined as curves such that the ratio of the distances from a point on the curve to a straight line (the directrix) and a point (the focus) is a constant, called the eccentricity e. This is called the focal definition. If the directrix is taken as the y-axis, and the focus is at (p,0), then this definition gives [(x - p)2 + y2]1/2/x = e. Multiplying out, x2(1 - e2) + y2 - 2px + p2 = 0. It's easy to see that this quadric is an ellipse if e < 1, a hyperbola if e > 1, and a parabola if e = 1. The ellipse and hyperbola are called central conics, for which the origin is usually taken at the centre. If this is done by making the substitution x = x' + h in the equation, then h = p/(1 - e2). For an ellipse, this puts the directrices at x' = ±(a/e); that is, on the two sides. As these directrices recede to infinity, the ellipse becomes a circle. We can show that e = c/a, which is the usual definition of the eccentricity of an ellipse. Here, a is the semimajor axis and c is the focal distance. The directrices of a hyperbola are between the branches, again at distances ±(a/e), but here e > 1. The focal definitions of the ellipse and hyperbola are not of much use in applications, in contrast with the focal definition of the parabola.
If you make a concave mirror whose cross-sections are parabolas, and direct its axis toward the sun, the sunlight will accurately converge on the focus, creating great heat. This is the basis of the solar furnace, a quite practical device for creating great heat in a small volume. Such a mirror is a three-dimensional surface called a paraboloid, the locus of points equidistant from a focus and a plane, or a parabola rotated about its axis. If you consider plane wavefronts incident on the mirror, reflected at the mirror and converging on the focus, the path lengths along any ray to the focus will be the same, so the waves will all be in phase there. This is a proof from the properties of light waves that the tangent to the parabola at the point where a ray from the focus strikes makes equal angles with the incident and reflected rays. Alternatively, the normal at that point (perpendicular to the tangent) bisects the angle between the two rays.
Paraboloids are found very commonly as reflectors of waves, in satellite antennas ("dishes"), searchlights and listening devices. They bring any kind of waves to a focus, whether light, radio waves, sound, or even water waves. A bay with a parabolic beach will concentrate waves at the focus. I do not know of any examples of this, but similar effects of ocean wave refraction are well known. All of these uses depend on the focal property. A spherical concave mirror also has this property, but it is only approximate. To do better than a spherical mirror, a paraboloid must be rather accurate. In small sizes, it is easier to make an accurate spherical mirror than an accurate paraboloid, and it is also better than a bad paraboloid. The paraboloid must be accurately pointed to profit from its advantages. Otherwise, a spherical mirror is actually superior.
From the focal definition, it is easy to get an analytical representation of the parabola. Choose rectangular axes with the origin at the vertex of the parabola, the point halfway between the focus and the directrix (it must be halfway, of course!). Let the x-axis be the axis of the parabola, and the y-axis perpendicular to it. Let p be the distance from the focus to the directrix. Thus, the focus is at (p/2,0). Consider any point (x,y). Its distance from the directrix is x + p/2, while its distance from the focus is [(x - p/2)2 + y2]1/2. Setting these equal, squaring and simplifying, we find y2 = 2px, the equation of the parabola. This is a very useful definition, since we may now use all the power of analytical geometry and calculus to answer any questions we might have about normals, tangents, and so forth.
If we let x = p/2 (at the focus), we find y = p. The parameter p is called the semi-latus rectum or parameter of the parabola, and specifies its size, as the radius specifies the size of a circle. In fact, the center is the focus of a circle, and its radius is defined analogously to p. Just as all circles are similar, so are all parabolas. Parabolas differ in size, not in shape. If stretched or shrunk uniformly, any parabola can be made to coincide with any other (as is true for circles, and obviously untrue for ellipses). Unlike hyperbolas, parabolas do not have asymptotes. They continue to expand without approaching a straight line.
If we take the derivative of y2 = 2px with respect to x, we find 2yy' = 2p, or y' = p/y or y' = y/2x (y' = dy/dx here). This says that the slope of the tangent at (x,y) is y' = dy/dx = p/y. Since the normal is perpendicular to the tangent, its slope is dy/dx = -y/p. These facts can be used to prove that the angle between the line from the focus to (x,y) and the normal is equal to the angle between the normal and the line y = constant, showing that the paraboloidal mirror obeys the law of reflection. In the figure at the right, the distance BC is equal to p, since the slope of the normal is -y/p, so the distance FC is x - p/2 + p = x + p/2. The distance FP is also equal to x + p/2, which can be found from Pythagoras' Theorem and the equation of the parabola. Therefore, the triangle FPC is isosceles, so the angles at the ends of its base PC are equal. But angle FCP = angle APC (since FC and PA are parallel), so angle FPC = angle CPA, and these are just the angles of incidence and reflection.
It is not necessary to have the vertex of the parabola in the reflector. If an off-axis portion of the paraboloid is used, the waves will still come together at the focus, but will be changed in direction. In this case, the paraboloid is much better than a spherical mirror, since the spherical mirror does not work well so far from the axis.
Suppose we have a number of parallel lines cutting a circle. The mid-points of the chords thus formed all lie on a straight line that passes through the center of the circle, a line we call a diameter. The parabola has a similar property, which is very useful in applications. If you connect the mid-points of the chords formed by parallel lines, you will get a line parallel to the axis of the parabola that is also called a diameter. A diameter of a parabola is always parallel to its axis. The tangent to the parabola at the point where a diameter intersects the curve, and the diameter, form two axes, generally oblique, to which the parabola can be referred. If y' (not the derivative, just another y) is measured along the tangent, and x' along the diameter, then y'2 = 2p'x', just as the parabola is described by y2 = 2px with respect to rectangular coordinates with origin at the vertex. Of course, p ≠ p' in general. A relation between them can be obtained, but is usually not necessary in applications.
If you are given the tangents at two points that are the ends of a parabolic arc, a diameter is a line joining the point where the two tangents intersect, and the mid-point of the chord joining the two tangent points. Then x' and y' axes can be found (the y' axis is the tangent at the end of the diameter) and the parabola between the two given points can be determined from its equation.
A tangent can be drawn to a parabola from an external point as shown in the figure. Suppose that the focus F and the directrix are known, and point P is given. Draw a circle with centre P and radius PF. This circle cuts the directrix at a point E. Bisect the line FE at Q, and draw PQ extended. This will be the tangent to the parabola. A horizontal line through E intersects the tangent at R, the tangent point on the parabola. To prove this, join PF, PE and FR. It is clear that the right triangles PEQ and PFQ are congruent, as are REQ and RFQ, since each pair of triangles has two sides equal. Since ER = FR, R must be a point on the parabola, from its focal property. Moreover, PR bisects the angle between the focal radius and the perpendicular from the directrix, so PR is a tangent. That this is the condition for the tangent can be seen by considering a point R' on the parabola near R. If R' is on the tangent, then it is also on the parabola in the limit. This construction, and its kite-shaped figure, is very similar to the corresponding theorem for the tangent to an ellipse.
If we know a point R on the parabola, and wish to draw the tangent through it, draw the focal radius and the perpendicular to the directrix from R, and bisect the angle formed.
Let's take as polar coordinates the radius from the focus, r, and the angle θ measured clockwise from the vertex to the radius r. Then, x = p/2 - r cos θ. We found above that r = x + p/2, so we have r = p - r cos θ, or r = p / (1 + cos θ), which is the polar equation of the parabola, again containing the single constant p. Now 1 + cos θ = 2 cos2 (θ/2), and sec2(θ/2) = 1 + tan2(θ/2), so we have r = (p/2)[1 + z2], where z = tan (θ/2). At the vertex, z = 0, while at the ends of the latus rectum, z = ± 1. This is often a convenient way to parameterize the equation.
Any conic section can be described by a polar equation r = p / (1 + e cos θ), where e is the eccentricity, the ratio of the distance from the center to the focus divided by the distance from the center to the vertex, usually expressed as e = c/a, as in the case of an ellipse with semiaxes a and b and c = √(a2 - b2). For a parabola, we have the limit as c and a both go to infinity, and e = 1. An ellipse has e < 1, a hyperbola e > 1. A circle, of course, has e = 0. Circles are easily drawn and their curvature and arc length are easily determined, making them very useful in practice.
The area of a parabolic sector (the area cut off between the parabola and a chord) is 2/3 the area of the parallelogram containing it, as shown in the figure. This is easily proved by integration in the case of rectangular coordinates, and the result for a general diameter then follows. Archimedes was the first to find this result, using the method of application of areas. The radius of curvature of a parabola is given by R = (p + 2x)3/2 / p1/2, which can be found by differentiation and the general formula for the curvature of a plane curve. At the vertex, x = 0, we see that R = p. This shows that the focus is located at R/2 from the vertex, as it would be for a circular mirror of radius R. The arc length of a parabola y = x2/2a from the vertex to a point (x,y) is s = (x/2)√(1 + x2/a2) + (a/2) ln[x/a + √(1 + x2/a2)].
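As a check on the closed form just quoted (with ln the natural logarithm), it can be compared against a crude numerical integration for the parabola y = x2/2a; the values of a and x below are arbitrary.

import math

def arc_length_closed(x, a):
    # s = (x/2)*sqrt(1 + x^2/a^2) + (a/2)*ln(x/a + sqrt(1 + x^2/a^2))
    root = math.sqrt(1 + (x / a) ** 2)
    return 0.5 * x * root + 0.5 * a * math.log(x / a + root)

def arc_length_numeric(x, a, steps=100000):
    # Integrate sqrt(1 + (dy/dx)^2) dx for y = x^2/(2a), where dy/dx = x/a.
    h = x / steps
    total = 0.0
    for i in range(steps):
        xm = (i + 0.5) * h                 # midpoint rule
        total += math.sqrt(1 + (xm / a) ** 2) * h
    return total

print(arc_length_closed(3.0, 2.0), arc_length_numeric(3.0, 2.0))   # should agree closely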
The motion of bodies under the influence of gravity alone leads naturally to parabolas, although in this application very little use is made of the geometric properties of parabolas. The algebraic representation is the most convenient. Often, no actual parabola is involved, only an abstraction of one, as when the time is one variable. Let's choose axes with x horizontal and y (perversely) upwards. A body dropped from the origin at t = 0 acquires velocity v = -gt at a uniform rate of -g m/s per second, called the acceleration of gravity, the absolute value of which is everywhere close to 9.8 m/s2 or 32.2 ft/s2. The distance travelled is the integral of the velocity with respect to time, or y = -gt2/2. This represents an abstract parabola of distance and time 2y/g = -x2, with parameter 1/g. Any quadratic dependence gives us such an abstract parabola, but the only curve involved would be in a graph of distance versus time, represented in two space dimensions. This curve is indeed a parabola, opening downwards, familiar to every student of elementary physics.
If a body is projected upwards with an initial velocity V, then at some time t = V/g, it comes to rest and then begins to fall back. The motion is described by y = Vt - gt2/2 at any time, so if this time is substituted, the height of the turning point is found to be y = H = V2/2g. By a proper choice of the three constants in the general quadratic y = at2 + bt + c, motions under gravity (or any constant acceleration) in one dimension with arbitrary initial position, velocity and time can be described. These facts are of considerable practical importance, since constant acceleration is often a reasonable approximation. One of the most useful special relations is v = √2gh.
The next degree of complexity is motion in two dimensions under the influence of gravity. As Galileo first clearly demonstrated, motions in the perpendicular directions are independent, and may be considered separately. In the vertical (y) direction, we assume motion with constant acceleration -g, as has just been described. In the horizontal (x) direction, we assume motion with constant velocity vx, described by x = vxt + x0. There are five parameters: two initial coordinates, two initial velocities, and one initial time. For simplicity, we make choices that eliminate as many as possible of them, but it is always easy to include them when necessary. For example, the motion may start from (0,0) at t = 0, leaving only the two initial velocities to be specified. If v is the initial velocity, and θ is the angle of projection, then vx = v cos θ and vy = v sin θ. The gunner's term "point blank" meant θ = 0. Galileo made a useful mathematical instrument, the gunner's quadrant, to aid the necessary calculations.
This problem is of interest because through the years people have enjoyed heaving rocks and bullets at each other. This projectile motion is a staple of elementary physics, where it serves to illustrate the nature of motion under constant acceleration. Where the velocities are small, it can be a pretty good approximation, but for larger velocities, such as exist in all practical artillery and firearms, it is rather useless, since air resistance introduces accelerations in addition to gravity, and these accelerations can even be larger and dominate the motion. It was long believed that a projectile moved more or less straight forward until its momentum was exhausted (violent motion), and then fell to earth (natural motion). This is actually not a bad description of what happens in practice, but teaches us no mechanics. Niccolo Tartaglia seems to have appreciated the approximately parabolic motion in his study of artillery, but Galileo understood things more in the modern way. Taking air resistance into account, the path of a projectile can be quite accurately calculated, but parabolas have little to do with it. The first use of electronic digital computers was in calculating artillery trajectories to produce firing tables in World War II.
The sketch at the right shows what happens in parabolic projectile motion. The curve in the diagram is not an accurate parabola, by the way, just an elliptical arc masquerading as one for ease of drawing. Everything can be worked out with a little algebra. Time is introduced as a parameter. When it is eliminated, the parabola appears. The interesting things are the range and the maximum altitude and how they depend on the initial velocity and angle of projection. The maximum range occurs for θ = 45°. This result can be demonstrated in a number of ways, including calculus. Its determination was one of Tartaglia's triumphs. In this special case, the range is 2p and the maximum height is p/2. The focus F is at the same level as the points of projection and impact. The maximum height is reached when θ = 90°, and is H = v2/2g, half the maximum range. This is not a popular procedure, since it is hard on the gunner, but high angles can be used to get over walls.
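The ideal-case formulas are easy to tabulate; the following sketch assumes launch from ground level with no air resistance, and the speed, angles, and value of g are example choices.

import math

def projectile(v, theta_deg, g=9.8):
    # Ideal projectile launched from ground level at speed v and angle theta.
    th = math.radians(theta_deg)
    rng = v ** 2 * math.sin(2 * th) / g         # range, maximal at 45 degrees
    height = (v * math.sin(th)) ** 2 / (2 * g)  # maximum height of that trajectory
    return rng, height

print(projectile(20.0, 45.0))   # range v^2/g ~ 40.8 m, height v^2/(4g) ~ 10.2 m
print(projectile(20.0, 90.0))   # range ~ 0, height v^2/(2g) ~ 20.4 m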
Uniformly accelerated motion is conveniently demonstrated by some device such as an Atwood's Machine, where the force of gravity is reduced and the accelerated mass increased to produce a smaller acceleration. If masses m and M > m are suspended from the ends of a cord around a pulley, the force on mass m + M is just (M - m) g, giving a = [(M - m)/(M + m)]g. The low velocities eliminate air resistance as a disturbing factor, but there is friction in the pulley to take into account. This friction can usually be made negligible. Time can be measured by high voltage pulses that cause sparks recording the position at equal intervals of time. Let the unit of time be the interval between sparks, and let the motion start at t = 0 at s = 0 (s will be the distance moved, = -y). The distances at the successive sparks will be: 0, a/2, 4a/2, 9a/2, 16a/2, 25a/2 and so on. The distance moved in each interval is found from the differences: a/2, 3a/2, 5a/2, 7a/2, 9a/2 and so on. Remarkably, these distances are proportional to the odd integers 1, 3, 5, 7, 9, .... The second differences, which are the additional distance covered in each interval, are constant and their value is a, the acceleration. This shows how to analyze the tape from an Atwood's machine or any similar apparatus. It also shows how to create a parabola by differences, which has practical applications.
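The difference analysis can be mimicked directly; the acceleration below is an arbitrary example and the spark interval is taken as the unit of time.

a = 0.4                                                        # arbitrary acceleration
positions = [0.5 * a * t * t for t in range(7)]                # 0, a/2, 4a/2, 9a/2, ...
first = [positions[i + 1] - positions[i] for i in range(6)]    # a/2, 3a/2, 5a/2, ... (odd multiples)
second = [first[i + 1] - first[i] for i in range(5)]           # constant, equal to a
print(positions)
print(first)
print(second)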
A jet of water usually describes a good parabola, since air resistance is small. By finding the maximum range of the jet, the initial velocity can be determined, and from that the rate of flow of water. To do this, you need the area of the jet at the point where the velocity has been determined. If there is no restriction at the point where the water issues from the hose, just assume that the area is the cross-sectional area of the hose. If there is some kind of aperture or nozzle, additional consideration is required. The coefficient of discharge from a sharp-edged orifice is about 0.62 (that is, Q = 0.62AV). Water jets will illustrate many properties of trajectories. Water issuing from a hole in the side of a tank describes a parabola. Its velocity as it issues is given by v = √(2gh), where h is the vertical distance from the hole to the surface of the still water in the tank. It is the same velocity the water would have if it had fallen that far.
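A sketch of that flow-rate estimate, assuming the jet is aimed at 45° from ground level with negligible air resistance; the hose diameter, the measured range, and the 0.62 orifice coefficient quoted above are the only inputs, and the numbers are examples.

import math

g = 9.8
max_range = 2.5                 # measured maximum horizontal reach of the jet, m (45 degree aim)
diameter = 0.019                # inside diameter of the hose, m
area = math.pi * (diameter / 2) ** 2

v = math.sqrt(g * max_range)    # from R_max = v^2/g for the ideal 45-degree trajectory
q_plain = area * v              # flow rate with no restriction at the outlet, m^3/s
q_orifice = 0.62 * area * v     # flow rate if a sharp-edged orifice sets the jet area
print(v, q_plain * 1000, q_orifice * 1000)   # velocity in m/s, flows in litres per second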
It is one of the wonders of mathematics that the orbits of a body about an inverse-square center of attraction are the conic sections studied by Apollonius. He would have been delighted at this connection, but it was only hinted at by Kepler, and logically demonstrated by Newton, nearly 2000 years afterwards. The Newtonian gravitational attraction between bodies of masses m and M is GMm/r², where G is the Newtonian gravitational constant, 6.670 × 10⁻⁸ cm³/(gm s²). This constant is not known to great accuracy because it is difficult to make astronomical measurements in terms of grams and centimeters. If the solar mass, the radius of the earth's orbit, and the mean solar day are taken as units, then the constant is much more accurately known as the Gaussian gravitational constant, G = k², where k = 0.01720209895.
The total energy of a body of mass m is E = mv²/2 - GMm/r, the sum of its kinetic energy and the potential energy of the attraction. If E < 0, as for the planets, then the body is trapped and its orbit is an ellipse with the sun in one focus. If a body is dawdling a long way from the sun, its total energy is close to E = 0. As it is drawn inwards, its kinetic energy increases at the expense of potential energy (the attraction of the sun). It swings around the sun, with maximum velocity at the point of closest approach, perihelion. If the perihelion distance is taken as q, then this velocity can be calculated from setting E = 0 in the equation above. It is just the velocity required at that point to escape from the sun. This behavior is typical of long-period comets, like Ikeya-Zhang (C/2002 C1). Short-period comets, like Halley's, move in elliptical orbits. Arguments with Jupiter seem to be responsible for most short-period comets, since they can transfer energy to the massive planet, lowering their total energy and becoming "captured." The orbit of a body with E = 0 turns out to be a parabola with vertex at perihelion and parameter p = 2q.
Bodies with E > 0, that approach the solar system from great distances, will have hyperbolic orbits, that deflect their paths by amounts that depend on how close their initial path would come to the sun if there were no attraction. Such bodies have not been reliably identified, so we apparently have few visitors from deep space. The parabolic orbit is at the boundary between hyperbolic and elliptical orbits, between positive and negative total energy.
Since the gravitational force is central, the angular momentum L = mvr is a constant. Since m is a constant, so is h = vr = r²(dθ/dt) = 2(dA/dt), where A is the area swept out by the radius vector. This is Kepler's Area Law, and determines how a planet moves in its orbit. The solution of Newton's Law gives p = 2q = (h/k)² for an orbit (k is the Gaussian gravitational constant). Hence, h = k√(2q). As we showed above, r can be expressed as r = q(1 + z²) where z = tan(θ/2). dz/dθ = sec²(θ/2)/2 = (1 + z²)/2. Now, dθ/dt = h/r² = k√(2q)/[q²(1 + z²)²]. We can solve this for dt as dt = q^(3/2)(1 + z²)² dθ/(k√2) = (√2 q^(3/2)/k)(1 + z²) dz. We have now expressed dt in terms of z and known constants. It is easy to integrate, and the result is t - T = (√2 q^(3/2)/k)(z + z³/3), where we have used the initial condition that z = 0 at t = T. z = 0 is perihelion, so T is the time of perihelion passage. By solving the cubic, we can find z, and therefore θ, at any time t. In fact, θ = 2 tan⁻¹ z and r = q(1 + z²). To determine the orbit in space, we need the two angles that specify the plane of the orbit in space (longitude of the ascending node and inclination), and the location of perihelion (argument of perihelion). For details, see any text on celestial mechanics, such as T. E. Sterne, An Introduction to Celestial Mechanics (New York: Interscience, 1960), sec. 3.2.
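The cubic mentioned above (Barker's equation) is easy to solve numerically. The Python sketch below assumes the relation t - T = (√2 q^(3/2)/k)(z + z³/3) as reconstructed here, with q in astronomical units and time in days; it is an illustration only, not a substitute for a celestial-mechanics text.

import math

k = 0.01720209895                   # Gaussian gravitational constant, AU^(3/2) per day

def parabolic_position(q, t_minus_T):
    # Solve Barker's equation for z = tan(theta/2), then return theta (degrees) and r (AU).
    A = k * t_minus_T / (math.sqrt(2.0) * q**1.5)   # so z + z**3/3 = A
    lo, hi = -1.0e6, 1.0e6                          # bisection; z + z^3/3 is monotonic in z
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + mid**3 / 3.0 < A:
            lo = mid
        else:
            hi = mid
    z = 0.5 * (lo + hi)
    theta = 2.0 * math.degrees(math.atan(z))        # true anomaly
    r = q * (1.0 + z * z)                           # radius vector
    return theta, r

print(parabolic_position(q=1.0, t_minus_T=30.0))    # e.g. 30 days after perihelion (assumed values)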
Parabolas are used in surveying for decorative curves, and for the vertical curves used to connect different rates of gradient. In either case, we are given two tangents and points on them that are to be connected by a parabolic arc. It is inconvenient to find the radius of curvature or length of a parabolic arc, which discourages its wider use. Parabolas are not used as transition curves in transportation design for this reason (there are much better transition curves).
Let's consider vertical curves first. Parabolas are used because they are easy to implement. Practically, they differ very little from circular arcs, and are much easier to compute in this case. The rate of gradient g is specified as so many feet per station of 100 feet (or the equivalent in metric measure, as, say millimeters per station of 10 meters). In U.S. engineering, a 1% grade corresponds to g = 1.000 ft per station, and is a typical gradient. The constant difference 1.000 is simply added to the elevation at the preceding station to find the elevation at the current station, corresponding to a constant velocity in the kinematic case.
Suppose the grade changes from g to g' at a certain station N. This will be the vertex of the curve. There is generally a specification of the maximum change in gradient per station, such as r = 0.1 ft in summits and r = 0.05 ft in sags. Therefore, the length of the curve in stations must be greater than (g' - g)/r. We take the next larger even number n and select it as the length of the parabolic arc, half on either side of the vertex. Then the actual change in gradient per station will be a = (g' - g)/n. We know that the changes are proportional to 1, 3, 5, ..., so we add increments a/2, 3a/2, 5a/2, ... to the gradients at each station. This means we start with a/2, then add a to each of the following differences up to the last. We will then have a/2 left over, which when added to the gradient at the last point of the curve will give the gradient g'. This is really very easy to do, especially if the curve begins and ends at even stations. There is usually little trouble in arranging this, and it makes the computations easy. Of course, it can be carried out in any unnecessarily complicated case simply by using the equation of the parabola.
An example is shown at the right, as it would appear in a field notebook. A 0.4% gradient ends at station 24, where a 1.0% gradient begins. We select n = 8, which gives a difference of 0.60/8 = 0.075 ft per station. The vertical curve begins at station 20 (PC), elevation 1020.75 ft, and ends at station 28 (PT), elevation 1026.35 ft. The differences are shown for each station, with the elevations computed successively. At station 28, the computed value checks with the actual value, which also checks all the intermediate elevations. The resulting parabolic arc is very close to a circular arc of radius 10 000/a ft, or about 133,000 ft in this case, with the same tangents, PC and PT.
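The bookkeeping of this example can be reproduced in a few lines of Python; the grades, curve length, and starting elevation (1020.75 ft at station 20) are the figures quoted above, and the values are carried at full precision rather than rounded as in a field book.

# Vertical curve by successive differences: grade g to g' over n stations
g, g_prime, n = 0.4, 1.0, 8          # ft per station, and length of curve in stations
a = (g_prime - g) / n                # change in gradient per station (0.075 ft here)
elev = 1020.75                       # elevation of the PC at station 20 (from the example)
increment = g + a / 2                # the first difference is g + a/2
for station in range(21, 29):        # stations 21 through 28
    elev += increment
    print(station, round(elev, 2))
    increment += a                   # add a to the difference for each following station
# The last printed elevation (station 28) checks against 1026.35 ft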
Horizontal parabolic arcs are generally laid out without instruments, since approximate curves are satisfactory. Parabolic curves can be visually pleasing, and are a relief from strict circular curves. You will need some way of marking points, such as chaining pins or small stakes, string and a measuring tape to lay them out in your garden.
The principle of laying out a parabola on the ground is shown in the diagram at the left. Given are points A and E at the ends of the arc, and the tangents meeting at the vertex V. A, B, C, D and E are points on the parabola. Stretch a string from A to E, then from its midpoint K set point C on the parabola halfway between K and V. KV is a diameter of the parabola, and all distances from the tangent AF to the parabola are laid out parallel to this direction. Axes x' and y' with origin at A are shown, and the equation of the parabola with respect to them is y'2 = 2p'x'. The offsets JB, VC, GD and FE are proportional to the squares of the distances AJ, AV, AG and AF. In fact, VC = EF/4 since AF = 2 AV. Any point on the parabola can be found by proportion, and the parabola laid out by offsets from the tangent.
Another method is perhaps somewhat easier in the field. After locating point C, a string is stretched on chord AC and point B located at a distance CV/4 from the midpoint along a diameter, which is also halfway between the chord and the tangent at A. The same thing can be done with chord CE, locating point D. This is called laying out the parabola by mid-ordinates, and can be repeated as often as necessary.
A parabolic arc can be used to round off a square corner. The result is not far from a circular arc, but may be esthetically more pleasing. Parabolas have many applications in design, where they soften the rigidity of circles and straight lines.
If you hang a flexible chain loosely between two supports, the curve formed by the chain looks like a parabola, but isn't. It is a catenary, a more glamorous curve which can be represented algebraically by hyperbolic functions [y = A (cosh kx - 1)]. In this case, the vertical load on the chain is uniform with respect to arc length. A whirling skipping rope is another example of a catenary.
The load on a suspension bridge is (approximately) uniform with respect to the horizontal distance. In this case, the curve is a parabola, as we shall demonstrate. Suppose, then, that the bridge is length L between the towers, with a uniform load of w lb/ft (or kgf/m), so that the total weight of the bridge is wL. The sketch at the right is a free-body diagram of the right half of the bridge, with lowest point of the suspension cable at O and the highest point at A. The sag is the distance s. The cable pulls on the tower with a total force T, made up from horizontal component H and vertical component V. Consideration of the total forces in x and y directions shows that T = H at O, and that V equals the weight of the half-span. At any point between O and A, similar relations obtain, with V decreasing steadily as O is approached and equal to the total weight to the left. The ratio of V to H is the slope of the cable, so if the curve is represented as y = f(x), dy/dx = V/H = wx/H.
Integrating this differential equation, we find that y = (w/2H)x² is the equation of the curve, which is, therefore, a parabola with parameter H/w. To complete the solution, we need the value of H. This can be obtained by taking moments about point A, since the net moment must be zero. Two forces are eliminated, and the remainder give Hs = (wL/2)(L/4), or H = wL²/8s. If we use point O instead, we find that (wL/2)(L/2) = Hs + (wL/2)(L/4), or H = wL²/8s. Another method is simply to substitute x = L/2 in the equation of the curve, which gives s = wL²/8H, the same result. Then, the equation of the curve is x² = (L²/4s)y. The parameter p = L²/8s. We know the shape of the curve if we know L and s; w does not matter. The tension in the cable at any point x is T = (wL/2)[(L/4s)² + (2x/L)²]^(1/2), proportional to w. The horizontal tension in the suspension cables is resisted by back stays, which must be securely restrained.
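These relations are easy to put into a small Python sketch; the span, sag, and load values below are arbitrary placeholders, not data for any particular bridge.

import math

def suspension_cable(L, s, w):
    # Parabolic suspension-cable relations: span L, sag s, uniform load w per unit length.
    H = w * L**2 / (8.0 * s)                  # horizontal tension, from moments about A
    def tension(x):                           # tension at horizontal distance x from the low point
        return 0.5 * w * L * math.sqrt((L / (4.0 * s))**2 + (2.0 * x / L)**2)
    return H, tension

H, T = suspension_cable(L=1000.0, s=100.0, w=5.0)   # assumed units: feet and lb/ft
print("H =", H)                    # the tension at the low point equals H
print("T at tower =", T(500.0))    # maximum tension occurs at x = L/2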
The main span of the Golden Gate Bridge at San Francisco has L = 4200 ft, and s = 526 ft, so L/4s = 2.0 (approximately). This makes the maximum T = (W/2)√5 = 1.12W, where W is the total weight of the bridge. At the middle, T = W. Therefore, a uniform cable is efficiently used. The sag ratio s/L for bridges was traditionally between 1/12 and 1/15 for large bridges, up to 1/10 for small bridges. The smaller values make a bridge more stable and less affected by oscillations of the roadway (one section can rise while a neighboring one can fall--a kind of buckling). The buckling tendency of a suspension bridge makes it unsuitable for concentrated live loads, and requires that the roadway be well stiffened. The handrails of early bridges were trussed for this purpose, but more drastic measures were sometimes required. The collapse of the Tacoma Narrows Bridge in 1940 is the most recent example of such failure, as Ellet's 1010 ft Ohio River Bridge at Wheeling (L/s = 13) was in the 1850's the first. This weakness was well appreciated in the 19th century. John Roebling built a remarkable suspension bridge at Niagara Falls in 1852-53 that carried a railway and a roadway on two decks of a very stiff structure, formed by a Pratt truss 18 ft deep. The span between towers was 821'-4", 14 times the sag. The total area of the four cables, each formed by 3640 #9 iron wires, was 60.4 in². There were additional stays from the towers, top and bottom, and 624 suspenders of 1-3/8" wire rope. This was the only successful railway suspension bridge.
Suspension bridges were introduced by James Finley of Pennsylvania in 1796, giving good service on common roads, with spans of 200 ft and under. They were of wood, with wrought-iron suspension chains. A notable one was the Wills Creek bridge in Allegheny County, Pennsylvania, with a 151'-6" clear span and L/s = 6, built in 1820. The chain was composed of charcoal iron links 7 ft to 10 ft long, looped and welded at the ends, and was supported on timber posts. Suspenders descended from each joint to the roadway. John A. Roebling introduced wire cables, which superseded the iron links used by Telford, Brunel and others in their famous bridges. Roebling built a canal aqueduct at Pittsburgh in 1845 that had seven 160 ft spans and a sag of 14.5 ft. It used steel wire rope made at his Saxonburg factory, established in 1841. Each span weighed 376 tons, with water. It is curious that when a heavy barge passes through a canal aqueduct, the load is not increased, since the boat displaces its weight of water, which is pushed off the bridge. Each of the 7" diameter cables had 1866 tons strength provided by the 53 in² cross-sectional area, so the actual tension of 549 tons gave a factor of safety of 3.4, adequate for canal aqueducts because of the lack of impulsive live loads.
If the sag is small, as for stretched wires, the parabola is a good approximation to the catenary, and accurate enough for engineering work. Suppose we have a pole line with 40 poles to the mile, or a span of 132 ft. If the sag is 1 ft, then L/4s = 33, and T = 33W, approximately. A #8 AWG copper wire weighs about 0.05 lb/ft, so W = 6.6 lb, and T = 218 lb. This corresponds to a tensile stress of 16,800 psi, which is comfortably less than the yield stress of 48,000 psi. Hence, the wire can be stretched to a tension of 218 lb, ensuring that both the sag and the tensile stress will be satisfactory. When the ratio of span to sag is greater than 10, the catenary or parabola can be drawn as a circle.
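The wire-sag check just described can be reproduced with a few lines of Python; the span, sag, wire weight (0.05 lb/ft), and yield stress (48,000 psi) are the figures quoted above, while the cross-sectional area of #8 AWG wire (about 0.013 in²) is an assumed round value supplied for the example.

L, s = 132.0, 1.0                 # span and sag, in feet
w = 0.05                          # wire weight, lb/ft (#8 AWG copper, from the text)
W = w * L                         # total weight of the span
T = (L / (4.0 * s)) * W           # tension, approximately the horizontal tension for a shallow parabola
area = 0.013                      # cross-sectional area of #8 AWG in square inches (assumed)
stress = T / area                 # tensile stress, psi
print(T, stress)                  # about 218 lb and roughly 17,000 psi, well below the 48,000 psi yield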
The parabolic suspension bridge can be changed into a parabolic arch by reversing all the forces (i.e., turning the bridge upside down). However, the cable is unstable under compression and immediately buckles. It must be replaced by a rib capable of resisting compression, such as a parabola of stone blocks or reinforced concrete. Buckling is still a danger, however, especially when live loads are present. The resultant force in the rib follows a line that must not be allowed to leave the rib. The dead load on an arch is not usually uniform, but least in the middle and increasing in the haunches, aiding the stability of the arch. Circular and elliptical arches are much more common than parabolic arches, and are to me more tasteful. A parabolic arch of small sag would be indistinguishable from a segmental circular arch, however. Dams are often arches turned on their sides, so we might look for parabolas even there.
The Elbonians in the Dilbert strip appear to wear paraboloidal hats.
If the directrix and focus are given, a parabola can be constructed using the focal property, as shown at the right. A line PQ perpendicular to the axis is drawn, then a circular arc of radius DQ with center at F intersects this line at a point P on the parabola. Point P is then equidistant from the directrix and the focus.
The span L and sag s are more commonly given than the focus and directrix. A convenient construction for this case is shown at the left. A rectangle is drawn as shown, and each side is divided into N equal parts. In the figure, N = 5. The line from the vertex V to a point with x = ns/N, y = L/2 is intersected with a horizontal line at height y = n(L/2)/N to determine points P on the parabola. It is easy to show that they really do lie on a parabola. By similar triangles, x / (ns/N) = y / (L/2), so x = (2s/NL)ny. But y = n(L/2)/N. Eliminating n, we find that x = (4s/L²)y², or y² = (L²/4s)x, a parabola with parameter p = L²/8s.
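The construction can be checked numerically. The short Python sketch below computes the intersection points for an assumed span and sag (the values of L, s, and N here are arbitrary, with N = 5 following the figure) and verifies that each point satisfies y² = (L²/4s)x.

L, s = 10.0, 2.0                        # assumed span and sag
N = 5                                   # number of divisions, as in the figure
for n in range(1, N + 1):
    y = n * (L / 2.0) / N               # horizontal line for division point n
    x = (n * s / N) * y / (L / 2.0)     # intersection with the line from V to (n*s/N, L/2)
    on_parabola = abs(y**2 - (L**2 / (4.0 * s)) * x) < 1e-9
    print(n, round(x, 4), round(y, 4), on_parabola)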
If c is the distance of the focus F from the vertex V, then c = p/2 = L²/16s, or (c/s) = (L/4s)² and cs = (L/4)². L/4 is said to be a mean proportional between c and s. If you draw a circle with c + s as a diameter, then L/4 is the length of a line perpendicular to this diameter from the point where c and s meet to the circle, the altitude of a right-angled triangle constructed in the semicircle.
It is possible to generate a parabola by paper-folding. In the diagram at the right, DD' is the directrix of the desired parabola, and F its focus. The vertex V is located halfway between F and the directrix. A line parallel to the axis, such as the one passing through point A, is a diameter of the parabola. Lift the corner D of the sheet of paper, and fold so that A falls on F. Crease the paper, and unfold it. The crease ff' is seen to make equal angles with the equal lines AP and PF, so it is a tangent to the parabola at point P. The intersection of the fold ff' with the diameter is a point on the parabola. This can be repeated for as many points as desired. The parabola is generated as a direct consequence of its focal property in this construction.
A parabola can be generated on a drawing board with a T-square and string. Press one pin in the board to represent the focus. Hold the T-square vertically against this pin and place a second pin in the T-square a distance d above the first pin. Tie a length d + 2p of string between the two pins, where 2p will be the semilatus rectum of the parabola. Put the pencil at the bottom of the loop made by the string when the T-square is against the focus pin. This is the vertex of the parabola. Keeping the pencil against the edge of the T-square and the string, move the T-square to the right to generate the parabola. That a parabola is indeed generated can be seen from the focus-directrix definition, since the distance of the pencil from the focus will always be equal to the distance of the pencil directly above the directrix. An incorrect description of this construction will be found in the References.
Consider a cylindrical container of water rotating about the axis of the cylinder with angular velocity ω. Suppose the water is rotating at the same rate as the container, which will eventually be the case if the rotation is uniform. Each element dm of the water is subject to the force of gravity g dm, acting downward, and to the centrifugal force (v²/x) dm = ω²x dm, where x is the horizontal distance from the axis. Since both forces are proportional to dm, it is as if each element were subject to a gravitational acceleration g' that is the vector sum of the two accelerations, as shown in the figure at the left. The surface of the water will be perpendicular to this direction, with a tangent line as shown. Therefore, we have dy/dx = ω²x/g, which very easily integrates to y = (ω²/2g)x², or x² = 2(g/ω²)y, a parabola with parameter g/ω².
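A brief Python sketch of the free-surface relation; the spin rate and container radius are arbitrary example values, not measurements.

import math
g = 9.81                                  # m/s^2
omega = 2.0 * math.pi * 1.0               # assumed spin rate: 1 revolution per second
R = 0.10                                  # assumed container radius, metres
rim_rise = omega**2 * R**2 / (2.0 * g)    # y = (omega^2/2g) x^2 evaluated at x = R
print("height of rim above vertex:", rim_rise, "m")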
When the container is at rest, the water surface is plane. As it begins to rotate, the surface is depressed in the center and rises at the edges. At some rotational speed, the vertex of the paraboloid will reach the bottom of the container, and the water will have risen to twice its original height. With a further increase in angular velocity, the center of the bottom will be dry, and this dry area will expand until finally most of the water is pressed against the sides. The cylinder does not have to be circular--the water surface will be a paraboloid in any case.
The parabola has some interesting mensuration properties, and is a good curve to practice calculus upon. Earlier, we found that the area of a parabolic segment is 2/3 the area of the parallelogram containing it. The figure shows the centroid C of a parabolic segment, with its coordinates x bar and y bar. To find x bar, integrate x dA = x (h - y) dx from 0 to a, using the equation of the curve, and divide by the area. This integral is ph²/2, so x bar = 3a/8. Similarly, y dA integrated from 0 to h gives (2/5)ah², so that y bar = 3h/5. The volume of the paraboloid of height h and radius of upper face a is found from Pappus' theorem to be 2π (x bar) A = πha²/2, exactly half of the volume of the cylinder. The volume of the paraboloid is then equal to the volume of what is not paraboloid, which is where the water is when the cylinder is rotating. This is the basis for the statement that the water rises to twice its original level when the bottom just becomes dry. From this result, it is easy to find the capacity of a paraboloidal bucket. It is harder to find a paraboloidal bucket. A rotating pool of mercury would provide a perfect paraboloidal reflector for a telescope.
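These results are easy to confirm by crude numerical integration. The Python sketch below uses an arbitrary half-width a and height h (assumed values) and checks x bar = 3a/8, y bar = 3h/5, and the Pappus volume of half the cylinder.

import math
a, h = 2.0, 5.0                         # assumed half-width and height of the segment
n = 200000
dx = a / n
A = xA = yA = 0.0
for i in range(n):
    x = (i + 0.5) * dx
    y = h * x * x / (a * a)             # the parabola through the origin and (a, h)
    A += (h - y) * dx                   # area of a vertical strip between the curve and the line y = h
    xA += x * (h - y) * dx              # first moment about the y axis
    yA += 0.5 * (h + y) * (h - y) * dx  # first moment about the x axis (strip centroid at (h + y)/2)
print(xA / A, 3 * a / 8)                # x bar, compared with 3a/8
print(yA / A, 3 * h / 5)                # y bar, compared with 3h/5
print(2 * math.pi * (xA / A) * A, math.pi * h * a * a / 2)   # Pappus volume vs half the cylinder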
A Lissajous figure is formed when sine waves of different frequencies are supplied to the X and Y axes of an oscilloscope. If the frequencies are the same, we observe an ellipse that becomes a straight line if the phases are equal, and a circle if the phases differ by 90°. If the frequencies differ slightly, the figure alters constantly, going through all its shapes from straight lines to circles as the phase slowly varies. Specially attractive figures result when the two frequencies are in a ratio of small integers. They also move when the frequencies are not exactly equal.
Let a signal sin ωt be displayed on the X axis, and a signal cos 2ωt on the Y axis. Then, y = cos 2ωt = 1 - 2 sin² ωt = 1 - 2x², or x² = (1 - y)/2, which is a parabola! If the frequencies differ slightly, the figure will rotate and will appear to be a saddle-shaped figure, with the parabola appearing at intervals.
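A quick Python check of this signal pair, confirming that the sampled points fall on the parabola x² = (1 - y)/2 (using the identity cos 2θ = 1 - 2 sin²θ); the frequency and number of samples are arbitrary.

import math
omega = 2.0 * math.pi                   # assumed angular frequency, 1 Hz
for i in range(9):
    t = i / 8.0                         # sample times over one period
    x = math.sin(omega * t)             # signal on the X axis
    y = math.cos(2.0 * omega * t)       # signal on the Y axis
    print(round(x, 3), round(y, 3), abs(x * x - (1.0 - y) / 2.0) < 1e-12)  # on the parabola?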
A viscous fluid flowing in a cylindrical pipe tends to stick to the walls, so the fluid velocity is smallest there, and largest at the center of the pipe. If the velocity is plotted as a function of position, the surface that results is a paraboloid. This is true only when the velocity is small enough that the flow is laminar, or non-turbulent. For more details, see any text on Fluid Mechanics.
The Gateway Arch on the riverbank in St. Louis was architect Eero Saarinen's (1910-1961) first independent commission. He won a 1947 design competition, and the arch was finally completed 28 October 1965, with 75% federal money, and opened in 1967. Saarinen was accused of copying a proposed arch of Mussolini's, but this was a circular free arch that probably could not have stood if it had been built, since there was no weight on the haunches. It is in an area where the history of St. Louis has been ruthlessly swept away (in the name of the Jefferson National Expansion Memorial), and replaced by sterility, incongruity and vulgarity (in the form of gambling boats). Riverfront traffic disappeared when the railways came, and local industry later evaporated. St. Louis never became the gateway that was wished; that distinction was reserved for Kansas City. The Old Court House cowers between tall, cheap skyscrapers just to the west, and the Eads Bridge is a short distance north, both happily surviving. The nearby Busch Stadium hosts howling masses, but there are no gladiatorial combats, just ball games. Nevertheless, the 630-foot-high structure is impressive, though totally useless, an amazing spectacle devoid of meaning, like a tale told by an idiot. It shares this nature with the Eiffel Tower in Paris, equally useless and equally sublime. On the other hand, it is a spiritual expression far better than what is seen in most American cities. Visitors may ride to the top in capsules in each leg to enjoy an elevated view of the area through small windows, but without snack bar or toilets. There are also elevators and a flight of 1760 steps for the staff. It must support its own weight and resist wind loads, which it apparently does quite well. The deflection in a 150 mph wind is estimated at 18". The curve is an approximate catenary, selected for esthetic rather than practical reasons. A TV documentary says that the curve was narrowed near the top, so that it would appear to "soar." That is, the weight per unit length is not constant, but smaller at the top. This would make the curve flatter. The arch is an equilateral triangle in cross-section, with sides from 54 ft at the base to 17 ft at the top, and 630 ft apart at the base. The outer skin is type 304 stainless steel, and the lower parts of the legs are strengthened with reinforced concrete. For further information, photographs and tickets, consult St. Louis Arch. It is taller than the London Eye (450 ft), the Washington Monument (555 ft) or the San Jacinto Monument (570 ft), but shorter than the Eiffel Tower (984 ft). Although it is a catenary, perhaps we can regard it as a monument to all Parabolas.
There is a Wikipedia article "Parabola" that is worth reading. A link to www.maverickexperiments.com/, where a bad explanation of the construction using a T-square and string will be found, is included.
Composed by J. B. Calvert
Created 3 May 2002
Last revised 18 July 2011 | http://mysite.du.edu/~jcalvert/math/parabola.htm | 13 |
56 | In physics, and more specifically kinematics, acceleration is the rate of change of velocity with respect to time. Because velocity is a vector, it can change in two ways: a change in magnitude and/or a change in direction.
In one dimension, i.e. a line, acceleration is the rate at which something speeds up or slows down. However, as a vector quantity, acceleration is also the rate at which direction changes. Acceleration has the dimensions L T⁻².
In SI units, acceleration is measured in metres per second squared (m/s2).
In common speech, the term acceleration commonly is used for an increase in speed (the magnitude of velocity); a decrease in speed is called deceleration. In physics, a change in the direction of velocity also is an acceleration: for rotary motion, the change in direction of velocity results in centripetal (toward the center) acceleration; whereas the rate of change of speed is a tangential acceleration.
In classical mechanics, for a body with constant mass, the acceleration of the body is proportional to the resultant (total) force acting on it (Newton's second law):
F = ma, or equivalently a = F/m,
where F is the resultant force acting on the body, m is the mass of the body, and a is its acceleration.
Figure 1. Acceleration is the rate of change of velocity. At any point on a trajectory, the magnitude of the acceleration is given by the rate of change of velocity in both magnitude and direction at that point. The true acceleration at time t is found in the limit as the time interval Δt → 0.
Average and instantaneous acceleration
Average acceleration over an interval of time is the change in velocity divided by the duration of the interval, Δv/Δt. Instantaneous acceleration is the limit of this ratio as the interval shrinks to zero; it is the acceleration at a specific point in time.
Tangential and centripetal acceleration
The velocity of a particle moving on a curved path as a function of time can be written as:
v(t) = v(t) u_t
with v(t) equal to the speed of travel along the path, and u_t a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time.
Taking into account both the changing speed v(t) and the changing direction of ut, the acceleration of a particle moving on a curved path on a planar surface can be written using the chain rule of differentiation as:
a = (dv/dt) u_t - (v²/R) u_n
where u_n is the unit (outward) normal vector to the particle's trajectory, and R is its instantaneous radius of curvature based upon the osculating circle at time t. These components are called the tangential acceleration a_t and the radial acceleration, respectively. The negative of the radial acceleration is the centripetal acceleration a_c, which points inward, toward the center of curvature. (See Figure 2.)
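As a rough illustration of this decomposition, the Python sketch below samples a circular trajectory, estimates velocity and acceleration by finite differences, and splits the acceleration into tangential and centripetal parts; the radius, angular rate, and time step are arbitrary assumptions.

import math

R, w, dt = 2.0, 1.5, 1e-4                   # assumed radius, angular rate, and time step
def pos(t):                                 # circular motion: speed v = R*w, centripetal a = v^2/R
    return (R * math.cos(w * t), R * math.sin(w * t))

t = 0.3
(x0, y0), (x1, y1), (x2, y2) = pos(t - dt), pos(t), pos(t + dt)
vx, vy = (x2 - x0) / (2 * dt), (y2 - y0) / (2 * dt)               # central-difference velocity
ax, ay = (x2 - 2 * x1 + x0) / dt**2, (y2 - 2 * y1 + y0) / dt**2   # second-difference acceleration
v = math.hypot(vx, vy)
a_t = (ax * vx + ay * vy) / v               # component along the unit tangent
a_c = math.hypot(ax - a_t * vx / v, ay - a_t * vy / v)            # remaining (centripetal) part
print(a_t, a_c, v**2 / R)                   # a_t is near zero, a_c is near v^2/R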
Extension of this approach to three-dimensional space curves that cannot be contained on a planar surface leads to the Frenet-Serret formulas.
Relation to relativity
After completing his theory of special relativity, Albert Einstein realized that an object undergoing constant proper acceleration actually feels that acceleration as a force, so that, for example, a car's acceleration forwards would result in the driver feeling a slight pressure between himself and his seat. In the case of gravity, which Einstein concluded is not actually a force, this is not the case; acceleration due to gravity is not felt by an object in free-fall. This was the basis for his development of general relativity, a relativistic theory of gravity.
Note: This article uses material from the Wikipedia article Acceleration (http://en.wikipedia.org/w/index.php?title=Acceleration&oldid=354416256) that was accessed on April 9, 2010. The Author(s) and Topic Editor(s) associated with this article may have significantly modified the content derived from Wikipedia with original content or with content drawn from other sources. All content from Wikipedia has been reviewed and approved by those Author(s) and Topic Editor(s), and is subject to the same peer review process as other content in the EoE. The current version of the Wikipedia article may differ from the version that existed on the date of access. See the EoE’s Policy on the Use of Content from Wikipedia for more information. | http://www.eoearth.org/article/Acceleration | 13
67 | Variables are places in computer memory for storing changing information. We use variables to keep records for our program, such as whether the user has pushed a button or not, how many times they've pushed the button, how many times the microcontroller has flashed a light, how much time has passed since the last button push, and so forth.
Think of computer memory as a bunch of coffee cups that you can put a label on the outside and store things on the inside. Variables allow you to put your own names on the outside of the coffee cup and put things you want to remember inside of it. You can then use the stuff inside the cups by referring to them by name in your if statements and loops can then find the coffee cups by name and take different actions based on what's there. The real power comes not from the fact that you can place things in variables, but that you can replace them or vary them easily.
Before you use a variable in most languages (BASIC, C, Wiring, etc.) you need to give it a name. This is generally done at the very beginning of your program or at the beginning of a routine. This is called declaring the variable.
Use a name that describes what you're using the variable to remember, because it will make your code much more readable. Adding "Var" to the ends of variable names makes them easy to identify when reading the code. You can use any name you want for your variable as long as it does not start with a number, has no spaces, and isn't a keyword. When you try to run your program, the compiler will let you know if your variable name isn't allowed.
To store a value in a variable you put the name of the variable on the left side of an equation and the value you want to remember on the right, like so:
fooVar = 12
sensorVar = 250
switchVar = 125
Every variable has a data type. The data type of a variable determines how much memory the microcontroller needs for the variable, and how it will use the data stored in the variable. You declare both the name and the data type of the variable before you use it, like so:
char fooVar;
int barVar;
long timeVar;
byteVar var byte
switchVar var bit
bigVar var word
dim byteVar as byte
dim bigVar as integer
dim fractionVar as single
Here are the data types for the PIC and BX-24, broken down by the amount of memory each needs:
To understand how variables are stored in memory, it's useful to think about what memory is. A computer's memory is basically a matrix of switches, laid out in a regular grid, not unlike the switches you see on the back of a lot of electronic gear:
Each switch represents the smallest unit of memory, a bit (usually, we think in terms of bytes when talking about memory. a byte is simply eight bits). If the switch is on, the bit's value is 1. If it's off, the value is 0. Each bit has an address in the grid. We can envision a grid that represents that memory like this:
Note: programmers like to start with 0 as the first number. So often, as in these arrays, the first element will be numbered 0 instead of 1.
Note that this grid is arranged in rows of 8 bits; each row represents a byte of memory in this illustration.
When we declare a variable, the microcontroller picks the next available address and sets aside as many bits are needed for the data type we declare. If we declare a byte, for example, it sets aside 8 bits. An integer, 16 bits. a string, one byte (eight bits) for every character of the string.
Let's say we made the following variable declarations at the top of our program:
byte thisVar;
int biggerVar;
byte anotherVar;
long reallyBigVar;
The microcontroller might assign memory space for those variables something like this (the bits set aside for each variable are color-coded):
On the PIC, we only have bit-sized, byte-sized, and word-sized variables, so we might envision a grid like this:
switch1Var var bit
switch2Var var bit
switch3Var var bit
thisVar var byte
thatVar var word
Note that the space after the bit variables isn't filled by the next byte variable. The byte variable starts at the next byte in memory.
For more on how memory is arranged in the PIC, see the notes on special function memory registers.
When you refer to the variable, the microcontroller checks to see what's in those bits, and gives them to you.
So if a bit can be only 0 or 1, how do we get values greater than 1?
When we count normally, we count in groups of ten. This is because we have ten fingers. So to represent two groups of ten, we write "20", meaning "2 tens and 0 ones". This counting system is called base ten, or decimal notation. Each digit place in base ten represents a power of ten: 100 is 10², 1000 is 10³, etc.
Now, imagine we had only two fingers. We might count in groups of two. We'll call this base two, or binary notation. So two, for which we write "2" in base ten, would be "10" in base two, meaning one group of two and 0 ones. Each digit place in base two represents a power of two: 100 is 2², or 4 in base ten, 1000 is 2³, or 8 in base ten, and so forth.
Any number we represent in decimal notation can be converted into binary notation by simply regrouping it in groups of two. Once we've got the number in binary form, we can store it in computer memory, letting each binary digit fill a bit of memory. So if the variable myVar from above were equal to 238 (in decimal notation), it would be 11101110 in binary notation. The bits in memory used to store myVar would look like this:
There are three notation systems used in the BASICs to represent numbers: binary, decimal, and hexadecimal (base 16). In hexadecimal notation, the letters A through F represent the decimal numbers 10 through 15. Furthermore, there is a system of notation called ASCII, which stands for American Standard Code for Information Interchange, which represents most alphanumeric characters from the romanized alphabet as number values. We'll deal with ASCII when we get to serial communication. For more, see this online table representing the decimal numbers 0 to 255 in decimal, binary, hexadecimal, and ASCII. While we'll work mostly in decimal notation, there are times when it's more convenient to represent numbers in systems other than base 10.
To represent a number in binary notation on the BX-24, use this notation:
myVar = bx1010_0011 ' 163 in decimal
On the PIC, the notation is as follows:
myVar = %10100011
In hexadecimal, use this notation:
myVar = &HA3 ' 163 in decimal
On the PIC:
myVar = $A3
In Wiring syntax, used by Wiring and Arduino, we only have decimal and hexadecimal formats. To write a number in hex format, write it like this:
myVar = 0xA3;
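As a quick cross-check of these notations (Python is used here purely as a desk calculator, not as one of the microcontroller languages discussed on this page):

print(0b10100011, 0xA3)      # both print 163: the same value written in binary and in hex
print(bin(238), hex(163))    # 238 is 0b11101110 (the myVar example above); 163 is 0xa3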
Variables can be local to a particular subroutine if they are declared in that subroutine. Local variables can't be used by subroutines outside the one that declares them, and the memory space allotted to them is released and the value lost when the subroutine ends. Variables can also be global to a module, in which case they are declared at the beginning of the module, outside all subroutines. Global variables are accessible to all subroutines in a module, and their value is maintained for the duration of the program. Usually you use global variables for values that will need to be kept in memory for future use by other subroutines, and local variables as a "scratch pad" to store values while calculating within a subroutine.
PicBasic Pro variables are all global in scope.
Doing Arithmetic With Variables
There are certain symbols you'll need to do math in a program. They're pretty much the same symbols as you use in other programming languages: + for addition, - for subtraction, * for multiplication, and / for division.
Data Type Conversions
On the PIC, when you're adding, subtracting, multiplying, or dividing values stored in variables, you can mix data types, as long as whatever variable holds the result is big enough. So you need to use data types that can fit the results you expect. For example:
sensorVar var byte
counterVar var byte
bigVar var word
bigVar = 500
sensorVar = 200
counterVar = 70
sensorVar = sensorVar + counterVar
The result is 270, which is more than can fit in a byte. What happens in this case is that the variable rolls over. If we put 256 in the variable, it reads as 0, 257 reads as 1, and so forth. Counting this way, 270 would read as 14.
sensorVar = bigVar - 10
The result is 490. This doesn't fit in a byte variable, so the result will roll over. The result would read as 234.
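The rollover can be imitated on a desktop with modulo arithmetic; this Python fragment is only an analogy for what the 8-bit variable does, not microcontroller code.

for value in (270, 490):
    print(value, "stored in a byte reads as", value % 256)   # 270 -> 14, 490 -> 234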
In BX-Basic, when you're adding, subtracting, multiplying, or dividing values stored in variables, you must make sure that the variables you're operating on are of the same type. For example:
dim someVar as byte
dim anotherVar as byte
dim smallVar as byte
dim bigVar as integer
dim yetAnotherVar as integer

Sub main()
  call delay(0.5)                  ' start program with a half-second delay
  someVar = anotherVar + smallVar  ' allowed, because all three
                                   ' variables are data type byte
  somevar = anotherVar * 3         ' allowed, because the BX-24
                                   ' interprets 3 as a byte
  someVar = anotherVar + bigVar    ' not allowed, because
                                   ' bigVar is an integer, while
                                   ' anotherVar and someVar are bytes
  someVar = anotherVar - 568       ' not allowed, because 568
                                   ' is larger than a byte
  yetAnotherVar = bigvar + 568     ' allowed, because 568 will fit
                                   ' in an integer variable
  yetAnotherVar = bigvar * someVar ' not allowed, because someVar
                                   ' is a byte and the other two
                                   ' variables are integers
end sub
There are conversion functions to convert data types. For example:
bigVar = cInt(someVar)  ' sets bigVar = an integer
                        ' of equal value to someVar's value
debug.print cStr(65)    ' converts value to ASCII bytes,
                        ' the character "6" and the character "5"
See the BX-24 system library for all the conversion functions.
In addition to variables, every programming language also includes constants, which are simply variables that don't change. They're a useful way to label and change numbers that get used repeatedly within your program. For example, in chapter 6 you'll see an example program that runs a servo motor. Servo motors have a minimum and maximum pulse width that doesn't change, although each servo's minimum and maximum might be somewhat different. Rather than change every occurrence of the minimum and maximum numbers in the program, we make them constants, so we only have to change the number in one place.
In PicBasic Pro, constants are declared at the beginning of your program, like so:
MinPulse con 100
Then you can refer to them in the program just like you do variables, like so:
PulseWidth = minPulse + angleVar
In BX Basic, we also have to declare the type of the constant, like so:
Const minPulse as single = 0.001
You don't have to use constants in your programs, but they're handy to know about, and you will encounter them in other people's programs.
In Wiring/Arduino, you declare constants using a preprocessor directive called define:
#define LEDpin 3 #define sensorMax 253
Note that defines are always preceded by a #, and don't have a semicolon at the end of the line. Defines always come at the beginning of the program. They actually work a bit like aliases. What happens is that you define a number as a name, and before compiling, the compiler checks for all occurrences of that name in the program and replaces it with the number. This way, defines don't take up any memory, but you get all the convenience of a named constant. | http://tigoe.net/pcomp/variables.shtml | 13
54 | Stars can turn into a variety of things as they collapse, including white dwarfs, neutron stars... A black hole is suggested to be the end product of a large star that is collapsing into itself. The gravitational acceleration at the star's surface is given by the formula
g = G mB / r²,
where mB is the mass of the black hole (or collapsing star). Because of this, as the radius (r) of the star decreases, the gravitational field at its surface increases. This causes a chain reaction in which a greater force is put on the star to collapse, so it decreases in size even further, and the gravity at its surface increases. It is suggested that a star would have to have a mass equivalent to three times that of our sun to become a black hole. If, though, a star with a mass equivalent to the Earth's were to collapse into a black hole, the space that all of the matter would take up would have a radius of less than 9 mm. It is easy to see that the density of this would be huge; this demonstrates why it would have such noticeable effects. The gravitational field created would have important effects on its surrounding environment, producing signs for astronomers to observe when looking for a black hole.
Einstein's theory of general relativity suggests that close to the star itself, strong distortions occur in the structure of space. He found that the effects of acceleration produced by changing motion are equivalent to those produced by gravitational fields. From this we deduce that in a gravitational field, space itself is curved such that moving particles follow the same path as they would if they were being accelerated. This applies to photons of light as well as to any other particle.
The effect of this gravitational field is an enhancement of the curvature of space. A photon of light projected from the surface of the star in any direction other than along the normal becomes deflected, so that its path makes a larger angle with the normal than the angle at which it was projected. Similarly, light that 'grazes' the surface of a body with a strong gravitational field is deflected in the same way. The stronger the gravitational field is (i.e. the denser the star is), the greater the angle of deflection and the greater the velocity a projected wave needs in order to escape the field. As the density increases, the field's pull becomes so great that a photon of light directed horizontally is deflected into orbit around the star.
The star's light may be projected from the surface of the star to escape its gravitational field. When the projection's angle is equal to that of the normal, the light is projected radially, escaping deflection, yet when the light is projected at any other angle, it is deflected away from the normal. The stronger the gravitational field, the greater the deflection, and the smaller the angle becomes at which light can be projected away from the surface without being pulled into orbit. Thus as the star becomes more dense, its gravitational field strength increases, until eventually the angle at which light can escape from the star is 0 degrees. As light has the greatest velocity of any known thing, and is said to travel at the natural speed limit (approx. 300,000,000 m per second), as soon as light cannot escape from the boundary of the decaying star, neither can anything else. At this point light both from the star itself and from other sources entering the field cannot escape, thus a black hole is born.
Black holes were first understood by Karl Schwarzschild well over 60 years ago. He proposed the properties that he expected the outer limit of the black hole to exhibit. He gave his name to the radius within which a star has a strong enough gravitational field to trap photons of light. This Schwarzschild radius, as it became known, depends only on the mass of the star in question, and is proportional to it. For instance, if a star had a mass of 5 times that of our sun, its Schwarzschild radius would be 15 km. As soon as the collapsing star has shrunk beyond its Schwarzschild radius, it is said to have passed its event horizon, as no outside observations can be made into it. The photon-sphere, however, is the point where light is forced to orbit the star, but is not pulled into the event horizon.
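The quoted figures are consistent with the standard formula r_s = 2GM/c², which is not written out in the text above; here is a short Python check with rounded constants.

G = 6.674e-11                                # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                                  # speed of light, m/s
def schwarzschild_radius(mass_kg):
    return 2.0 * G * mass_kg / c**2
print(schwarzschild_radius(5.972e24))        # Earth's mass: about 0.009 m, i.e. 9 mm
print(schwarzschild_radius(5 * 1.989e30))    # 5 solar masses: about 1.5e4 m, i.e. 15 km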
The point at which the star's mass is centred is called the singularity. This in his equations lay at the very centre of the black hole, and is considered the centre of it's gravitational field. The singularity is infinitesimally small because mathematically it is found to be a single point.
Penrose diagrams of a Schwarzschild black hole show an additional singularity. This is said to be of a white hole, which supposedly demonstrates the opposite feature to that of a black hole. Instead of the huge attractive force of gravity, it actually repels matter. It is suggested that it is the missing link between our universe and another, releasing the matter that for us is lost into the black hole. As this phenomenon has never been observed, the theory is mainly by-passed.
As all stars are known to rotate, it is almost impossible that we would be able to find an example of the Schwarzschild black hole in nature. The relevant equations were only discovered in 1963 by a New Zealand mathematician named Roy P. Kerr. He found them accidentally whilst working on another problem, and found that although the spinning black hole held resemblances to the Schwarzschild model, there were also distinct differences. In this new type of black hole, a body that enters it would be forced to move in a spinning motion down towards the singularity, like water in a plughole. The limit at which light can still escape this dragging force is known as the stationary limit. The momentum of the spin decreases the size of the event horizon, the region between this and the stationary limit being the ergosphere. A body within the ergosphere would have to travel faster than the speed of light to avoid being dragged around by the rotation, but because the ergosphere lies outside the event horizon, a body there can still escape. The ergosphere is thought to produce an oval shape, being in contact with the poles of the event horizon, while on the equator having double the diameter of the event horizon.
It is also mathematically possible that the speed of the spin of a black hole could cause the shrinking of the event horizon such that it disappears, and the singularity is left on view. This would cause a naked singularity. This would not display the usual gravitational traits of a black hole, and would be possible to blunder into without any previous warning. It also carries the implication that we could potentially travel freely in and out of the singularity, as the event horizon is no longer present. If this were the case, by going into the orbit of a naked singularity, time travel into the past could occur. In general this is conceived to be an impossible situation, as black hole properties are assumed to be determined by the size of the mass alone, the charge and spin having little effect.
These Kerr black holes would have a singularity that takes the form of a ring. Its singularity is not space-like, as demonstrated in the other model, but time-like instead. Only objects that enter the event horizon on its equator would be subject to destruction via the singularity. The interior of the singularity is an area of negative space-time, implying the reversal of the force of gravity at this point. Another possible concept is that of objects within this plane having a negative radius, but no-one has yet been able to fathom out this idea rationally.
It has also been suggested that other black holes were created when the Big Bang occurred. These black holes were tiny, some as light as 0.0000001 kg. We know that the density to which matter must be compressed to form a black hole varies inversely with the square of its mass, so black holes of this minuscule size must have had enormous pressures applied to create them. These pressures were only thought to exist during the creation of the universe as we know it. There is no evidence of their existence, except for in the laws of quantum mechanics. It has been put forward by Hawking that these black holes could have evaporated.
It is known that particles can be created together with their anti-particles. When such a pair meets again, they annihilate each other, and energy is created. Similarly, energy can be converted into pairs of particles. This is known as pair production, and only works because mass and energy are equivalent. Taking this idea further, matter can be created from nothing for very brief periods of time. As the creation and annihilation occur almost simultaneously, this does not violate the conservation laws. If this occurred near to a black hole, and half of the pair were to fall into it, the inevitable annihilation could not occur. The other half of the pair would be able to escape; energy is created. This energy has to have a source, as energy cannot be created or destroyed. The source of such energy is the black hole itself. As it is robbed of energy, it is also robbed of its equivalent, mass, thus the black hole evaporates due to pair production. This event would only have a noticeable consequence on the very smallest of the black holes. If this process did occur, we would expect to see occasional bursts of gamma radiation being emitted from these mini black holes.
As we obviously cannot see black holes, the only things we can do to ascertain their existence are apply theoretical knowledge, and observe the things that we suspect they cause. Detection of black holes is most likely to occur when we find an invisible object that has a mass which could only possibly demonstrate one. Even then we are working on the assumption that white dwarfs and neutron stars are unable to survive at such a mass. One way of calculating the mass of an object we cannot see (thus cannot gauge luminosity or magnitude) is to follow the orbit of a companion star around it. If this star is found to be part of a binary system, with an invisible partner, then the mass of the invisible object can be calculated via spectral and visual analysis of the companion. If this mass is found to be in excess of 3 solar masses, then a black hole is presumed to have been found. Another way is by examining the matter that they pull towards themselves. This matter forms an accretion disk, which due to the forces acting upon it becomes hot enough to emit X-rays. These in turn can be detected, and provide us with information on the fields acting upon them.
A black hole is said to encompass the four dimensions of space and time, thus as a body approaches the event horizon, time is distorted by the acceleration and by the strength of the field. To an outside observer, the body's progress would appear to slow gradually, and the wavelengths of its light, although still travelling at the same velocity, are red-shifted. As the body becomes even closer to the event horizon, time appears to stop. Strong tidal forces would cause the body to be ripped apart. Upon reaching the event horizon, the body would never be seen again, and is thought by scientists to race irreversibly towards the singularity, and become infinitely more dense.
The bizarre nature of Einstein's equations suggests that black holes should theoretically lead to parallel universes, i.e. ones that are separate from our own. There may be many different ones of these, each slightly different from the one we are presently existing in. This however is still very much only a hypothetical situation.
Although black holes have never been seen as such, their effect on the surroundings is clear to see. Thus by a principle called Occam's Razor, i.e. that 'the explanation of any phenomenon that requires the fewest arbitrary assumptions is the most likely to be the correct one', we assume that black holes exist and continue to make their own individual mark in the universe we live in.
By Anne-Marie Cumberlidge, Keele University- 1997.
STIS image from HST public information. | http://www.astro.keele.ac.uk/workx/blackholes/index3.html | 13 |
105 | Water on Mars
Water on Mars exists almost exclusively as water ice. The Martian polar ice caps consist primarily of water ice, and further ice is contained in Martian surface rocks at more temperate latitudes (permafrost). A small amount of water vapor is present in the atmosphere. There are no bodies of liquid water on the Martian surface.
Current conditions on the planet surface do not support the long-term existence of liquid water. The average atmospheric pressure and temperature are far too low, leading to immediate freezing and resulting sublimation. Despite this, research suggests that in the past there was liquid water flowing on the surface, creating large areas similar to Earth's oceans. According to Steve Squyres, Principal investigator of the Mars Exploration Rover Missions (MER): "The idea [of liquid water on Mars has] been resolved. It's been resolved by Spirit, it's been resolved by Opportunity, it's been resolved by Curiosity, it's been amply resolved from orbit as well."
There are a number of direct and indirect proofs of water's presence either on or under the surface, e.g. stream beds, polar caps, spectroscopic measurement, eroded craters or minerals directly connected to the existence of liquid water (such as goethite), grey, crystalline hematite, phyllosilicates, opal, and sulfate. With the improved cameras on advanced Mars orbiters such as Viking, Mars Odyssey, Mars Global Surveyor, Mars Express, and the Mars Reconnaissance Orbiter pictures of ancient lakes, ancient river valleys, and widespread glaciation have accumulated. Besides the visual confirmation of water from a huge collection of images, an orbiting Gamma Ray Spectrometer found ice just under the surface of much of the planet. Also, radar studies discovered pure ice in formations that were thought to be glaciers. The Phoenix lander exposed ice as it landed, watched chunks of ice disappear, detected snow falling, and even saw drops of liquid water.
Today, it is generally believed that Mars had abundant water very early in its history during which snow and rain fell on the planet and created rivers, lakes, and possibly oceans. Large clay deposits were produced. Life may even have come into existence. Large areas of liquid water have disappeared, but climate changes have frequently deposited large amounts of water-rich materials in mid-latitudes. From these materials, glaciers and other forms of frozen ground came to be. Small amounts of water probably melt on steep slopes from time to time and produce gullies. Recent images have also detected yearly changes on some slopes that may have been caused by liquid water. Although Mars is very cold at present, water could exist as a liquid if it contains salts. Salt is expected to be on the Martian surface.
Details of how water has been discovered can be found in the sections that follow on the various orbiting and landing robots that have been sent to Mars. In addition, many bits and pieces of indirect evidence are listed here. Since several missions (Mars Odyssey, Mars Global Surveyor, Mars Reconnaissance Orbiter, Mars Express, Mars Opportunity Rover and Mars Curiosity Rover) are still sending back data from the Red Planet, discoveries continue to be made. One recent discovery, announced by NASA scientists on September 27, 2012, is that the Curiosity Rover found evidence for an ancient streambed suggesting a "vigorous flow" of water on Mars.
Image maps of Mars
Image maps of the planet Mars show its geographical features in addition to the noted rover and lander locations. North is at the top; elevations: red (higher), yellow (zero), blue (lower).
Map of quadrangles
For mapping purposes, the United States Geological Survey divides Mars into 30 quadrangles, numbered with the prefix "MC" for "Mars Chart." The map images were taken by the Mars Global Surveyor.
Findings from probes
Mariner 9
Mariner 9 imaging revealed the first direct evidence of water in the form of river beds, canyons (including the Valles Marineris, a system of canyons about 4,020 kilometres (2,500 mi) long), evidence of water erosion and deposition, weather fronts, fogs, and more. The findings from the Mariner 9 mission underpinned the later Viking program. The enormous Valles Marineris canyon system is named after Mariner 9 in honor of its achievements. Launched in 1971, its mission ended the following year.
Viking program
By discovering many geological forms that are typically formed from large amounts of water, the Viking orbiters caused a revolution in our ideas about water on Mars. Huge river valleys were found in many areas. They showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and traveled thousands of kilometers. Large areas in the southern hemisphere contained branched valley networks, suggesting that rain once fell. The flanks of some volcanoes are believed to have been exposed to rainfall because their erosion patterns resemble those of Hawaiian volcanoes. Many craters look as if the impactor fell into mud. When they were formed, ice in the soil may have melted, turned the ground into mud, and then the mud flowed across the surface. Normally, material from an impact goes up, then down; it does not flow across the surface, going around obstacles, as it does on some Martian craters. Regions called "Chaotic Terrain" seemed to have quickly lost great volumes of water, which caused large channels to form downstream. The amount of water involved was almost unthinkable: estimates for some channel flows run to ten thousand times the flow of the Mississippi River. Underground volcanism may have melted frozen ice; the water then flowed away and the ground collapsed to leave chaotic terrain.
The images below, some of the best from the Viking orbiters, are mosaics of many small, high resolution images.
Streamlined islands in Maja Valles suggest that large floods occurred on Mars.
Large amounts of water would have been required to carry out the erosion shown in this image of Dromore Crater.
Networks of branched channels in Thaumasia quadrangle are strong evidence for rain on Mars in the past.
The ejecta from Arandas Crater acted like mud suggesting that large amounts of frozen water were melted by the impact.
Channels & troughs on the flank of Alba Patera. Some are associated with lava flows, others are probably caused by running water.
Results from Viking lander experiments strongly suggest the presence of water on Mars both in the present and in the past. All samples heated in the gas chromatograph-mass spectrometer (GCMS) gave off water. However, the way the samples were handled prohibited an exact measurement of the amount of water; it was around 1%. General chemical analysis suggested the surface had been exposed to water in the past. Some chemicals in the soil contained sulfur and chlorine like those remaining after sea water evaporates. Sulfur was more concentrated in the crust on top of the soil than in the bulk soil beneath, so it was concluded that the upper crust was cemented together with sulfates that were transported to the surface dissolved in water. This process is common in Earth's deserts. The sulfur may be present as sulfates of sodium, magnesium, calcium, or iron. A sulfide of iron is also possible.
Using results from the chemical measurements, mineral models suggest that the soil could be a mixture of about 80% iron-rich clay, about 10% magnesium sulfate (kieserite?), about 5% carbonate (calcite), and about 5% iron oxides (hematite, magnetite, goethite?). These minerals are typical weathering products of mafic igneous rocks. The presence of clay, magnesium sulfate, kieserite, calcite, hematite, and goethite strongly suggests that water was once in the area. Sulfate contains chemically bound water, so its presence suggests water was around in the past. Viking 2 found a similar group of minerals. Because Viking 2 was much farther north, pictures it took in the winter showed frost.
Mars Global Surveyor
The Mars Global Surveyor's Thermal Emission Spectrometer (TES) is an instrument able to detect mineral composition on Mars. Mineral composition gives information on the presence or absence of water in ancient times. TES identified a large (30,000 square-kilometer) area (in the Nili Fossae formation) that contained the mineral olivine. It is thought that the ancient impact that created the Isidis basin resulted in faults that exposed the olivine. Olivine is present in many mafic volcanic rocks; in the presence of water it weathers into minerals such as goethite, chlorite, smectite, maghemite, and hematite. The discovery of olivine is strong evidence that parts of Mars have been extremely dry for a long time. Olivine was also discovered in many other small outcrops within 60 degrees north and south of the equator. Olivine has been found in the SNC (shergottite, nakhlite, and chassignite) meteorites that are generally accepted to have come from Mars. Later studies have found that olivine-rich rocks cover more than 113,000 square kilometers of the Martian surface. That is about 11 times the area covered by the five volcanoes on the Big Island of Hawaii.
Below are some examples of gullies that were photographed by Mars Global Surveyor.
Group of gullies on the north wall of a crater that lies west of the crater Newton (41.3047 degrees south latitude, 192.89 degrees east longitude). Image is located in the Phaethontis quadrangle.
Gullies on one wall of Kaiser Crater. Gullies usually are found in only one wall of a crater. Location is Noachis quadrangle.
Many places on Mars show dark streaks on steep slopes, such as crater walls. Dark slope streaks have been studied since the Mariner and Viking missions. It seems that streaks start out being dark, then they become lighter with age. Often they originate with a small narrow spot, then widen and extend downhill for hundreds of meters. Streaks do not seem to be associated with any particular layer of material because they do not always start at a common level along a slope. Although many of the streaks appear very dark, they are only 10% or less darker than the surrounding surface. Mars Global Surveyor found that new streaks have formed in less than one year on Mars.
Several ideas have been advanced to explain the streaks. Some involve water, or even the growth of organisms. The generally accepted explanation for the streaks is that they are formed from the avalanching of a thin layer of bright dust that is covering a darker surface. Bright dust settles on all Martian surfaces after a period of time.
Dark streaks can be seen in the images below, as seen from Mars Global Surveyor.
Tikhonravov Crater floor in Arabia quadrangle, showing dark slope streaks and layers.
Dark streaks in Diacria quadrangle.
Some parts of Mars show inverted relief. This occurs when materials are deposited on the floor of a stream then become resistant to erosion, perhaps by cementation. Later the area may be buried. Eventually erosion removes the covering layer. The former streams become visible since they are resistant to erosion. Mars Global Surveyor found several examples of this process. Many inverted streams have been discovered in various regions of Mars, especially in the Medusae Fossae Formation, Miyamoto Crater, and the Juventae Plateau.
Mars Pathfinder
Pathfinder found that temperatures varied on a diurnal cycle. It was coldest just before sunrise (about −78 °C) and warmest just after Martian noon (about −8 °C). These extremes occurred near the ground, which warmed up and cooled down fastest. At this location, the highest temperature never reached the freezing point of water (0 °C), so Mars Pathfinder confirmed that where it landed it is too cold for liquid water to exist. However, water could exist as a liquid if it were mixed with various salts.
Surface pressures varied diurnally over a 0.2 millibar range, but showed two daily minima and two daily maxima. The average daily pressure decreased from about 6.75 millibars to a low of just under 6.7 millibars, corresponding to when the maximum amount of carbon dioxide had condensed on the south pole. The pressure on the Earth is generally close to 1000 millibars, so the pressure on Mars is very low. The pressures measured by Pathfinder would not permit water or ice to exist on the surface. However, if ice were insulated with a layer of soil, it could last a long time.
Other observations were consistent with water being present in the past. Some of the rocks at the Mars Pathfinder site leaned against each other in a manner geologists term imbricated. It is believed strong flood waters in the past pushed the rocks around until they faced away from the flow. Some pebbles were rounded, perhaps from being tumbled in a stream. Parts of the ground are crusty, maybe due to cementing by a fluid containing minerals.
There was evidence of clouds and maybe fog.
Mars Odyssey
Mars Odyssey found much evidence for water on Mars in the form of pictures, and with its spectrometer it showed that much of the ground is loaded with ice. In July 2003, at a conference in California, it was announced that the Gamma Ray Spectrometer (GRS) on board Mars Odyssey had discovered huge amounts of water over vast areas of Mars. Mars has enough ice just beneath the surface to fill Lake Michigan twice. In both hemispheres, from 55 degrees latitude to the poles, Mars has a high density of ice just under the surface; one kilogram of soil contains about 500 g of water ice. Close to the equator, however, the soil contains only 2 to 10% water. Scientists believe that much of this water is locked up in the chemical structure of minerals such as clay and sulfates. Previous studies with infrared spectroscopes had provided evidence of small amounts of chemically or physically bound water, and the Viking landers detected low levels of chemically bound water in the Martian soil. It is believed that although the upper surface contains only a percent or so of water, ice may lie just a few feet deeper. Some areas, such as Arabia Terra, the Amazonis quadrangle, and the Elysium quadrangle, contain large amounts of water. Analysis of the data suggests that the southern hemisphere may have a layered structure. Both poles showed buried ice, but none was seen close to the north pole because it was covered over by seasonal carbon dioxide (dry ice); when the measurements were gathered it was winter at the north pole, so carbon dioxide had frozen on top of the water ice. There may be much more water further below the surface, since the instruments aboard Mars Odyssey are only able to study the top meter or so of soil. If all holes in the soil were filled by water, this would correspond to a global layer of water 0.5 to 1.5 km deep. The Phoenix lander confirmed the initial findings of the Mars Odyssey: it found ice a few inches below the surface, and the ice is at least 8 inches deep. When the ice is exposed to the Martian atmosphere it slowly sublimates. In fact, some of the ice was exposed by the landing rockets of the craft.
Thousands of images returned from Odyssey support the idea that Mars once had great amounts of water flowing across its surface. Some pictures show patterns of branching valleys. Others show layers that may have formed under lakes. Deltas have been identified. For many years researchers believed that glaciers existed under a layer of insulating rocks. Lineated valley fill is one example of these rock-covered glaciers. They are found on the floors of some channels. Their surfaces have ridged and grooved materials that deflect around obstacles. Some glaciers on the Earth show such features. Lineated floor deposits may be related to lobate debris aprons, which orbiting radar has shown to contain large amounts of ice. The pictures below, taken with the THEMIS instrument on board Mars Odyssey, show examples of features that are associated with water in the present or the past.
Erosion features in Ares Vallis – the streamlined shape was probably formed by running water.
Delta in Lunae Palus quadrangle.
Branching channels on floor of Melas Chasma. Image is in Coprates quadrangle.
Dao Vallis begins near a large volcano, called Hadriaca Patera, so it is thought to have received water when hot magma melted huge amounts of ice in the frozen ground. The partially circular depressions on the left side of the channel in the image above suggest that groundwater sapping also contributed water. In some areas large river valleys begin with a landscape feature called "Chaos" or "Chaotic Terrain." It is thought that the ground collapsed as huge amounts of water were suddenly released. Examples of chaotic terrain, as imaged by THEMIS, are shown below.
Blocks in Aram showing possible source of water. The ground collapsed when large amounts of water were released. The large blocks probably still contain some water ice. Location is Oxia Palus quadrangle.
The Phoenix lander confirmed the existence of large amounts of water ice in the northern regions of Mars. This finding was predicted by theory and had been measured from orbit by the Mars Odyssey instruments. On June 19, 2008, NASA announced that dice-sized clumps of bright material in the "Dodo-Goldilocks" trench, dug by the robotic arm, had vaporized over the course of four days, strongly implying that the bright clumps were composed of water ice which sublimated following exposure. Even though dry ice also sublimates under the conditions present, it would do so at a rate much faster than observed.
On July 31, 2008, NASA announced that Phoenix confirmed the presence of water ice on Mars. During the initial heating cycle of a new sample, the Thermal and Evolved-Gas Analyzer's (TEGA) mass spectrometer detected water vapor when the sample temperature reached 0 °C. Liquid water cannot exist on the surface of Mars with its present low atmospheric pressure, except at the lowest elevations for short periods.
Results published in the journal Science after the mission ended reported that chloride, bicarbonate, magnesium, sodium, potassium, calcium, and possibly sulfate were detected in the samples. Perchlorate (ClO4), a strong oxidizer, was confirmed to be in the soil. When mixed with water, the chemical can greatly lower freezing points, in a manner similar to how salt is applied to roads to melt ice. Perchlorate may be allowing small amounts of liquid water to form on Mars today. Gullies, which are common in certain areas of Mars, may have formed from perchlorate melting ice and causing water to erode soil on steep slopes.
Additionally, during 2008 and early 2009, a debate emerged within NASA over the presence of 'blobs' which appeared on photos of the vehicle's landing struts, which have been variously described as being either water droplets or 'clumps of frost'. Due to the lack of consensus within the Phoenix science project, the issue had not been raised in any NASA news conferences. One scientist posited that the lander's thrusters splashed a pocket of brine from just below the Martian surface onto the landing strut during the vehicle's landing. The salts would then have absorbed water vapor from the air, which would have explained how they appeared to grow in size during the first 44 Martian days before slowly evaporating as Mars temperature dropped. Some images even suggest that some of the droplets darkened, then moved and merged; this is strong physical evidence that they were liquid.
For about as far as the camera can see, the land is flat but shaped into polygons between 2 and 3 meters in diameter that are bounded by troughs 20 cm to 50 cm deep. These shapes are due to ice in the soil expanding and contracting because of major temperature changes.
Comparison between polygons photographed by Phoenix on Mars...
... and as photographed (in false color) from Mars orbit...
The microscope showed that the soil on top of the polygons is composed of flat particles (probably a type of clay) and rounded particles. Clay is a mineral that forms from other minerals when water is available, so finding clay indicates the existence of past water. Ice is present a few inches below the surface in the middle of the polygons, and along their edges the ice is at least 8 inches deep. When the ice is exposed to the Martian atmosphere it slowly sublimates.
Snow was observed to fall from cirrus clouds. The clouds formed at a level in the atmosphere that was around −65 °C, so the clouds would have to be composed of water ice rather than carbon dioxide ice (dry ice), because the temperature for forming carbon dioxide ice is much lower, less than −120 °C. As a result of mission observations, it is now believed that water ice (snow) would have accumulated later in the year at this location. The highest temperature measured during the mission was −19.6 °C, while the coldest was −97.7 °C. So, in this region the temperature remained far below the freezing point of water (0 °C), even though the mission took place in the heat of the Martian summer.
Interpretation of the data transmitted from the craft was published in the journal Science. According to the peer-reviewed data, the site had a wetter and warmer climate in the recent past. Finding calcium carbonate in the Martian soil leads scientists to believe that the site had been wet or damp in the geological past. During seasonal or longer-period cycles, water may have been present as thin films. The tilt, or obliquity, of Mars changes far more than that of the Earth; hence times of higher humidity are probable. The data also confirm the presence of the chemical perchlorate, which makes up a few tenths of a percent of the soil samples. Perchlorate is used as food by some bacteria on Earth. Another paper claims that the previously detected snow could lead to a buildup of water ice.
Mars Rovers
The Mars Rovers Spirit and Opportunity found a great deal of evidence for past water on Mars. Designed to last only three months, both were still operating after more than six years. Although Spirit got trapped in a sand pit, Opportunity continues to provide scientific discovery.
The Spirit rover landed in what was thought to be a huge lake bed. However, the lake bed had been covered over with lava flows, so evidence of past water was initially hard to detect. As the mission progressed and the Rover continued to move along the surface more and more clues to past water were found.
On March 5, 2004, NASA announced that Spirit had found hints of water history on Mars in a rock dubbed "Humphrey". Raymond Arvidson, the McDonnell University Professor and chair of Earth and planetary sciences at Washington University in St. Louis, reported during a NASA press conference: "If we found this rock on Earth, we would say it is a volcanic rock that had a little fluid moving through it." In contrast to the rocks found by the twin rover Opportunity, this one was formed from magma and then acquired bright material in small crevices, which look like crystallized minerals. If this interpretation holds true, the minerals were most likely dissolved in water, which was either carried inside the rock or interacted with it at a later stage, after it formed.
By Sol 390 (mid-February 2005), as Spirit was advancing towards "Larry's Lookout" by driving up the hill in reverse, it investigated some targets along the way, including the soil target "Paso Robles", which contained the highest amount of salt found on the red planet. The soil also contained a high amount of phosphorus, though not nearly as high as another rock sampled by Spirit, "Wishstone". Squyres said of the discovery, "We're still trying to work out what this means, but clearly, with this much salt around, water had a hand here".
As Spirit traveled with a dead wheel in December 2007, pulling the dead wheel behind, the wheel scraped off the upper layer of the Martian soil, uncovering a patch of ground that scientists say shows evidence of a past environment that would have been perfect for microbial life. It is similar to areas on Earth where water or steam from hot springs came into contact with volcanic rocks. On Earth, these are locations that tend to teem with bacteria, said rover chief scientist Steve Squyres. "We're really excited about this," he told a meeting of the American Geophysical Union (AGU). The area is extremely rich in silica – the main ingredient of window glass. The researchers have now concluded that the bright material must have been produced in one of two ways. One: hot-spring deposits produced when water dissolved silica at one location and then carried it to another (i.e. a geyser). Two: acidic steam rising through cracks in rocks stripped them of their mineral components, leaving silica behind. "The important thing is that whether it is one hypothesis or the other, the implications for the former habitability of Mars are pretty much the same," Squyres explained to BBC News. Hot water provides an environment in which microbes can thrive and the precipitation of that silica entombs and preserves them. Squyres added, "You can go to hot springs and you can go to fumaroles and at either place on Earth it is teeming with life – microbial life."
Opportunity rover was directed to a site that had displayed large amounts of hematite from orbit. Hematite often forms from water. When Opportunity landed, layered rocks and marble-like hematite concretions ("blueberries") were easily visible. In its years of continuous operation, Opportunity sent back much evidence that a wide area on Mars was soaked in liquid water.
During a press conference in March 2006, mission scientists discussed their conclusions about the bedrock, and the evidence for the presence of liquid water during their formation. They presented the following reasoning to explain the small, elongated voids in the rock visible on the surface and after grinding into it (see last two images below). These voids are consistent with features known to geologists as "vugs". These are formed when crystals form inside a rock matrix and are later removed through erosive processes, leaving behind voids. Some of the features in this picture are "disk-like", which is consistent with certain types of crystals, notably sulfate minerals. Additionally, mission members presented first data from the Mössbauer spectrometer taken at the bedrock site. The iron spectrum obtained from the rock El Capitan shows strong evidence for the mineral jarosite. This mineral contains hydroxide ions, which indicates the presence of water when the minerals were formed. Mini-TES data from the same rock showed that it consists of a considerable amount of sulfates. Sulfates also contain water.
Spirit Rover found evidence for water in the Columbia Hills of Gusev crater. In the Clovis group of rocks the Mössbauer spectrometer (MB) detected goethite. Goethite forms only in the presence of water, so its discovery is the first direct evidence of past water in the rocks of the Columbia Hills. In addition, the MB spectra of rocks and outcrops displayed a strong decline in olivine presence, although the rocks probably once contained much olivine. Olivine is a marker for the lack of water because it easily decomposes in the presence of water. Sulfate was found, and it needs water to form. Other rock groups also contained sulfates. One type of soil from the Columbia Hills, called Paso Robles, may be an evaporite deposit because it contains large amounts of sulfur, phosphorus, calcium, and iron. In addition, the MB found that much of the iron in the Paso Robles soil was in the oxidized Fe3+ form, which would happen if water had been present.
After Spirit stopped working scientists studied old data from the Miniature Thermal Emission Spectrometer, or Mini-TES and confirmed the presence of large amounts of carbonate-rich rocks, which means that regions of the planet may have once harbored water. The carbonates were discovered in an outcrop of rocks called "Comanche."
On March 18, 2013, NASA reported evidence from instruments on the Curiosity rover of mineral hydration, likely hydrated calcium sulfate, in several rock samples including the broken fragments of "Tintina" rock and "Sutton Inlier" rock as well as in veins and nodules in other rocks like "Knorr" rock and "Wernicke" rock. Analysis using the rover's DAN instrument provided evidence of subsurface water, amounting to as much as 4% water content, down to a depth of 60 cm (2.0 ft), in the rover's traverse from the Bradbury Landing site to the Yellowknife Bay area in the Glenelg terrain.
Mars Reconnaissance Orbiter
The Mars Reconnaissance Orbiter's HiRISE instrument has taken many images that strongly suggest that Mars has had a rich history of water-related processes. A major discovery was finding evidence of hot springs. These may have contained life and may now contain well-preserved fossils of life. Research, in the January 2010 issue of Icarus, described strong evidence for sustained precipitation in the area around Valles Marineris. The types of minerals there are associated with water. Also, the high density of small branching channels indicates a great deal of precipitation because they are similar to stream channels on the Earth.
Some places on Mars show inverted relief. In these locations, a stream bed appears as a raised feature instead of a depression. The inverted former stream channels may be caused by the deposition of large rocks or by cementation of loose materials. In either case, erosion would strip away the surrounding land and leave the old channel as a raised ridge, because the channel fill is more resistant to erosion. Images below, taken with HiRISE, show sinuous ridges that are old channels that have become inverted.
Using data from Mars Global Surveyor, Mars Odyssey and the Mars Reconnaissance Orbiter, scientists have found widespread deposits of chloride minerals. Usually chlorides are the last minerals to come out of solution. A picture below shows some deposits within the Phaethontis quadrangle. Evidence suggests that the deposits were formed from the evaporation of mineral-enriched waters. Lakes may have been scattered over large areas of the Martian surface. Carbonates, sulfates, and silica should precipitate out ahead of them. Sulfates and silica have been discovered by the Mars Rovers. Places with chloride minerals may have once held various life forms. Furthermore, such areas should preserve traces of ancient life.
Rocks on Mars have been found to occur frequently as layers, called strata, in many different places. Layers form in various ways: volcanoes, wind, or water can produce them. Layers are of particular interest because some may have formed under large bodies of water. Sometimes the layers display different colors. Light-toned rocks on Mars have been associated with hydrated minerals like sulfates. Instruments on orbiting spacecraft have detected clay (also called phyllosilicates) in some layers. Hydrated minerals such as sulfates and clays are significant because they usually form in the presence of water. Below are a few of the many examples of layers that have been studied with HiRISE.
Close-up of layers in west slope of Asimov Crater. Shadows show the overhang. Some of the layers are much more resistant to erosion, so they stick out. Location is Noachis quadrangle.
Dark slope streaks near the top of a pedestal crater, as seen by HiRISE under the HiWish program.
Much of the surface of Mars is covered by a thick smooth mantle that is thought to be a mixture of ice and dust. This ice-rich mantle, a few yards thick, smoothes the land. But in places it displays a bumpy texture, resembling the surface of a basketball. Because there are few craters on this mantle, the mantle is relatively young. The images below, all taken with HiRISE, show a variety of views of this smooth mantle.
Dissected Mantle with layers. Location is Noachis quadrangle.
The mantle is thought to result from frequent, major climate changes. Changes in Mars's orbit and tilt cause significant changes in the distribution of water ice from polar regions down to latitudes equivalent to Texas. During certain climate periods water vapor leaves polar ice and enters the atmosphere. The water returns to the ground at lower latitudes as deposits of frost or snow mixed generously with dust. The atmosphere of Mars contains a great many fine dust particles. Water vapor condenses on the particles, which then fall to the ground due to the additional weight of the water coating. When ice at the top of the mantling layer goes back into the atmosphere, it leaves behind dust, which insulates the remaining ice.
HiRISE has carried out many observations of gullies that are assumed to have been caused by recent flows of liquid water. Many gullies are imaged over and over to see if any changes occur. Some repeat observations of gullies have displayed changes that some scientists argue were caused by liquid water over the period of just a few years; others say the flows were merely dry flows. Gullies were first discovered by the Mars Global Surveyor. Below are some of the many hundreds of gullies that have been studied with HiRISE.
Gullies near Newton Crater, as seen by HiRISE, under the HiWish program. Place where there was an old glacier is labeled. Image from Phaethontis quadrangle.
Gullies in a crater in Terra Sirenum, as seen by HiRISE under the HiWish Program.
Of interest from the days of the Viking Orbiters are piles of material surrounding cliffs. These deposits of rock debris are called lobate debris aprons (LDAs). These features have a convex topography and a gentle slope from cliffs or escarpments; this suggests flow away from the steep source cliff. In addition, lobate debris aprons can show surface lineations just as rock glaciers on the Earth do. In 2008, research with the Shallow Radar on the Mars Reconnaissance Orbiter provided strong evidence that the LDAs in Hellas Planitia and in mid-northern latitudes are glaciers that are covered with a thin layer of rocks. Radar from the Mars Reconnaissance Orbiter gave a strong reflection from both the top and the base of LDAs, meaning that pure water ice made up the bulk of the formation (between the two reflections). The discovery of water ice in LDAs demonstrates that water is found at even lower latitudes. Future colonists on Mars will be able to tap into these ice deposits, instead of having to travel to much higher latitudes. Another major advantage of LDAs over other sources of Martian water is that they can easily be detected and mapped from orbit. Below are examples of lobate debris aprons that were studied with HiRISE.
View of lobate debris apron along a slope. Image located in Arcadia quadrangle.
Place where a lobate debris apron begins. Note stripes which indicate movement. Image located in Ismenius Lacus quadrangle.
Research, reported in the journal Science in September 2009, demonstrated that some new craters on Mars show exposed, pure water ice. After a time, the ice disappears, sublimating into the atmosphere. The ice is only a few feet deep. The ice was confirmed with the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) on board the Mars Reconnaissance Orbiter (MRO). The ice was found in five locations. Three of the locations are in the Cebrenia quadrangle: 55.57° N, 150.62° E; 43.28° N, 176.9° E; and 45° N, 164.5° E. Two others are in the Diacria quadrangle: 46.7° N, 176.8° E and 46.33° N, 176.9° E. This discovery shows that future colonists on Mars will be able to obtain water from a wide variety of locations. The ice can be dug up, melted, and then split apart to provide oxygen and hydrogen for rocket fuel. Hydrogen is the powerful fuel used by the space shuttle main engines.
Columnar jointing
In 2009, HiRISE discovered columnar jointing in rocks on Mars. Such jointing is accepted as having involved water: producing the parallel cracks of columnar jointing requires additional cooling, and water is the most likely cooling agent. Scientists calculate that the water was present intermittently for a few months to a few years.
Light-toned layered deposits
HiRISE has sent back many images of large surface areas that are termed "light-toned layered deposits." These 30–80 meter thick deposits are believed to have been formed from the action of water. They contain evidence of stream channel systems. Furthermore, chemical data from the Compact Reconnaissance Imaging Spectrometer orbiting the planet have shown water related mineral forms: opal (hydrated silica) and iron sulfates. These can be formed from the action of low temperature acid solutions reacting with basaltic rocks. These features of light-toned layered deposits strongly suggest that there was long lasting precipitation and surface runoff during the Hesperian epoch of Martian history.
Sources of Martian water
Volcanoes give off great amounts of gas when they erupt. The gases are usually water vapor and carbon dioxide. Estimates put the amount of gas released into the atmosphere as enough to make the Martian atmosphere thicker than the Earth's. The water vapor from the volcanoes could have made enough water to place all of Mars under 120 meters of water. In addition, all the carbon dioxide released would have raised the temperature of the planet due to the greenhouse effect, by trapping heat in the form of infrared radiation. So the eruption of lava on Tharsis could have made Mars Earth-like in the past. With a thicker atmosphere, oceans and/or lakes may have been present.
Groundwater on Mars
One group of researchers proposed that some of the layers on Mars were caused by groundwater rising to the surface in many places, especially inside of craters. According to the hypothesis, groundwater with dissolved minerals came to the surface, in and later around craters, and helped to form layers by adding minerals (especially sulfate) and cementing sediments. This hypothesis is supported by a groundwater model and by sulfates discovered in a wide area. At first, by examining surface materials with Opportunity Rover, scientists discovered that groundwater had repeatedly risen and deposited sulfates. Later studies with instruments on board the Mars Reconnaissance Orbiter showed that the same kinds of materials exist in a large area that included Arabia.
Evidence of frozen water
Ice patches
On July 28, 2005, the European Space Agency announced the existence of a crater partially filled with frozen water; some then interpreted the discovery as an "ice lake". Images of the crater, taken by the High Resolution Stereo Camera on board the European Space Agency's Mars Express spacecraft, clearly show a broad sheet of ice in the bottom of an unnamed crater located on Vastitas Borealis, a broad plain that covers much of Mars' far northern latitudes, at approximately 70.5° North and 103° East. The crater is 35 km wide and about 2 km deep.
The height difference between the crater floor and the surface of the water ice is about 200 metres. ESA scientists have attributed most of this height difference to sand dunes beneath the water ice, which are partially visible. While scientists do not refer to the patch as a "lake", the water ice patch is remarkable for its size and for being present throughout the year. Deposits of water ice and layers of frost have been found in many different locations on the planet.
Equatorial frozen sea
Surface features consistent with pack ice have been discovered in the southern Elysium Planitia. What appear to be plates of broken ice, ranging in size from 30 m to 30 km, are found in channels leading to a flooded area of approximately the same depth and width as the North Sea. The plates show signs of break-up and rotation that clearly distinguish them from lava plates elsewhere on the surface of Mars. The source of the flood is thought to be the nearby geological fault Cerberus Fossae, which spewed water as well as lava some 2 to 10 million years ago. It has been suggested that the water exited the Cerberus Fossae, then pooled and froze in the low, level plains. Not all scientists agree with these conclusions.
Glaciers formed much of the observable surface in large areas of Mars. Much of the area in high latitudes, especially the Ismenius Lacus quadrangle, is believed to still contain enormous amounts of water ice. Recent evidence has led many planetary scientists to believe that water ice still exists as glaciers with a thin covering of insulating rock. In March 2010, scientists released the results of a radar study of an area called Deuteronilus Mensae that found widespread evidence of ice lying beneath a few meters of rock debris. Glaciers are believed to be associated with fretted terrain, many volcanoes, and even some craters. Ridges of debris on the surface of the glaciers show the direction of ice movement. The surface of some glaciers has a rough texture due to sublimation of buried ice: the ice turns directly into a gas and leaves behind an empty space, and the overlying material then collapses into the void. Other pictures below show various features that appear to be connected with the existence of glaciers.
Glacier as seen by HiRISE under the HiWish program. Area in rectangle is enlarged in the next photo. Zone of accumulation of snow at the top. Glacier is moving down valley, then spreading out on plain. Evidence for flow comes from the many lines on surface. Location is in Protonilus Mensae in Ismenius Lacus quadrangle.
Enlargement of area in rectangle of the previous image. On Earth, the ridge would be called the terminal moraine of an alpine glacier. Picture taken with HiRISE under the HiWish program. Image from Ismenius Lacus quadrangle.
Context for the next image of the end of a flow feature or glacier. Location is Hellas quadrangle. Picture taken with HiRISE under the HiWish program.
Close-up of the area in the box in the previous image. This may be called by some the terminal moraine of a glacier. For scale, the box shows the approximate size of a football field. Image taken with HiRISE under the HiWish program. Location is Hellas quadrangle.
Possible moraine on the end of a past glacier on a mound in Deuteronilus Mensae, as seen by HiRISE, under the HiWish program.
Possible Glacial Cirque in Hellas Planitia, as seen by HiRISE, under the HiWish program. Lines are probably due to downhill movement.
Glaciers, as seen by HiRISE under the HiWish program. The glacier on the left is thin because it has lost much of its ice. The glacier on the right, on the other hand, is thick; it still contains a lot of ice under a thin layer of dirt and rock. Location is Hellas quadrangle.
Remains of glaciers, as seen by HiRISE under the HiWish program. Image from Ismenius Lacus quadrangle.
Probable glacier as seen by HiRISE under HiWish program. Radar studies have found that it is made up of almost completely pure ice. It appears to be moving from the high ground (a mesa) on the right. Location is Ismenius Lacus quadrangle.
Polar ice caps
Both the northern polar cap (Planum Boreum) and the southern polar cap (Planum Australe) are believed to grow in thickness during the winter and partially sublime during the summer. Data obtained by the Mars Express satellite in 2004 made it possible to confirm that the southern polar region has ice extending to a depth of 3.7 kilometres (2.3 mi) below the surface, with varying contents of frozen water depending on latitude. The region can be divided into three parts. The first part, the polar cap itself, is a mixture of CO2 ice and water ice. The second part comprises steep slopes known as 'scarps', made almost entirely of water ice, that fall away from the polar cap to the surrounding plains. The third part encompasses the vast permafrost fields that stretch for tens of kilometres away from the scarps. NASA scientists calculate that the volume of water ice in the south polar ice cap, if melted, would be sufficient to cover the entire planetary surface to a depth of 11 metres.
Results, published in 2009, of shallow radar measurements of the North Polar ice cap determined that the volume of water ice in the cap is 821,000 cubic kilometers (197,000 cubic miles). That is equal to 30% of the Earth's Greenland ice sheet, or enough to cover the surface of Mars to a depth of 5.6 meters (the figure comes from dividing the ice cap volume by the surface area of Mars). The radar instrument is on board the Mars Reconnaissance Orbiter.
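As a rough cross-check of those quoted figures, the short sketch below redoes the division described above. The surface area of Mars and the Greenland ice-sheet volume used here are assumed approximate values, not numbers taken from the radar study, so the output should only be read as an order-of-magnitude sanity check.

```python
# Rough cross-check of the figures quoted above; all constants are approximate.
ICE_CAP_VOLUME_KM3 = 821_000          # north polar cap water-ice volume (radar result quoted above)
MARS_SURFACE_AREA_KM2 = 144_800_000   # assumed approximate surface area of Mars
GREENLAND_ICE_VOLUME_KM3 = 2_850_000  # assumed approximate volume of the Greenland ice sheet

# Spread the cap's volume evenly over the whole planet to get an equivalent global layer.
layer_depth_m = ICE_CAP_VOLUME_KM3 / MARS_SURFACE_AREA_KM2 * 1000.0  # km converted to m

# Compare the cap's volume with Greenland's ice sheet.
greenland_fraction = ICE_CAP_VOLUME_KM3 / GREENLAND_ICE_VOLUME_KM3

print(f"Equivalent global layer: {layer_depth_m:.1f} m")                 # about 5.7 m
print(f"Fraction of the Greenland ice sheet: {greenland_fraction:.0%}")  # about 29%
```

Both results land close to the quoted 5.6 meters and 30%, which is consistent with the simple volume-over-area reasoning described in the text.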
Ground ice
For many years, various scientists have suggested that some Martian surfaces look like periglacial regions on Earth. Sometimes it is said that these are regions of permafrost. These observations suggest that frozen water lies right beneath the surface. A common feature in the higher latitudes, patterned ground, can occur in a number of shapes, including stripes and polygons. On the Earth, these shapes are caused by the freezing and thawing of soil. There are other types of evidence for large amounts of frozen water under the surface of Mars, such as terrain softening which rounds sharp topographical features. Besides landscape features that suggest water frozen in the ground, there is evidence from Mars Odyssey's Gamma Ray Spectrometer, theoretical calculations, and direct measurements with the Phoenix lander.
Permafrost polygons in the Arctic
Flat terrain near the north pole of Mars showing what appear to be polygonal patterns.
Patterned ground in the Canadian Arctic
Cones in Athabasca Valles formed from lava interacting with ice.
Some areas of Mars are covered with cones that resemble those on Earth where lava has flowed on top of frozen ground. The heat of the lava melts the ice, then changes it into steam. The powerful force of the steam works its way through the lava and produces a cone. In the Athabasca Valles image above, the larger cones were made when the steam went through the thicker layers of lava. The difference between highest elevation (red) to lowest (dark blue) is 170 metres (560 ft).
Scalloped topography
Certain regions of Mars display scallop-shaped depressions. The depressions are believed to be the remains of an ice-rich mantle deposit. Scallops are caused by ice sublimating from frozen soil. This mantle material probably fell from the air as ice formed on dust when the climate was different due to changes in the tilt of the Martian pole. The scallops are typically tens of meters deep and from a few hundred to a few thousand meters across. They can be almost circular or elongated. Some appear to have coalesced, causing a large, heavily pitted terrain to form. The process of forming the terrain may begin with sublimation from a crack; there are often polygonal cracks where scallops form. So the presence of scalloped topography is an indication of frozen ground.
Possible evidence of flowing water
In August 2011, NASA announced the discovery of seasonal changes in gullies near crater rims on the Southern hemisphere. This suggests salty water flowing and then evaporating, possibly leaving some sort of residue.
On September 27, 2012, NASA scientists announced that the Curiosity rover found evidence for an ancient streambed suggesting a "vigorous flow" of water on Mars. In particular, analysis of an ancient streambed indicated that the water ran quickly, possibly at hip depth. The discovery marks an important achievement for Curiosity, and supports the notion that Mars was once capable of harboring life.
Remnants of the now dried-up stream were found inside the Gale Crater within which Curiosity is working. Proof of running water came in the form of rounded pebbles and gravel fragments that could have only been weathered by strong currents. Their shape and orientation suggests long-distance transport from above the rim of the crater, where a channel named Peace Vallis feeds into the alluvial fan. Because there are many channels like this, NASA scientists believe the flows were continuous or repeated for long durations, and not intermittent.
"From the size of gravels it carried, we can interpret the water was moving about 3 feet per second, with a depth somewhere between ankle and hip deep," noted Curiosity scientist William Dietrich speaking through NASA's official release. "Plenty of papers have been written about channels on Mars with many different hypotheses about the flows in them. This is the first time we're actually seeing water-transported gravel on Mars. This is a transition from speculation about the size of streambed material to direct observation of it."
Mars meteorites
Over thirty meteorites have been found that came from Mars. Some of them contain evidence that they were exposed to water when on Mars.
It has been shown that one class of Martian meteorites, the nakhlites, was suffused with liquid water around 620 million years ago and was ejected from Mars around 10.75 million years ago by an asteroid impact. They fell to Earth within the last 10,000 years.
In 1996, a group of scientists reported on chemical fossils in Allan Hills 84001, a meteorite from Mars. Many studies disputed the validity of the fossils. It was found that most of the organic matter in the meteorite was of terrestrial origin.
Lakes
A variety of lake basins have been discovered on Mars. Some are comparable in size to the largest lakes on Earth, such as the Caspian Sea, Black Sea, and Lake Baikal. Lakes that are fed by valley networks are found in the southern highlands. There are places that are closed depressions with river valleys leading into them. These areas are thought to have once contained lakes. One is in Terra Sirenum; its overflow moved through Ma'adim Vallis into Gusev Crater, which was explored by the Mars Exploration Rover Spirit. Another is near Parana Valles and Loire Vallis. Some lakes are believed to have formed by precipitation, while others were formed from groundwater. Lakes are believed to have existed in the Argyre basin, the Hellas basin, and maybe in Valles Marineris.
Research, published in January 2010, suggests that Mars had lakes, each around 20 km wide, along parts of the equator. Although earlier research showed that Mars had a warm and wet early history that has long since dried up, these lakes existed in the Hesperian Epoch, a much later period. Using detailed images from NASA's Mars Reconnaissance Orbiter, the researchers speculate that there may have been increased volcanic activity, meteorite impacts or shifts in Mars' orbit during this period that warmed Mars' atmosphere enough to melt the abundant ice present in the ground. Volcanoes would have released gases that thickened the atmosphere for a temporary period, trapping more sunlight and making it warm enough for liquid water to exist. In this new study, channels were discovered that connected lake basins near Ares Vallis. When one lake filled up, its waters overflowed the banks and carved the channels to a lower area where another lake would form. These lakes would be another place to look for evidence of present or past life.
Lake deltas
Researchers have found a number of examples of deltas that formed in Martian lakes. Finding deltas is a major sign that Mars once had a lot of water. Deltas usually require deep water over a long period of time to form. Also, the water level needs to be stable to keep sediment from washing away. Deltas have been found over a wide geographical range and several are pictured below.
Delta in Lunae Palus quadrangle
Delta in Margaritifer Sinus quadrangle
Probable delta in Eberswalde crater
Mars Ocean Hypothesis
The Mars Ocean Hypothesis states that nearly a third of the surface of Mars was covered by an ocean of liquid water early in the planet's geologic history. This primordial ocean, dubbed Oceanus Borealis, would have filled the Vastitas Borealis basin in the northern hemisphere, a region which lies 4–5 km (2.5–3 miles) below the mean planetary elevation, approximately 3.8 billion years ago. Early Mars would have required a warmer climate and a thicker atmosphere to allow liquid water to remain at the surface.
Observational evidence
There are several physical features in the present geography of Mars that suggest the existence of an ocean. Networks of valleys that merge into larger channels imply erosion by a liquid agent, and resemble ancient riverbeds on Earth. Enormous channels, 25 km wide and several hundred meters deep, appear to direct flow from underground aquifers in the Southern uplands into the Northern plains.
Research published in the Journal of Geophysical Research – Planets shows a much higher density of flow paths than formerly believed (more than twice as many). Regions on Mars with the most valleys are comparable to what is found on Earth. In the research, the team developed a computer program to identify valleys by searching for U-shaped structures in topographical data. The large number of valley networks strongly supports past rainfall on the planet. The global pattern of the Martian valleys could be explained by a large northern ocean. A large ocean in the northern hemisphere would explain why there is a southern limit to valley networks; the southernmost regions of Mars, located farthest from the water reservoir, would get little rainfall and would not develop valleys. In a similar fashion, the lack of rainfall would explain why Martian valleys become shallower from north to south.
Much of the northern hemisphere of Mars is located at a significantly lower elevation than the rest of the planet (the Martian dichotomy), and is unusually flat. Along the margins of this region are physical features indicative of ancient shorelines. Sea level must follow a line of constant gravitational potential. After adjustment for polar wander caused by mass redistributions from volcanism, the Martian paleo-shorelines meet this criterion. The Mars Orbiter Laser Altimeter (MOLA), which accurately determined the altitude of all parts of Mars, found that the watershed for an ocean on Mars covers three-quarters of the planet.
Although phyllosilicates, i.e. clays, have been observed on Mars, the northern lowlands show only a few such deposits in early geological layers, a fact that has so far contradicted the theory of a northern ocean on Mars. However, a 2011 numerical simulation study found that an ocean of water in the northern hemisphere would have had a temperature near the freezing point, "which would have hindered the formation of phyllosilicate minerals in the ocean basin", and this would explain the relative absence of clay minerals in the northern hemisphere.
Theoretical issues
The existence of liquid water on the surface of Mars requires both a warmer and thicker atmosphere. Atmospheric pressure on the present day Martian surface only exceeds that of the triple point of water (6.11 hPa) in the lowest elevations; at higher elevations water can exist only in solid or vapor form. Annual mean temperatures at the surface are currently less than 210 K, significantly less than what is needed to sustain liquid water. However, early in its history Mars may have had conditions more conducive to retaining liquid water at the surface.
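As a rough illustration of the constraint just described, here is a minimal sketch of the stability check. The test values at the bottom are illustrative assumptions (a typical surface site versus a warm afternoon in a deep basin), not measurements from this article, and the check deliberately ignores the boiling-point limit that applies at pressures only slightly above the triple point.

```python
# Minimal necessary-condition check for surface liquid water, using the constraints above.
# It ignores the boiling-point limit, which matters when pressure barely exceeds the triple point.
TRIPLE_POINT_PRESSURE_HPA = 6.11  # triple-point pressure of water, as quoted above
MELTING_POINT_K = 273.15          # melting point of water

def liquid_water_possible(pressure_hpa: float, temperature_k: float) -> bool:
    """Return True only if both the pressure and temperature constraints are satisfied."""
    return pressure_hpa > TRIPLE_POINT_PRESSURE_HPA and temperature_k > MELTING_POINT_K

# Illustrative (assumed) values, not measurements taken from this article:
print(liquid_water_possible(pressure_hpa=6.0, temperature_k=210.0))   # typical surface conditions -> False
print(liquid_water_possible(pressure_hpa=11.0, temperature_k=275.0))  # warm afternoon in a deep basin -> True
```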
Calculations of the volume of one of the supposed oceans yielded a number that would mean that Mars was covered with as much water as the Earth.
The water that was in this ocean may have escaped into space, been deposited in the ice caps, or have been trapped in the soil.
Alternative ideas
The existence of a primordial Martian ocean remains controversial among scientists. The Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment has discovered large boulders on the site of the ancient seabed, which should contain only fine sediment. The interpretations of some features as ancient shorelines have been challenged. Some have been shown to be of volcanic origin.
Possibility of Mars having enough water to support life
Life is generally understood to require liquid water. Some evidence suggests that Mars had enough water to form lakes and to carve huge river valleys. Vast quantities of water have been discovered frozen beneath much of the Martian surface. Nevertheless, many significant issues remain.
- History. When did water flow on Mars? Some areas of Mars have been extremely dry for long periods, as marked by the presence of olivine, which would be decomposed by water. On the other hand, many other areas contain clay and/or sulfates, which indicate the presence of liquid water on the surface.
- Sulfates. While the presence of sulfates bolsters the case for surface water, they present problems of their own. Sulfates form under acidic conditions. On Earth some organisms can survive in acidic environments, but questions remain about the possibility of life forming under such conditions. Even allowing for adaptation to acidic environments, could life actually originate in acidic waters? On the other hand, carbonates, which do not form in acid solutions, have been found in Martian meteorites, by the Phoenix lander, and by the Compact Reconnaissance Imaging Spectrometer, an instrument aboard the NASA Mars Reconnaissance Orbiter.
- Salts. The saltiness of the soil could be a major obstacle for life. Salt has been used by the human race as a major preservative because most organisms cannot live in highly salted water (halophile bacteria being an exception).
- Oxidizers. The Phoenix mission discovered perchlorate, a highly oxidizing chemical, in the soil. Although some organisms use perchlorate, the chemical could be hostile to life. Other research from different sources shows that some areas of Mars may not be that hostile to life.
Benton Clark III, a member of the Mars Exploration Rover (MER) team, surmises that Martian organisms could be adapted to a sort of suspended animation lasting millions of years. Indeed, some organisms can endure extreme environments for a time. Measurements performed on Earth under 50 meters of permafrost showed that half of the microorganisms would accumulate enough radiation from radioactive decay in rocks to die in 10 million years, but if organisms come back to life every few million years they could repair themselves and reset any damaged systems, especially DNA. Other scientists are in agreement.
The discovery of organisms living in extreme conditions on Earth has brought renewed hope that life exists, or once existed, on Mars. Colonies of microbes have been found beneath almost 3 kilometers of glaciers in the Canadian Arctic and in Antarctica. Could microbes live under the ice caps of Mars? In the 1980s, it was thought that microorganisms might live up to a depth of a few meters under ground. Today, we know that a wide variety of organisms grow to a depth of over a mile. Some live on gases like methane, hydrogen, and hydrogen sulfide that originate from volcanic activity.
Mars has had widespread volcanic activity, so it is entirely possible that life exists near volcanoes or underground reservoirs of hot magma. Some organisms live inside basalt (the most common rock on Mars) and produce methane. Methane has been detected on Mars, and some researchers believe there must be a mechanism, possibly biological, producing it, since methane will not last long in the present atmosphere of Mars. Other organisms eat sulfur compounds, the same chemicals that have been found in many regions of Mars. Scientists have suggested that whole communities of organisms could thrive near areas heated by volcanic activity. Studies have shown that certain forms of life have adapted to extremely high temperatures (80 °C to 110 °C). With all the volcanic activity on Mars, one would suppose that certain places have not yet cooled down. An underground magma chamber might melt ice, then circulate water through the ground. Remains of hot springs like the ones in Yellowstone National Park have actually been spotted by the Mars Reconnaissance Orbiter. Minerals associated with hot springs, such as opal and silica, have been studied on the ground by the Spirit rover and mapped from orbit by the Mars Reconnaissance Orbiter. Some volcanoes, like Olympus Mons, seem relatively young to the eyes of a geologist. However, no warm areas have ever been found on the surface. The Mars Global Surveyor scanned most of the surface in infrared with its TES instrument, and the Mars Odyssey's THEMIS also imaged the surface in wavelengths that measure temperature.
The possibility of liquid water on Mars has been examined. Although water exposed at the surface would quickly boil or evaporate away, a lake-sized body of water would quickly be covered with an ice layer, which would greatly reduce evaporation. With a cover of dust and other debris, water under ice might last for some time and could even flow significant distances as ice-covered rivers. Lake Vostok in Antarctica may have implications for liquid water still being present on Mars: if the lake existed before the perennial glaciation began, it is likely that the lake did not freeze all the way to the bottom. Accordingly, if water existed before the polar ice caps formed on Mars, it is likely that there is still liquid water below the ice caps. Large quantities of water could be released, even today, by an asteroid impact. It has been suggested that life could have survived over millions of years through periodic impacts that melted ice and allowed organisms to come out of dormancy and live for a few thousand years. But if impacts brought the water, perhaps liquid water did not exist on the surface for very long. Large river valleys could have been made in short periods of time (maybe just days) when impacts caused water to flow as a giant flood. We suppose that Mars had great amounts of water because of the existence of so many large river valleys, but perhaps the valleys did not take thousands to millions of years to form as they do on Earth. It is accepted that a vast network of channels, resembling many Martian channels, was formed in a very short time in eastern Washington State when floods were caused by the breakout of an ice-dammed lake. So perhaps not that much water was involved, and maybe it did not last long enough for life to develop.
Underground caves are another location on Mars where liquid water, and possibly life, might persist. There is some evidence for possible subsurface ice sheets near the equator; this may, for instance, be geologically ancient ice that slowly melts or sublimates on its way toward the surface.
Experiments also suggest that lichens and bacteria may be able to survive solely on humidity from the air, particularly in cracks in rocks. The water is available in the morning and evening, when humidity briefly condenses on the surface, and the organisms can absorb it.
Valleys and channels
The Viking Orbiters caused a revolution in our ideas about water on Mars. Huge river valleys were found in many areas. They showed that floods of water broke through dams, carved deep valleys, eroded grooves into bedrock, and traveled thousands of kilometers. Areas of branched streams in the southern hemisphere suggested that rain once fell.
The images below, some of the best from the Viking Orbiters, are mosaics of many small, high resolution images. Some of the pictures are labeled with place names.
Streamlined Islands seen by Viking showed that large floods occurred on Mars. Image is located in Lunae Palus quadrangle.
Scour Patterns, located in Lunae Palus quadrangle, were produced by flowing water from Maja Vallis, which lies just to the left of this mosaic. Detail of flow around Dromore Crater is shown on the next image.
The ejecta from Arandas Crater acts like mud. It moves around small craters (indicated by arrows), instead of just falling down on them. Craters like this suggest that large amounts of frozen water were melted when the impact crater was produced. Image is located in Mare Acidalium quadrangle and was taken by Viking Orbiter.
Branched channels in Thaumasia quadrangle, as seen by Viking Orbiter. Networks of channels like this are strong evidence for rain on Mars in the past.
The branched channels seen by Viking from orbit strongly suggested that it rained on Mars in the past. Image is located in Margaritifer Sinus quadrangle.
Channels in Candor plateau, as seen by HiRISE. Location is Coprates quadrangle. The many small, branched channels here are strong evidence for sustained precipitation.
Stream meander and cutoff, as seen by HiRISE under HiWish program. This image is located in Mare Acidalium quadrangle.
The high-resolution Mars Orbiter Camera on the Mars Global Surveyor has taken pictures that give much more detail about the history of liquid water on the surface of Mars. Despite the many gigantic flood channels and associated tree-like networks of tributaries found on Mars, there are no smaller-scale structures that would indicate the origin of the flood waters. It has been suggested that weathering has erased these smaller features, which would mean the river valleys are old. Another theory is that the ancient river valleys were created not by floods but by the slow seeping out of groundwater. This interpretation is supported by the sudden ending of the river networks in theatre-shaped heads rather than tapering ones. Additionally, the valleys are often discontinuous, with small sections of uneroded land separating the segments.
Research published in the Journal of Geophysical Research in June 2010 reported the detection of 40,000 river valleys on Mars, about four times the number that had previously been identified.
Many Mars researchers now agree that the Martian water worn features can be divided into two distinct classes: 1. dendritic (branched), terrestrial-scale, widely distributed, Noachian-age "valley networks" and 2. exceptionally large, long, single-thread, isolated, uncommon, Hesperian-age "outflow channels". Consensus seems to be emerging that the latter formed in single, catastrophic ruptures of subsurface water reservoirs, possibly sealed by ice, discharging colossal quantities of water across an otherwise ultra-arid Mars surface. The former, however, probably indicate prolonged "wet" (though still arid by terrestrial standards) conditions on Noachian-era Mars, with an active ongoing hydrological cycle.
Higher-resolution observations from spacecraft like Mars Global Surveyor also revealed at least a few hundred features along crater and canyon walls that appear similar to terrestrial seepage gullies. The gullies tended to face the Equator, occurred mostly in the highlands of the southern hemisphere, and all lay poleward of 30° latitude. The researchers found no partially degraded (i.e. weathered) gullies and no superimposed impact craters, indicating that these are very young features.
Crater wall inside Mariner Crater showing a large group of gullies, as seen by HiRISE.
Liquid water
Liquid water cannot persist on the surface of Mars under its present low atmospheric pressure, except at the lowest elevations and for short periods. The discovery of gully deposits that were not present ten years earlier provided evidence that liquid water flowed on the surface in the recent past, but there is disagreement in the scientific community as to whether the new deposits were formed from liquid water. A paper published in the January 2010 issue of Icarus concluded that the observed deposits were probably dry flows started by rockfalls in steep regions.
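To make the pressure argument concrete, the short calculation below estimates the boiling point of water at Martian surface pressures using the Clausius–Clapeyron relation with a constant latent heat. This is an illustrative sketch, not a computation from the cited studies; the latent-heat value, the triple-point pressure of about 610 Pa, and the roughly 1150 Pa figure used for a deep basin are assumed round numbers.

```python
# Illustrative sketch (not from the cited studies): boiling point of water at
# Martian surface pressures via the Clausius-Clapeyron relation, assuming a
# constant latent heat of vaporization. All constants are assumed round values.
import math

L_VAP = 2.5e6       # J/kg, latent heat of vaporization near 0 C (assumed constant)
R_V   = 461.5       # J/(kg*K), specific gas constant of water vapor
T_REF = 373.15      # K, boiling point at the reference pressure
P_REF = 101325.0    # Pa, reference pressure (1 atm)

def boiling_point(pressure_pa):
    """Temperature (K) at which water's vapor pressure equals the ambient pressure."""
    inv_t = 1.0 / T_REF - (R_V / L_VAP) * math.log(pressure_pa / P_REF)
    return 1.0 / inv_t

# Mean Martian surface pressure is near the triple point of water (~610 Pa);
# the deepest basins reach roughly twice that.
for label, p in [("triple point", 610.0), ("deep basin (assumed)", 1150.0)]:
    t = boiling_point(p)
    print(f"{label:22s} P = {p:7.0f} Pa -> boils near {t:5.1f} K ({t - 273.15:4.1f} C)")
```

Even in the deepest basins, water at the surface would boil only a few degrees above its freezing point under these assumptions, which is why any exposed liquid is confined to low elevations and short periods.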
Among the findings from the Opportunity rover is the presence of hematite on Mars in the form of small spheres on the Meridiani Planum. The spheres are only a few millimetres in diameter and are believed to have formed as rock deposits under watery conditions billions of years ago. Other minerals containing sulfur, iron, or bromine, such as jarosite, have also been found. This and other evidence led a group of 50 scientists to conclude in the December 9, 2004 edition of the journal Science that "Liquid water was once intermittently present at the Martian surface at Meridiani, and at times it saturated the subsurface. Because liquid water is a key prerequisite for life, we infer conditions at Meridiani may have been habitable for some period of time in Martian history." Later studies suggested that this liquid water was actually acidic because of the types of minerals found at the location. On the opposite side of the planet, the mineral goethite, which (unlike hematite) forms only in the presence of water, has also been found by the Spirit rover in the "Columbia Hills", along with other evidence of water.
Studies have shown that various salts present in the Martian soil could act as a kind of antifreeze, keeping water liquid at temperatures far below its normal freezing point. Some calculations suggest that tiny amounts of liquid water may be present for short periods of time (hours) in some locations. Some researchers have calculated that, taking insolation and pressure into account, liquid water could exist in some areas for about 10% of the Martian year; others estimate that water could be a liquid for only 2% of the year. Either way, that may be enough liquid water to support some forms of hardy organisms. It may not take much liquid water for life; organisms have been found on Earth living on extremely thin layers of unfrozen water in below-freezing locations. Research in December 2009 showed that liquid water could form in the daytime inside snow on Mars. As light heats ice, it may warm dust grains located inside; these grains then store heat and form water by melting some of the ice. This process has already been observed in Antarctica. Enough water may be produced for physical, chemical, and biological processes.
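As a rough illustration of the antifreeze effect described above (a sketch, not the calculation performed by the cited researchers), the ideal colligative relation ΔTf = i · Kf · m gives the freezing-point depression of a dilute brine. The example salts and concentrations below are assumptions; real Martian perchlorate brines are strongly non-ideal and can remain liquid at far lower temperatures (eutectics near 200 K) than this simple formula predicts.

```python
# Minimal sketch of ideal freezing-point depression, dT = i * Kf * m.
# Example brines are assumed; concentrated Martian perchlorate brines behave
# non-ideally and can stay liquid down to roughly 200 K at their eutectics.
KF_WATER = 1.86  # K*kg/mol, cryoscopic constant of water

def freezing_point_c(molality, vant_hoff_i):
    """Approximate freezing point (deg C) of a dilute aqueous solution."""
    return 0.0 - vant_hoff_i * KF_WATER * molality

brines = [
    ("NaCl, 1 mol/kg",             1.0, 2),  # 2 ions per formula unit
    ("NaCl, 5 mol/kg (near sat.)", 5.0, 2),
    ("Mg(ClO4)2, 2 mol/kg",        2.0, 3),  # 3 ions per formula unit
]
for name, m, i in brines:
    print(f"{name:28s} -> freezes near {freezing_point_c(m, i):6.1f} C (ideal estimate)")
```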
Polar ice caps
Both the northern polar cap (Planum Boreum) and the southern polar cap (Planum Australe) are believed to grow in thickness during the winter and partially sublime during the summer. Data obtained by the Mars Express satellite made it possible in 2004 to confirm that the southern polar cap contains ice down to a depth of 3.7 kilometres (2.3 mi) below the surface, with the proportion of frozen water varying with latitude. The south polar region has three parts: the bright polar cap itself, a mixture of CO2 ice and water ice; steep slopes known as scarps, made almost entirely of water ice, that fall away from the polar cap to the surrounding plains; and the vast permafrost fields that stretch for tens of kilometres away from the scarps. NASA scientists calculate that the volume of water ice in the south polar ice cap, if melted, would be sufficient to cover the entire planetary surface to a depth of 11 metres.
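The 11-metre figure can be checked with simple geometry: dividing the cap's water-ice volume by the surface area of Mars gives the depth of an equivalent global layer. The ice volume used below, about 1.6 million cubic kilometres, is an assumed round number consistent with published estimates rather than a value taken from this article.

```python
# Back-of-the-envelope check of the "11 metres" figure. The cap's water-ice
# volume is an assumed round number, not a measured value from this article.
import math

MARS_RADIUS_KM = 3389.5            # mean radius of Mars
ICE_VOLUME_KM3 = 1.6e6             # assumed water-ice volume of the south polar cap

surface_area_km2 = 4.0 * math.pi * MARS_RADIUS_KM ** 2
layer_m = ICE_VOLUME_KM3 / surface_area_km2 * 1000.0

print(f"Surface area of Mars    ~ {surface_area_km2:.3e} km^2")
print(f"Equivalent global layer ~ {layer_m:.1f} m")   # prints roughly 11 m
```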
Research published in January 2010 using HiRISE images found that understanding the layers is more complicated than was formerly believed. The brightness of the layers does not depend only on the amount of dust: the angle of the sun together with the angle of the spacecraft greatly affects the brightness seen by the camera, and this angle depends on factors such as the shape of the trough wall and its orientation. Furthermore, the roughness of the surface can greatly change the albedo (the amount of reflected light). In addition, what appears to be a layer is often just a fresh covering of frost. All of these factors are influenced by the wind, which can erode surfaces. The HiRISE camera did not reveal layers thinner than those seen by the Mars Global Surveyor, but it did see more detail within layers.
Ice patches
On July 28, 2005, the European Space Agency announced the existence of a crater partially filled with frozen water, which some then interpreted as an "ice lake". Images of the crater, taken by the High Resolution Stereo Camera on board the European Space Agency's Mars Express spacecraft, clearly show a broad sheet of ice in the bottom of an unnamed crater located on Vastitas Borealis, a broad plain that covers much of Mars' far northern latitudes, at approximately 70.5° North and 103° East. The crater is 35 km wide and about 2 km deep.
The height difference between the crater floor and the surface of the water ice is about 200 metres. ESA scientists have attributed most of this height difference to sand dunes beneath the water ice, which are partially visible. While scientists do not refer to the patch as a "lake", the water ice patch is remarkable for its size and for being present throughout the year. Deposits of water ice and layers of frost have been found in many different locations on the planet.
Equatorial frozen sea
Surface features consistent with pack ice have been discovered in the southern Elysium Planitia. What appear to be plates of broken ice, ranging in size from 30 m to 30 km, are found in channels leading to a flooded area of approximately the same area and depth as the North Sea. The plates show signs of breakup and rotation that clearly distinguish them from lava plates elsewhere on the surface of Mars. The source of the flood is thought to be the nearby geological fault Cerberus Fossae, which spewed water as well as lava some 2 to 10 million years ago.
Ancient coastline
A striking feature of the topography of Mars is the flat plains of the northern hemisphere. With the increasing amounts of data returned by the current set of orbiting probes, what seems to be an ancient shoreline several thousands of kilometres long has been discovered. Actually, two different shorelines have been proposed. One, the Arabia shoreline, can be traced all around Mars except through the Tharsis volcanic region. The second, the Deuteronilus, follows the Vastitas Borealis Formation. Some researchers do not agree that these formations are real shorelines. One major problem with the conjectured 2-billion-year-old shoreline is that it is not flat; that is, it does not follow a surface of constant gravitational potential. However, a 2007 Nature article points out that this could be due to a change in the distribution of Mars' mass, perhaps caused by volcanic eruption or meteor impact; the Elysium volcanic province and the massive Utopia basin buried beneath the northern plains have been put forward as the most likely causes. The Mars Ocean Hypothesis conjectures that the Vastitas Borealis basin was the site of a primordial ocean of liquid water 3.8 billion years ago.
Glaciers and ice ages
Many large areas of Mars have been shaped by glaciers. Much of the area at high latitudes, especially in the Ismenius Lacus quadrangle, is believed to still contain enormous amounts of water ice. Recent evidence has led many planetary scientists to believe that water ice still exists as glaciers with thin coverings of insulating rock. In March 2010, scientists released the results of a radar study of an area called Deuteronilus Mensae that found widespread evidence of ice lying beneath a few meters of rock debris. Glaciers are believed to be associated with fretted terrain, many volcanoes, and even some craters. Researchers have described glacial deposits on Hecates Tholus, Arsia Mons, Pavonis Mons, and Olympus Mons.
Ridges of debris on the surface of glaciers indicate the direction of ice movement. The surface of some glaciers has a rough texture due to sublimation of buried ice: the ice turns directly into gas and leaves behind an empty space, and the overlying material then collapses into the void. Glaciers are not pure ice; they contain dirt and rocks, and at times they dump their loads of material into ridges called moraines. Some places on Mars have groups of ridges that are twisted around; this may reflect further movement after the ridges were emplaced. Sometimes chunks of ice fall from a glacier and get buried in the land surface. When they melt, a more or less round hole remains; on Earth these features are called kettles or kettle holes. Mendon Ponds Park in upstate New York has preserved several of these kettles. The HiRISE picture below shows possible kettles in Moreux Crater.
Pictures below show various features that appear to be connected with the existence of glaciers.
Mesa in Ismenius Lacus quadrangle, as seen by CTX. Mesa has several glaciers eroding it. One of the glaciers is seen in greater detail in the next two images from HiRISE.
Glacier as seen by HiRISE under the HiWish program. Area in rectangle is enlarged in the next photo. Zone of accumulation of snow at the top. Glacier is moving down valley, then spreading out on plain. Evidence for flow comes from the many lines on surface. Location is in Protonilus Mensae in Ismenius Lacus quadrangle.
Many mid-latitude craters contain straight and/or curved ridges of material that resemble glacial moraines on the Earth. Moving ice carries rock material, then drops it as the ice disappears. On Mars, with its extremely thin atmosphere, ice does not usually melt but instead sublimates. As a result, the rock debris is just dropped, and melt water is not produced so the remains of these glaciers do not appear the same as on the Earth. Various names have been applied to these ridged features. Depending on the author, they may be called arcuate ridges, viscous flow features, Martian flow features, or moraine-like ridges. Many, but not all, seem to be associated with gullies on the walls of craters and mantling material.
Tongue-Shaped Glacier, as seen by Mars Global Surveyor. Location is Hellas quadrangle.
Lineated deposits are probably rock-covered glaciers found on the floors of some channels. Their surfaces have ridged and grooved materials that deflect around obstacles, similar to some glaciers on Earth. Lineated floor deposits may be related to lobate debris aprons, which orbiting radar has shown to contain large amounts of ice.
For many years, researchers noted that features on Mars called lobate debris aprons looked like glacial flows and suspected that ice lay under a layer of insulating rocks. Radar measurements have since confirmed that lobate debris aprons consist of almost pure ice covered by a layer of rocks.
Ice ages on Mars are far different from those Earth experiences. Martian ice ages, that is, periods when ice accumulates, occur during times when the poles are warmer. During a Martian ice age, water ice leaves the polar caps and is deposited in the mid latitudes. The moisture from the ice caps travels to lower latitudes as deposits of frost or snow mixed generously with dust. The atmosphere of Mars contains a great deal of fine dust particles; water vapor condenses on these particles, which then fall to the ground under the additional weight of the water coating. When ice at the top of the mantling layer returns to the atmosphere, it leaves behind dust that insulates the remaining ice. The total volume of water removed from the caps is a few percent of the ice caps, or enough to cover the entire surface of the planet to a depth of about one meter. Much of this moisture from the ice caps results in a thick, smooth mantle thought to be a mixture of ice and dust. This ice-rich mantle, a few meters thick, smooths the land, but in places it displays a bumpy texture resembling the surface of a basketball. Because there are few craters on this mantle, the mantle is relatively young; it is believed to have been emplaced during a relatively recent ice age. The mantle covers areas down to latitudes equivalent to those of Saudi Arabia and the southern United States.
The images below, all taken with HiRISE, show a variety of views of this smooth mantle.
Dissected mantle with layers. Location is Noachis quadrangle.
Ice ages on Mars are driven by changes in the planet's orbit and tilt. Orbital calculations show that Mars wobbles on its axis far more than Earth does. Earth is stabilized by its proportionally large moon, so its tilt varies by only a few degrees; Mars, in contrast, may change its tilt by tens of degrees. At times of high tilt its poles receive much more direct sunlight, which causes the ice caps to warm and shrink as ice sublimes. Adding to the variability of the climate, the eccentricity of the orbit of Mars changes twice as much as Earth's. Computer simulations have shown that a 45° tilt of the Martian axis would result in ice accumulation in areas that display glacial landforms. A 2008 study provided evidence for multiple glacial phases during Late Amazonian glaciation at the dichotomy boundary on Mars.
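To see quantitatively why a higher tilt warms the poles, the sketch below uses the standard circular-orbit result that the annual-mean insolation at a pole is (S/π)·sin(obliquity), where S is the mean solar flux at Mars. This is an illustrative calculation, not one taken from the cited simulations, and the solar-flux value is an assumed round number.

```python
# Illustrative sketch: annual-mean insolation at a Martian pole versus axial
# tilt, using the circular-orbit relation F = (S / pi) * sin(tilt).
# S_MARS is an assumed round value for the mean solar flux at Mars.
import math

S_MARS = 590.0   # W/m^2, assumed mean solar flux at Mars' distance from the Sun

def polar_annual_mean_flux(obliquity_deg):
    """Annual-mean flux (W/m^2) at the pole of a planet on a circular orbit."""
    return (S_MARS / math.pi) * math.sin(math.radians(obliquity_deg))

for tilt in (25.2, 35.0, 45.0):   # present-day tilt and two higher values
    print(f"tilt = {tilt:4.1f} deg -> polar annual-mean insolation ~ "
          f"{polar_annual_mean_flux(tilt):5.1f} W/m^2")
```

Under these assumptions, raising the tilt from the present roughly 25° to 45° increases the annual-mean polar flux by about two-thirds, consistent with the picture of polar ice subliming and migrating to the mid latitudes during high-obliquity excursions.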
Glaciers on volcanoes
Using new MGS and Odyssey data, combined with recent developments in the study of cold-based glaciers, scientists believe glaciers once existed and still exist on some volcanoes. The evidence for this is concentric ridges (moraines dropped by the glacier), a knobby area (caused by ice sublimating), and a smooth section that flows over other deposits (debris-covered glacial ice). The ice could have been deposited when the tilt of Mars changed the climate, causing more moisture to be present in the atmosphere. Studies suggest the glaciation happened in the Late Amazonian period, the latest period in Mars history, and that multiple stages of glaciation probably occurred. The ice present today represents one more resource for the possible future colonization of the planet. Researchers have described glacial deposits on Hecates Tholus, Arsia Mons, Pavonis Mons, and Olympus Mons.
References
- "Mars Global Surveyor Measures Water Clouds". Retrieved March 7, 2009.
- "Flashback: Water on Mars Announced 10 Years Ago". SPACE.com. June 22, 2000. Retrieved December 19, 2010.
- "Science@NASA, The Case of the Missing Mars Water". Retrieved March 7, 2009.
- ISBN 0-312-24551-3
- "PSRD: Ancient Floodwaters and Seas on Mars". Psrd.hawaii.edu. July 16, 2003. Retrieved December 19, 2010.
- "Gamma-Ray Evidence Suggests Ancient Mars Had Oceans | SpaceRef – Your Space Reference". SpaceRef. November 17, 2008. Retrieved December 19, 2010.
- Carr, M., Head, J. (2003). "Oceans on Mars: An assessment of the observational evidence and possible fate". Journal of Geophysical Research 108: 5042. Bibcode:2003JGRE..108.5042C. doi:10.1029/2002JE001963.
- Harwood, William (January 25, 2013). "Opportunity rover moves into 10th year of Mars operations". Space Flight Now. Retrieved February 18, 2013.
- "Water at Martian south pole". March 17, 2004. Retrieved September 29, 2009.
- "ch4". History.nasa.gov. Retrieved December 19, 2010.
- Harrison, K, Grimm, R. (2005). "Groundwater-controlled valley networks and the decline of surface runoff on early Mars". Journal of Geophysical Research 110. Bibcode:2005JGRE..11012S16H. doi:10.1029/2005JE002455.
- Howard, A.; Moore, Jeffrey M.; Irwin, Rossman P. (2005). "An intense terminal epoch of widespread fluvial activity on early Mars: 1. Valley network incision and associated deposits". Journal of Geophysical Research 110. Bibcode:2005JGRE..11012S14H. doi:10.1029/2005JE002459.
- Hugh H. Kieffer (1992). Mars. University of Arizona Press. ISBN 978-0-8165-1257-7. Retrieved March 7, 2011.
- "New Signs That Ancient Mars Was Wet". Space.com. 2008-10-28. Retrieved 2013-02-10.
- "Articles | Was there life on Mars? – ITV News". Itv.com. Retrieved December 19, 2010.
- Glotch, T. and P. Christensen. 2005. Geologic and mineralogical mapping of Aram Chaos: Evidence for water-rich history. J. Geophys. Res. 110. doi:10.1029/2004JE002389
- Irwin, Rossman P.; Howard, Alan D.; Craddock, Robert A.; Moore, Jeffrey M. (2005). "An intense terminal epoch of widespread fluvial activity on early Mars: 2. Increased runoff and paleolake development". Journal of Geophysical Research 110. Bibcode:2005JGRE..11012S15I. doi:10.1029/2005JE002460.
- Fassett, C., Head, III (2008). "Valley network-fed, open-basin lakes on Mars: Distribution and implications for Noachian surface and subsurface hydrology". Icarus 198: 37–56. Bibcode:2008Icar..198...37F. doi:10.1016/j.icarus.2008.06.016.
- Parker, T.; Clifford, S. M.; Banerdt, W. B. (2000). "Argyre Planitia and the Mars Global Hydrologic Cycle" (PDF). Lunar and Planetary Science XXXI: 2033. Bibcode:2000LPI....31.2033P.
- Heisinger, H., Head, J. (2002). "Topography and morphology of the Argyre basin, Mars: implications for its geologic and hydrologic history". Planet. Space Sci. 50 (10–11): 939–981. Bibcode:2002P&SS...50..939H. doi:10.1016/S0032-0633(02)00054-5.
- ISBN 978-0-521-87201-0
- Moore, J., Wilhelms, D. (2001). "Hellas as a possible site of ancient ice-covered lakes on Mars". Icarus 154 (2): 258–276. Bibcode:2001Icar..154..258M. doi:10.1006/icar.2001.6736.
- Weitz, C.; Parker, T. (2000). "New evidence that the Valles Marineris interior deposits formed in standing bodies of water" (PDF). Lunar and Planetary Science XXXI: 1693. Bibcode:2000LPI....31.1693W.
- Morton, O. 2002. Mapping Mars. Picador, NY, NY
- Head, JW; Neukum, G; Jaumann, R; Hiesinger, H; Hauber, E; Carr, M; Masson, P; Foing, B et al. (2005). "Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars". Nature 434 (7031): 346–350. Bibcode:2005Natur.434..346H. doi:10.1038/nature03359. PMID 15772652.
- Head, J. and D. Marchant. 2006. Evidence for global-scale northern mid-latitude glaciation in the Amazonian period of Mars: Debris-covered glacial and valley glacial deposits in the 30 - 50 N latitude band. Lunar. Planet. Sci. 37. Abstract 1127
- Head, J. and D. Marchant. 2006. Modifications of the walls of a Noachian crater in Northern Arabia Terra (24 E, 39 N) during northern mid-latitude Amazonian glacial epochs on Mars: Nature and evolution of Lobate Debris Aprons and their relationships to lineated valley fill and glacial systems. Lunar. Planet. Sci. 37. Abstract 1128
- Head, J., et al. 2006. Modification if the dichotomy boundary on Mars by Amazonian mid-latitude regional glaciation. Geophys. Res Lett. 33
- Garvin, J. et al. 2002. Lunar Planet. Sci: 33. Abstract # 1255.
- "Mars Odyssey: Newsroom". Mars.jpl.nasa.gov. May 28, 2002. Retrieved December 19, 2010.
- Feldman, W. C. (2004). "Global distribution of near-surface hydrogen on Mars". J. Geophysical Research 109. Bibcode:2004JGRE..10909006F. doi:10.1029/2003JE002160.
- "Radar evidence for ice in lobate debris aprons in the mid-northern latitudes of Mars". Planetary.brown.edu. Retrieved 2013-02-10.
- Head, J. et al. 2005. Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars. Nature: 434. 346-350
- "Mars' climate in flux: Mid-latitude glaciers". Brown University, via Marstoday.com. October 17, 2005. Retrieved 2013-02-10.
- (2008-04-23). "Glaciers Reveal Martian Climate Has Been Recently Active | Brown University News and Events". News.brown.edu. Retrieved 2013-02-10. Text " Contact: Richard Lewis " ignored (help); Text " " ignored (help)
- Plaut, J. et al. 2008. Radar Evidence for Ice in Lobate Debris Aprons in the Mid-Northern Latitudes of Mars. Lunar and Planetary Science XXXIX. 2290.pdf
- Holt, J. et al. 2008. Radar Sounding Evidence for Ice within Lobate Debris Aprons near Hellas Basin, Mid-Southern Latitudes of Mars. Lunar and Planetary Science XXXIX. 2441.pdf
- Bright Chunks at Phoenix Lander's Mars Site Must Have Been Ice – Official NASA press release (June 19, 2008)
- Rayl, A. j. s. (June 21, 2008). "Phoenix Scientists Confirm Water-Ice on Mars". The Planetary Society web site. Planetary Society. Retrieved June 23, 2008.
- "Confirmation of Water on Mars". Nasa.gov. June 20, 2008. Retrieved December 19, 2010.
- Whiteway, J.; Komguem, L; Dickinson, C; Cook, C; Illnicki, M; Seabrook, J; Popovici, V; Duck, TJ et al. (2009). "Mars Water-Ice Clouds and Precipitation". Science 325 (5936): 68–70. Bibcode:2009Sci...325...68W. doi:10.1126/science.1172344. PMID 19574386.
- "Liquid Saltwater Is Likely Present On Mars, New Analysis Shows". Sciencedaily.com. 2009-03-20. Retrieved 2011-08-20.
- ISBN 978-1-60598-176-5
- Rennó, Nilton O.; Bos, Brent J.; Catling, David; Clark, Benton C.; Drube, Line; Fisher, David; Goetz, Walter; Hviid, Stubbe F. et al. (2009). "Possible physical and thermodynamical evidence for liquid water at the Phoenix landing site". Journal of Geophysical Research 114. Bibcode:2009JGRC..11400E03R. doi:10.1029/2009JE003362.
- Staff (July 2, 2012). "Ancient Mars Water Existed Deep Underground". Space.com. Retrieved July 3, 2012.
- Forget, F., et al. 2006. Planet Mars Story of Another World. Praxis Publishing, Chichester, UK. ISBN 978-0-387-48925-4
- Carr, M. 2006. The Surface of Mars. Cambridge University Press. ISBN 978-0-521-87201-0
- Craddock, R.; Howard, A. (2002). "The case for rainfall on a warm, wet early Mars". J. Geophys. Res 107: E11.
- Head, J. et al. (2006). "Extensive valley glacier deposits in the northern mid-latitudes of Mars: Evidence for the late Amazonian obliquity-driven climate change". Earth Planet. Sci. Lett. 241: 663–671.
- Madeleine, J. et al. 2007. Mars: A proposed climatic scenario for northern mid-latitude glaciation. Lunar Planet. Sci. 38. Abstract 1778.
- Madeleine, J. et al. (2009). "Amazonian northern mid-latitude glaciation on Mars: A proposed climate scenario". Icarus 203: 300–405.
- Mischna, M. et al. 2003. On the orbital forcing of martian water and CO2 cycles: A general circulation model study with simplified volatile schemes. J. Geophys. Res. 108. (E6). 5062.
- Martian gullies could be scientific gold mines. Leonard David, 11/13/2006.
- Head, JW; Marchant, DR; Kreslavsky, MA (2008). "Formation of gullies on Mars: Link to recent climate history and insolation microenvironments implicate surface water flow origin". PNAS 105 (36): 13258–63. Bibcode:2008PNAS..10513258H. doi:10.1073/pnas.0803760105. PMC 2734344. PMID 18725636.
- "NASA Finds Possible Signs of Flowing Water on Mars". voanews.com. Retrieved August 5, 2011.
- "Is Mars Weeping Salty Tears?". news.sciencemag.org. Retrieved August 5, 2011.
- "Mars Gullies May Have Been Formed By Flowing Liquid Brine". Sciencedaily.com. 2009-02-15. Retrieved 2013-02-10.
- Osterloo, MM; Hamilton, VE; Bandfield, JL; Glotch, TD; Baldridge, AM; Christensen, PR; Tornabene, LL; Anderson, FS (2008). "Chloride-Bearing Materials in the Southern Highlands of Mars". Science 319 (5870): 1651–1654. Bibcode:2008Sci...319.1651O. doi:10.1126/science.1150690. PMID 18356522.
- Brown, Dwayne; Cole, Steve; Webster, Guy; Agle, D.C. (September 27, 2012). "NASA Rover Finds Old Streambed On Martian Surface". NASA. Retrieved September 28, 2012.
- NASA (September 27, 2012). "NASA's Curiosity Rover Finds Old Streambed on Mars - video (51:40)". NASAtelevision. Retrieved September 28, 2012.
- Chang, Alicia (September 27, 2012). "Mars rover Curiosity finds signs of ancient stream". Associated Press. Retrieved September 27, 2012.
- Morton, Oliver (2002). Mapping Mars: Science, Imagination, and the Birth of a World. New York: Picador USA. p. 98. ISBN 0-312-24551-3.
- Online Atlas of Mars
- Catalog Page for PIA03467
- "Mars Exploration: Missions". Marsprogram.jpl.nasa.gov. Retrieved December 19, 2010.
- "ch5". History.nasa.gov. Retrieved December 19, 2010.
- "ch7". History.nasa.gov. Retrieved December 19, 2010.
- Raeburn, P. 1998. Uncovering the Secrets of the Red Planet Mars. National Geographic Society. Washington D.C.
- Moore, P. et al. 1990. The Atlas of the Solar System. Mitchell Beazley Publishers NY, NY.
- Arvidson, R; Gooding, James L.; Moore, Henry J. (1989). "The Martian surface as Imaged, Sampled, and Analyzed by the Viking Landers". Review of Geophysics 27: 39–60. Bibcode:1989RvGeo..27...39A. doi:10.1029/RG027i001p00039.
- Clark, B.; Baird, AK; Rose Jr, HJ; Toulmin P, 3rd; Keil, K; Castro, AJ; Kelliher, WC; Rowe, CD et al. (1976). "Inorganic Analysis of Martian Samples at the Viking Landing Sites". Science 194 (4271): 1283–1288. Bibcode:1976Sci...194.1283C. doi:10.1126/science.194.4271.1283. PMID 17797084.
- Baird, A.; Toulmin P, 3rd; Clark, BC; Rose Jr, HJ; Keil, K; Christian, RP; Gooding, JL (1976). "Mineralogic and Petrologic Implications of Viking Geochemical Results From Mars: Interim Report". Science 194 (4271): 1288–1293. Bibcode:1976Sci...194.1288B. doi:10.1126/science.194.4271.1288. PMID 17797085.
- Hoefen, T.; Clark, RN; Bandfield, JL; Smith, MD; Pearl, JC; Christensen, PR (2003). "Discovery of Olivine in the Nili Fossae Region of Mars". Science 302 (5645): 627–630. Bibcode:2003Sci...302..627H. doi:10.1126/science.1089647. PMID 14576430.
- Hamiliton, W.; Christensen, Philip R.; McSween, Harry Y. (1997). "Determination of Martian meteorite lithologies and mineralogies using vibrational spectroscopy". Journal of Geophysical Research 102: 25593–25603. Bibcode:1997JGR...10225593H. doi:10.1029/97JE01874.
- Henderson, Mark (December 7, 2006). "Water has been flowing on Mars within past five years, Nasa says". The Times (UK). Retrieved March 17, 2007.
- Mars photo evidence shows recently running water. The Christian Science Monitor. Retrieved on March 17, 2007
- Malin, Michael C.; Edgett, Kenneth S. (2001). "Mars Global Surveyor Mars Orbiter Camera: Interplanetary cruise through primary mission". Journal of Geophysical Research 106: 23429–23570. Bibcode:2001JGR...10623429M. doi:10.1029/2000JE001455.
- "Mars Global Surveyor MOC2-1618 Release". Msss.com. doi:10.1126/science.288.5475.2330. Retrieved December 19, 2010.
- Malin, M.; Edgett, KS; Posiolova, LV; McColley, SM; Dobrea, EZ (2006). "Present-Day Impact Cratering Rate and Contemporary Gully Activity on Mars". Science 314 (5805): 1573–1577. Bibcode:2006Sci...314.1573M. doi:10.1126/science.1135156. PMID 17158321.
- "Changing Mars Gullies Hint at Recent Flowing Water". SPACE.com. December 6, 2006. Retrieved December 19, 2010.
- "Mars Global Surveyor MOC2-239 Release". Mars.jpl.nasa.gov. Retrieved December 19, 2010.
- "HiRISE | Slope Streaks in Marte Vallis (PSP_003570_1915)". Hirise.lpl.arizona.edu. Retrieved December 19, 2010.
- "spcae.com". spcae.com. Retrieved December 19, 2010.
- Malin (2010). "An overview of the 1985–2006 Mars Orbiter Camera science investigation". The Mars Journal 5: 1. Bibcode:2010IJMSE...5....1M. doi:10.1555/mars.2010.0001.
- Zimbelman, J., Griffin, L. (2010). "HiRISE images of yardangs and sinuous ridges in the lower member of the Medusae Fossae Formation, Mars". Icarus 205: 198–210. Bibcode:2010Icar..205..198Z. doi:10.1016/j.icarus.2009.04.003.
- Newsom, H.; Lanza, Nina L.; Ollila, Ann M.; Wiseman, Sandra M.; Roush, Ted L.; Marzo, Giuseppe A.; Tornabene, Livio L.; Okubo, Chris H. et al. (2010). "Inverted channel deposits on the floor of Miyamoto crater, Mars". Icarus 205: 64–72. Bibcode:2010Icar..205...64N. doi:10.1016/j.icarus.2009.03.030.
- Weitz, C.; Milliken, R.E.; Grant, J.A.; McEwen, A.S.; Williams, R.M.E.; Bishop, J.L.; Thomson, B.J. (2010). "Mars Reconnaissance Orbiter observations of light-toned layered deposits and associated fluvial landforms on the plateaus adjacent to Valles Marineris". Icarus 205: 73–102. Bibcode:2010Icar..205...73W. doi:10.1016/j.icarus.2009.04.017.
- "Icarus, Volume 210, Issue 2, Pages 539–1000 (December 2010)". ScienceDirect. Retrieved December 19, 2010.
- Fairen, A.; Davila, AF; Gago-Duport, L; Amils, R; McKay, CP (2009). "Stability against freezing of aqueous solutions on early Mars". Nature 459 (7245): 401–404. Bibcode:2009Natur.459..401F. doi:10.1038/nature07978. PMID 19458717.
- Atmospheric and Meteorological Properties, NASA
- Golombek, M.; Cook, RA; Economou, T; Folkner, WM; Haldemann, AF; Kallemeyn, PH; Knudsen, JM; Manning, RM et al. (1997). "Overview of the Mars Pathfinder Mission and Assessment of Landing Site Predictions". Science 278 (5344): 1743–1748. Bibcode:1997Sci...278.1743G. doi:10.1126/science.278.5344.1743. PMID 9388167.
- Murche, S. et al. (1993). "Spatial Variations in the Spectral Properties of Bright Regions on Mars". Icarus 105 (2): 454–468. Bibcode:1993Icar..105..454M. doi:10.1006/icar.1993.1141.
- "Home Page for Bell (1996) Geochemical Society paper". Marswatch.tn.cornell.edu. Retrieved December 19, 2010.
- Feldman, WC; Boynton, WV; Tokar, RL; Prettyman, TH; Gasnault, O; Squyres, SW; Elphic, RC; Lawrence, DJ et al. (2002). "Global Distribution of Neutrons from Mars: Results from Mars Odyssey". Science 297 (5578): 75–78. Bibcode:2002Sci...297...75F. doi:10.1126/science.1073541. PMID 12040088.
- Mitrofanov, I.; Anfimov, D; Kozyrev, A; Litvak, M; Sanin, A; Tret'yakov, V; Krylov, A; Shvetsov, V et al. (2002). "Maps of Subsurface Hydrogen from the High Energy Neutron Detector, Mars Odyssey". Science 297 (5578): 78–81. Bibcode:2002Sci...297...78M. doi:10.1126/science.1073616. PMID 12040089.
- Boynton, W.; Feldman, WC; Squyres, SW; Prettyman, TH; Bruckner, J; Evans, LG; Reedy, RC; Starr, R et al. (2002). "Distribution of Hydrogen in the Near Surface of Mars: Evidence for Subsurface Ice Deposits". Science 297 (5578): 81–85. Bibcode:2002Sci...297...81B. doi:10.1126/science.1073722. PMID 12040090.
- Smith, P. H.; Tamppari, L.; Arvidson, R. E.; Bass, D.; Blaney, D.; Boynton, W.; Carswell, A.; Catling, D. et al. (2008). "Introduction to special section on the phoenix mission: Landing site characterization experiments, mission overviews, and expected science". J. Geophysical Research 113. Bibcode:2008JGRE..11300A18S. doi:10.1029/2008JE003083.
- "The Dirt on Mars Lander Soil Findings". SPACE.com. Retrieved December 19, 2010.
- Head, J.; Neukum, G.; Jaumann, R.; Hiesinger, H.; Hauber, E.; Carr, M.; Masson, P.; Foing, B. et al. (2005). "Tropical to mid-latitude snow and ice accumulation, flow and glaciation on Mars". Nature 434 (7031): 346–350. Bibcode:2005Natur.434..346H. doi:10.1038/nature03359. PMID 15772652.
- "Mars' climate in flux: Mid-latitude glaciers | Mars Today – Your Daily Source of Mars News". Mars Today. October 17, 2005. Retrieved December 19, 2010.
- Richard Lewis (April 23, 2008). "Glaciers Reveal Martian Climate Has Been Recently Active | Brown University Media Relations". News.brown.edu. Retrieved December 19, 2010.
- Plaut, Jeffrey J.; Safaeinili, Ali; Holt, John W.; Phillips, Roger J.; Head, James W.; Seu, Roberto; Putzig, Nathaniel E.; Frigeri, Alessandro (2009). "Radar Evidence for Ice in Lobate Debris Aprons in the Mid-Northern Latitudes of Mars". Geophysical Research Letters 36 (2). Bibcode:2009GeoRL..3602203P. doi:10.1029/2008GL036379.
- Holt, J. W.; Safaeinili, A.; Plaut, J. J.; Young, D. A.; Head, J. W.; Phillips, R. J.; Campbell, B. A.; Carter, L. M. et al. (2008). "Radar Sounding Evidence for Ice within Lobate Debris Aprons near Hellas Basin, Mid-Southern Latitudes of Mars". Lunar and Planetary Science. XXXIX: 2441. Bibcode:2008LPI....39.2441H.
- Plaut, Jeffrey J.; Safaeinili, Ali; Holt, John W.; Phillips, Roger J.; Head, James W.; Seu, Roberto; Putzig, Nathaniel E.; Frigeri, Alessandro (2009). "Radar evidence for ice in lobate debris aprons in the mid-northern latitudes of Mars". Geophysical Research Letters 36 (2). Bibcode:2009GeoRL..3602203P. doi:10.1029/2008GL036379.
- "Reull Vallis (Released 22 October 2002) | Mars Odyssey Mission THEMIS". Themis.asu.edu. Retrieved December 19, 2010.
- "Dao Vallis (Released 7 August 2002) | Mars Odyssey Mission THEMIS". Themis.asu.edu. Retrieved December 19, 2010.
- Mellon, M., Jakosky, B. (1993). "Geographic variations in the thermal and diffusive stability of ground ice on Mars". J. Geophysical Research 98: 3345–3364. Bibcode:1993JGR....98.3345M. doi:10.1029/92JE02355.
- Johnson, John (August 1, 2008). "There's water on Mars, NASA confirms". Los Angeles Times. Retrieved August 1, 2008.
- Heldmann et al., Jennifer L. (May 7, 2005). "Formation of Martian gullies by the action of liquid water flowing under current Martian environmental conditions" (PDF). Journal of Geophysical Research 110: Eo5004. Bibcode:2005JGRE..11005004H. doi:10.1029/2004JE002261. Retrieved September 14, 2008 'conditions such as now occur on Mars, outside of the temperature-pressure stability regime of liquid water' ... 'Liquid water is typically stable at the lowest elevations and at low latitudes on the planet because the atmospheric pressure is greater than the vapor pressure of water and surface temperatures in equatorial regions can reach 273 K for parts of the day [Haberle et al., 2001]'
- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (June 3, 2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters 33 (11): L11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946. Retrieved August 12, 2007 'Martian high-latitude zones are covered with a smooth, layered ice-rich mantle'
- Hecht, MH; Kounaves, SP; Quinn, RC; West, SJ; Young, SM; Ming, DW; Catling, DC; Clark, BC et al. (2009). "Detection of Perchlorate and the Soluble Chemistry of Martian Soil at the Phoenix Lander Site". Science 325 (5936): 64–67. Bibcode:2009Sci...325...64H. doi:10.1126/science.1172466. PMID 19574385.
- Chang, Kenneth (2009) Blobs in Photos of Mars Lander Stir a Debate: Are They Water?, New York Times (online), March 16, 2009, retrieved April 4, 2009;
- http://articles.latimes.com/2009/mar/14/nation/na-marswater12. Los Angeles Times, March 14, 2009.
- "Astrobiology Top 10: Too Salty to Freeze". Astrobio.net. Retrieved December 19, 2010.
- Smith, PH; Tamppari, LK; Arvidson, RE; Bass, D; Blaney, D; Boynton, WV; Carswell, A; Catling, DC et al. (2009). "H2O at the Phoenix Landing Site". Science 325 (5936): 58–61. Bibcode:2009Sci...325...58S. doi:10.1126/science.1172339. PMID 19574383.
- "The Dirt on Mars Lander Soil Findings". Space.com. Retrieved December 19, 2010.
- "CSA – News Release". Asc-csa.gc.ca. July 2, 2009. Retrieved December 19, 2010.
- Boynton, WV; Ming, DW; Kounaves, SP; Young, SM; Arvidson, RE; Hecht, MH; Hoffman, J; Niles, PB et al. (2009). "Evidence for Calcium Carbonate at the Mars Phoenix Landing Site". Science 325 (5936): 61–64. Bibcode:2009Sci...325...61B. doi:10.1126/science.1172768. PMID 19574384.
- "Audio Recording of Phoenix Media Telecon for Aug. 5, 2008". Jet Propulsion Laboratory (NASA). August 5, 2008. Retrieved July 14, 2009.
- "Mars Exploration Rover Mission: Press Releases". Marsrovers.jpl.nasa.gov. March 5, 2004. Retrieved December 19, 2010.
- "NASA - Mars Rover Spirit Unearths Surprise Evidence of Wetter Past". Nasa.gov. 2007-05-21. Retrieved 2013-02-10.
- Amos, Jonathan (December 11, 2007). "Mars robot unearths microbe clue". NASA says its robot rover Spirit has made one of its most significant discoveries on the surface of Mars. (BBC News). Retrieved December 12, 2007.
- Bertster, Guy (December 10, 2007). "Mars Rover Investigates Signs of Steamy Martian Past". Press Release. Jet Propulsion Laboratory, Pasadena, California. Retrieved December 12, 2007.
- "Opportunity Rover Finds Strong Evidence Meridiani Planum Was Wet". Retrieved July 8, 2006.
- Klingelhofer, G., et al. (2005) Lunar Planet. Sci. XXXVI abstr. 2349
- Schroder, C., et al. (2005) European Geosciences Union, General Assembly, Geophysical Research abstr., Vol. 7, 10254, 2005
- Morris, S., et al. Mössbauer mineralogy of rock, soil, and dust at Gusev crater, Mars: Spirit's journey through weakly altered olivine basalt on the plains and pervasively altered basalt in the Columbia Hills. J. Geophys. Res. 111.
- Ming,D., et al. 2006 Geochemical and mineralogical indicators for aqueous processes in the Columbia Hills of Gusev crater, Mars. J. Geophys. Res.111
- Bell, J (ed.) The Martian Surface. 2008. Cambridge University Press. ISBN 978-0-521-86698-9
- "Outcrop of long-sought rare rock on Mars found". Sciencedaily.com. 2010-06-04. doi:10.1126/science.1189667. Retrieved 2013-02-10.
- Richard V. Morris, Steven W. Ruff, Ralf Gellert, Douglas W. Ming, Raymond E. Arvidson, Benton C. Clark, D. C. Golden, Kirsten Siebach, Göstar Klingelhöfer, Christian Schröder, Iris Fleischer, Albert S. Yen, Steven W. Squyres. Identification of Carbonate-Rich Outcrops on Mars by the Spirit Rover. Science, June 3, 2010 doi:10.1126/science.1189667
- Brown, Dwayne; Webster, Guy; Jones, Nance Neal (December 3, 2012). "NASA Mars Rover Fully Analyzes First Martian Soil Samples". NASA. Retrieved December 3, 2012.
- Chang, Ken (December 3, 2012). "Mars Rover Discovery Revealed". New York Times. Retrieved December 3, 2012.
- Webster, Guy; Brown, Dwayne (March 18, 2013). "Curiosity Mars Rover Sees Trend In Water Presence". NASA. Retrieved March 20, 2013.
- Rincon, Paul (March 19, 2013). "Curiosity breaks rock to reveal dazzling white interior". BBC. Retrieved March 19, 2013.
- Staff (March 20, 2013). "Red planet coughs up a white rock, and scientists freak out". MSN. Retrieved March 20, 2013.
- "HiRISE | Sinuous Ridges Near Aeolis Mensae". Hiroc.lpl.arizona.edu. January 31, 2007. Retrieved December 19, 2010.
- "HiRISE | High Resolution Imaging Science Experiment". Hirise.lpl.arizona.edu?psp_008437_1750. Retrieved December 19, 2010.
- Grotzinger, J. and R. Milliken (eds.) 2012. Sedimentary Geology of Mars. SEPM
- "Target Zone: Nilosyrtis? | Mars Odyssey Mission THEMIS". Themis.asu.edu. Retrieved December 19, 2010.
- Head, James W.; Mustard, John F.; Kreslavsky, Mikhail A.; Milliken, Ralph E.; Marchant, David R. (2003). "Recent ice ages on Mars". Nature 426 (6968): 797–802. doi:10.1038/nature02114. PMID 14685228.
- Head, J. et al. 2008. Formation of gullies on Mars: Link to recent climate history and insolation microenvironments implicate surface water flow origin. PNAS: 105. 13258-13263.
- NASA/Jet Propulsion Laboratory (December 18, 2003). "Mars May Be Emerging From An Ice Age". ScienceDaily. Retrieved February 19, 2009, from http://www.sciencedaily.com/releases/2003/12/031218075443.htm
- Malin, M.; Edgett, KS; Posiolova, LV; McColley, SM; Dobrea, EZ (2006). "Present-day impact cratering rate and contemporary gully activity on Mars". Science 314 (5805): 1573–1577. Bibcode:2006Sci...314.1573M. doi:10.1126/science.1135156. PMID 17158321.
- Kolb, K.; Pelletier, Jon D.; McEwen, Alfred S. (2010). "Modeling the formation of bright slope deposits associated with gullies in Hale Crater, Mars: Implications for recent liquid water". Icarus 205: 113–137. Bibcode:2010Icar..205..113K. doi:10.1016/j.icarus.2009.09.009.
- Byrne, S; Dundas, CM; Kennedy, MR; Mellon, MT; McEwen, AS; Cull, SC; Daubar, IJ; Shean, DE et al. (2009). "Distribution of mid-latitude ground ice on Mars from new impact craters". Science 325 (5948): 1674–1676. Bibcode:2009Sci...325.1674B. doi:10.1126/science.1175307. PMID 19779195.
- "Water Ice Exposed in Mars Craters". SPACE.com. Retrieved December 19, 2010.
- Milazzo, M.; Keszthelyi, L.P.; Jaeger, W.L.; Rosiek, M.; Mattson, S.; Verba, C.; Beyer, R.A.; Geissler, P.E. et al. (2009). "The discovery of columnar jointing on Mars". Geology 37 (2): 171–174. doi:10.1130/G25187A.1.
- Milazzo, M.; Keszthelyi, L. P.; McEwen, A. S.; Jaeger, W. (2003). "The formation of columnar joints on Earth and Mars (abstract #2120)". Lunar and Planetary Science. XXXIV: 2120. Bibcode:2003LPI....34.2120M.
- Mangold, C.; Quantin, C; Ansan, V; Delacourt, C; Allemand, P (2004). "Evidence for precipitation on Mars from dendritic valleys in the Valles Marineris area". Science 305 (5680): 78–81. Bibcode:2004Sci...305...78M. doi:10.1126/science.1097549. PMID 15232103.
- Murchie, Scott; Roach, Leah; Seelos, Frank; Milliken, Ralph; Mustard, John; Arvidson, Raymond; Wiseman, Sandra; Lichtenberg, Kimberly et al. (2009). "Evidence for the origin of layered deposits in Candor Chasma, Mars, from mineral composition and hydrologic modeling". Journal of Geophysical Research 114. Bibcode:2009JGRE..11400D05M. doi:10.1029/2009JE003343.
- Edgett, E. (2005). "The sedimentary rocks of Sinus Meridiani: Five key observations from data acquired by the Mars Global Surveyor and Mars Odyssey orbiters". Mars 1: 5–58. Bibcode:2005Mars....1....5E. doi:10.1555/mars.2005.0002.
- Hartmann, W. 2003. A Traveler's Guide to Mars. Workman Publishing. NY NY.
- Andrews‐Hanna, J. C., R. J. Phillips, and M. T. Zuber (2007), Meridiani Planum and the global hydrology of Mars, Nature, 446, 163–166, doi:10.1038/nature05594.
- Andrews‐Hanna, J. C., M. T. Zuber, R. E. Arvidson, and S. M. Wiseman (2010), Early Mars hydrology: Meridiani playa deposits and the sedimentary record of Arabia Terra, J. Geophys. Res., 115, E06002, doi:10.1029/2009JE003485.
- Grotzinger, J. P., et al. (2005), Stratigraphy and sedimentology of a dry to wet eolian depositional system, Burns formation, Meridiani Planum, Mars, Earth Planet. Sci. Lett., 240, 11–72, doi:10.1016/j.epsl.2005.09.039
- McLennan, S. M., et al. (2005), Provenance and diagenesis of the evaporitebearing Burns formation, Meridiani Planum, Mars, Earth Planet. Sci. Lett., 240, 95–121, doi:10.1016/j.epsl.2005.09.041
- Squyres, S. W., and A. H. Knoll (2005), Sedimentary rocks at Meridiani Planum: Origin, diagenesis, and implications for life on Mars, Earth Planet. Sci. Lett., 240, 1–10, doi:10.1016/j.epsl.2005.09.038.
- Squyres, S. W., et al. (2006), Two years at Meridiani Planum: Results from the Opportunity rover, Science, 313, 1403–1407, doi:10.1126/science.
- Wiseman, M.; Andrews-Hanna, J. C.; Arvidson, R. E.; Mustard, J. F.; Zabrusky, K. J. "Distribution of Hydrated Sulfates Across Arabia Terra Using CRISM Data: Implications for Martian Hydrology". 42nd Lunar and Planetary Science Conference (2011). 2133.pdf
- "Water ice in crater at Martian north pole" – July 27, 2005 ESA Press release. Retrieved March 17, 2006.
- "Ice lake found on the Red Planet" – July 29, 2005 BBC story. Retrieved March 17, 2006.
- Cabrol, N. and E. Grin (eds.). 2010. Lakes on Mars. Elsevier. NY
- Murray, John B.; et al. (2005). "Evidence from the Mars Express High Resolution Stereo Camera for a frozen sea close to Mars' equator". Nature 434 (7031): 352–356. Bibcode:2005Natur.434..352M. doi:10.1038/nature03379. PMID 15772653.
- Orosei, R.; Cartacci, M.; Cicchetti, A.; Noschese, R.; Federico, C.; Frigeri, A.; Flamini, E.; Holt, J. W. et al. (2008). "Radar subsurface sounding over the putative frozen sea in Cerberus Palus, Mars" (PDF). Lunar and Planetary Science. XXXIX: 1. Bibcode:2007AGUFM.P14B..05O. doi:10.1109/ICGPR.2010.5550143. ISBN 978-1-4244-4604-9.
- ISBN 978-0-521-85226-5
- "ESA – Mars Express – Breathtaking views of Deuteronilus Mensae on Mars". Esa.int. March 14, 2005. Retrieved December 19, 2010.
- "HiRISE | Glacier? (ESP_018857_2225)". Uahirise.org. Retrieved December 19, 2010.
- Shean, David E. (2005). "Origin and evolution of a cold-based tropical mountain glacier on Mars: The Pavonis Mons fan-shaped deposit". Journal of Geophysical Research 110. Bibcode:2005JGRE..11005001S. doi:10.1029/2004JE002360.
- "HiRISE | Fretted Terrain Valley Traverse (PSP_009719_2230)". Hirise.lpl.arizona.edu. Retrieved December 19, 2010.
- "Mars' South Pole Ice Deep and Wide". NASA News & Media Resources. NASA. March 15, 2007. Retrieved 2013-03-18.
- "Water at Martian south pole". European Space Agency (ESA). March 17, 2004. Retrieved September 11, 2009.
- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (June 3, 2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters 33 (11): L11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946. Retrieved August 1, 2008
- "Radar Map of Buried Mars Layers Matches Climate Cycles". OnOrbit. Retrieved December 19, 2010.
- "Polygonal Patterned Ground: Surface Similarities Between Mars and Earth | SpaceRef – Your Space Reference". SpaceRef. September 28, 2002. Retrieved December 19, 2010.
- Squyres, S. (1989). "Urey Prize Lecture: Water on Mars". Icarus 79 (2): 229–288. Bibcode:1989Icar...79..229S. doi:10.1016/0019-1035(89)90078-X.
- Lefort, A.; Russell, P.S.; Thomas, N. (2010). "Scalloped terrains in the Peneus and Amphitrites Paterae region of Mars as observed by HiRISE". Icarus 205: 259–268. Bibcode:2010Icar..205..259L. doi:10.1016/j.icarus.2009.06.005.
- "NASA – Turbulent Lava Flow in Mars' Athabasca Valles". Nasa.gov. January 11, 2010. Retrieved December 19, 2010.
- "HiRISE | Dissected Mantled Terrain (PSP_002917_2175)". Hirise.lpl.arizona.edu. Retrieved December 19, 2010.
- Lefort, A.; Russell, P.S.; Thomas, N. (2010). "Scalloped terrains in the Peneus and Amphitrites Paterae region of Mars as observed by HiRISE". Icarus 205: 259–268. Bibcode:2010Icar..205..259L. doi:10.1016/j.icarus.2009.06.005.
- NASA Spacecraft Data Suggest Water Flowing on Mars, NASA, August 4, 2011
- "Shergotty Meteorite – JPL, NASA". .jpl.nasa.gov. Retrieved December 19, 2010.
- Treiman, A (2005). "The nakhlite meteorites: Augite-rich igneous rocks from Mars". Chemie der Erde – Geochemistry 65 (3): 203. Bibcode:2005ChEG...65..203T. doi:10.1016/j.chemer.2005.01.004. Retrieved September 8, 2006.
- McKay, D.; Gibson Jr, EK; Thomas-Keprta, KL; Vali, H; Romanek, CS; Clemett, SJ; Chillier, XD; Maechling, CR et al. (1996). "Search for Past Life on Mars: Possible Relic Biogenic Activity in Martian Meteorite ALH84001". Science 273 (5277): 924–930. Bibcode:1996Sci...273..924M. doi:10.1126/science.273.5277.924. PMID 8688069.
- Gibbs, W.; Powell, C. (August 19, 1996). "Bugs in the Data?". Scientific American.
- "Controversy Continues: Mars Meteorite Clings to Life – Or Does It?". SPACE.com. March 20, 2002. Retrieved December 19, 2010.
- Bada, J.; Glavin, DP; McDonald, GD; Becker, L (1998). "A Search for Endogenous Amino Acids in Martian Meteorite ALH84001". Science 279 (5349): 362–365. Bibcode:1998Sci...279..362B. doi:10.1126/science.279.5349.362. PMID 9430583.
- Goldspiel, J., Squyres, S. (2000). "Groundwater sapping and valley formation on Mars". Icarus 148: 176–192. Bibcode:2000Icar..148..176G. doi:10.1006/icar.2000.6465.
- McCauley, J. 1978. Geologic map of the Coprates quadrangle of Mars. U.S. Geol. Misc. Inv. Map I-897
- Nedell, S.; Squyres, Steven W.; Andersen, David W. (1987). "Origin and evolution of the layered deposits in the Valles Marineris, Mars". Icarus 70 (3): 409–441. Bibcode:1987Icar...70..409N. doi:10.1016/0019-1035(87)90086-8.
- "Spectacular Mars images reveal evidence of ancient lakes". Sciencedaily.com. January 4, 2010. Retrieved December 19, 2010.
- Gupta, Sanjeev; Warner, Nicholas; Kim, Jung-Rack; Lin, Shih-Yuan; Muller, Jan-Peter (2010). "Hesperian equatorial thermokarst lakes in Ares Vallis as evidence for transient warm conditions on Mars". Geology 38: 71–74. doi:10.1130/G30579.1.
- Di Achille, Gaetano; Hynek, Brian M. (2010). "Ancient ocean on Mars supported by global distribution of deltas and valleys". Nature Geoscience 3 (7): 459. Bibcode:2010NatGe...3..459D. doi:10.1038/ngeo891.
- Clifford, S. M., Parker, T. J. (2001). "The Evolution of the Martian Hydrosphere: Implications for the Fate of a Primordial Ocean and the Current State of the Northern Plains". Icarus 154: 40–79. Bibcode:2001Icar..154...40C. doi:10.1006/icar.2001.6671.
- Baker, V. R., Strom, R. G., Gulick, V. C., Kargel, J. S., Komatsu, G., Kale, V. S. (1991). "Ancient oceans, ice sheets and the hydrological cycle on Mars". Nature 352 (6336): 589–594. Bibcode:1991Natur.352..589B. doi:10.1038/352589a0.
- Read, Peter L.; Lewis, S. R. (2004). The Martian Climate Revisited: Atmosphere and Environment of a Desert Planet (Paperback). Chichester, UK: Praxis. ISBN 978-3-540-40743-0. Retrieved December 19, 2010.
- "Martian North Once Covered by Ocean". Astrobio.net. Retrieved December 19, 2010.
- "New Map Bolsters Case for Ancient Ocean on Mars". SPACE.com. November 23, 2009. Retrieved December 19, 2010.
- Zuber, Maria T. (2007). "Planetary Science: Mars at the tipping point". Nature 447 (7146): 785–786. Bibcode:2007Natur.447..785Z. doi:10.1038/447785a. PMID 17568733.
- Smith, D. et al. (1999). "The Gravity Field of Mars: Results from Mars Global Surveyor". Science 284 (5437): 94–97. Bibcode:1999Sci...286...94S. doi:10.1126/science.286.5437.94.
- "Ancient ocean may have covered third of Mars". Sciencedaily.com. June 14, 2010. Retrieved December 19, 2010.
- Fairén, Alberto G.; Davila, Alfonso F.; Gago-Duport, Luis; Haqq-Misra, Jacob D.; Gil, Carolina; McKay, Christopher P.; Kasting, James F. (28 August 2011). "Cold glacial oceans would have inhibited phyllosilicate sedimentation on early Mars". Nature Geoscience 4 (10): 667. Bibcode:2011NatGe...4..667F. doi:10.1038/ngeo1243.
- "Mars Ocean Hypothesis Hits the Shore " Articles " NASA Astrobiology". Astrobiology.nasa.gov. January 26, 2001. Retrieved December 19, 2010.
- Kerr, Richard A. (2007). "Is Mars Looking Drier and Drier for Longer and Longer?". Science 317 (5845): 1673. doi:10.1126/science.317.5845.1673. PMID 17885108.
- Cabrol, N., Grin, E. (2001). "The Evolution of Lacustrine Environments on Mars: Is Mars Only Hydrologically Dormant?". Icarus 149 (2): 291–328. Bibcode:2001Icar..149..291C. doi:10.1006/icar.2000.6530.
- "Once-Habitable Lake Found on Mars". SPACE.com. March 6, 2008. Retrieved December 19, 2010.
- Gulick, V., Baker, V. (1989). "Fluvial valleys and martian palaeoclimates". Nature 341 (6242): 514–516. Bibcode:1989Natur.341..514G. doi:10.1038/341514a0.
- Head, J.; Kreslavsky, M. A.; Ivanov, M. A.; Hiesinger, H.; Fuller, E. R.; Pratt, S. (2001). "Water in Middle Mars History: New Insights From MOLO Data". American Geophysical Union. Bibcode:2001AGUSM...P31A02H.
- Head, J. et al. (2001). "Exploration for standing Bodies of Water on Mars: When Were They There, Where did They go, and What are the Implications for Astrobiology?". American Geophysical Union 21: 03. Bibcode:2001AGUFM.P21C..03H.
- "Mars Rover's Meteorite Discovery Triggers Questions". Space.com. Retrieved 2013-02-10.
- Source: NASA HQ Posted Tuesday, October 28, 2008 (2008-10-28). "NASA Mars Reconnaissance Orbiter Reveals Details of a Wetter Mars | SpaceRef - Your Space Reference". SpaceRef. Retrieved 2013-02-10.
- "Amazing Mars: Discoveries in 2008". Space.com. 2008-12-30. Retrieved 2013-02-10.
- "What Mars Fossils Might Look Like". SPACE.com. May 1, 2008. Retrieved December 19, 2010.
- http://blogs.discover magazine.com/80beats/2008/05/30/mars-water-suited-for-pickles-not-for-life-2/
- Mittlefehldt, D. (1994). "ALH84001, a cumulate orthopyroxenite member of the martian meteorite clan". Meteortics 29: 214–221.
- [dead link]
- Boston, P.; Ivanov, MV; McKay, CP (1992). "On the Possibility of Chemosynthetic Ecosystems in Subsurface Habitats on Mars". Icarus 95 (2): 300–308. Bibcode:1992Icar...95..300B. doi:10.1016/0019-1035(92)90045-9. PMID 11539823.
- Thompson, Andrea (April 14, 2009). "Mars Sprinkled with Salty Mysteries". SPACE.com. Retrieved October 9, 2012.
- Jpl.Nasa.Gov (2009-07-02). "NASA Phoenix Results Point to Martian Climate Cycles - NASA Jet Propulsion Laboratory". Jpl.nasa.gov. Retrieved 2013-02-10.
- "Astrobiology Magazine". Astrobio.net. Retrieved December 19, 2010.
- Cowen, R. (2003). "Martian Invasion". Science News 164 (19): 298–300. doi:10.2307/4018828. JSTOR 4018828.
- McKay, C. P. (1997). "Looking for life on Mars". Astronomy 25 (8): 38–43. Bibcode:1997Ast....25...38F.
- Gilichinsky, D.; Wilson, GS; Friedmann, EI; McKay, CP; Sletten, RS; Rivkina, EM; Vishnivetskaya, TA; Erokhina, LG et al. (2007). "Microbal Populations in Antarctic Permafrost: Biodiversity, State, Age, and Implication for Astrobiology". Astrobiology 7 (2): 275–311. Bibcode:2007AsBio...7..275G. doi:10.1089/ast.2006.0012. PMID 17480161.
- Raeburn, P. 1998. Mars. National Geographic Society. Washington, D.C.
- Allen, C.; Albert, FG; Chafetz, HS; Combie, J; Graham, CR; Kieft, TL; Kivett, SJ; McKay, DS et al. (2000). "Microscopic Physical Biomarkers in Carbonate Hot Springs: Implications in the Search fo Life on Mars". Icarus 147 (1): 49–67. Bibcode:2000Icar..147...49A. doi:10.1006/icar.2000.6435. PMID 11543582.
- Fredrickson, J., Onstott, T. (1996). "Microbes Deep inside the Earth". Scientific American 275 (4): 68–73. doi:10.1038/scientificamerican1096-68. PMID 8797299.
- Pedersen, K. (1993). "The deep subterranean biosphere". Earth-Science Reviews 34 (4): 243–260. Bibcode:1993ESRv...34..243P. doi:10.1016/0012-8252(93)90058-F.
- Stevens, T, McKinley, J. (1995). "Lithoautotrophic Microbial Ecosystems in Deep Basalt Aquifers". Science 270 (5235): 450–454. Bibcode:1995Sci...270..450S. doi:10.1126/science.270.5235.450.
- Payne, M; Farmer, J. (2001). "Volcanic-Ice Interactions and the Exploration for Extant Martian Life". American Geophysical Union 22: 0549. Bibcode:2001AGUFM.P22B0549P.
- "Martian Life Appears Less Likely : Discovery News". Dsc.discovery.com. August 12, 2009. Retrieved December 19, 2010.
- "Tough Microbe Has The Right Stuff for Mars". LiveScience. 2009-07-18. Retrieved 2013-02-10.
- Huber, R.; Stotters, P.; Cheminee, J. L.; Richnow, H. H.; Stetter, K. O. (1990). "Hyperthermophilic archaebacteria within the crater and open-sea plume of erupting Macdonald Seamount". Nature 345 (6271): 179–182. Bibcode:1990Natur.345..179H. doi:10.1038/345179a0.
- Walter, M., DesMarais, D. (1993). "Preservation of Biological Information in Thermal Spring Deposits: Developing a Strategy for the Search for Fossil Life on Mars". Icarus 101 (1): 129–143. Bibcode:1993Icar..101..129W. doi:10.1006/icar.1993.1011. PMID 11536937.
- Allen, C., Oehler, D. (2008). "A Case for Ancient Springs in Arabia Terra, Mars". Astrobiology 8 (6): 1093–1112. Bibcode:2008AsBio...8.1093A. doi:10.1089/ast.2008.0239. PMID 19093802.
- "Evidence of Ancient Hot Springs on Mars Detailed in Astrobiology Journal | SpaceRef – Your Space Reference". SpaceRef. February 11, 2009. Retrieved December 19, 2010.
- Wallace, D., Sagan, C. (1979). "Evaporation of Ice in Planetary Atmospheres: Ice-Covered Rivers on Mars". Icarus 39 (3): 385–400. Bibcode:1979Icar...39..385W. doi:10.1016/0019-1035(79)90148-9.
- Duxbury, N. S.; Zotikov, I. A.; Nealson, K. H.; Romanovsky, V. E.; Carsey, F. D. (2001). "A numerical model for an alternative origin of Lake Vostok and its exobiological implications for Mars". Journal of Geophysical Research 106: 1453. Bibcode:2001JGR...106.1453D. doi:10.1029/2000JE001254. Retrieved April 8, 2009.
- Segura, T. et al. 2001. Effects of Large Impacts on Mars: Implication for River Formation. American Astronomical society, DPS meeting
- Segura, T.; Toon, OB; Colaprete, A; Zahnle, K (2002). "Environmental Effects of Large Impacts on Mars". Science 298 (5600): 1977–1980. Bibcode:2002Sci...298.1977S. doi:10.1126/science.1073586. PMID 12471254.
- Baker, V., Milton, D. (1974). "Erosion by Catastrophic Floods on Mars and Earth". Icarus 23: 27–41. Bibcode:1974Icar...23...27B. doi:10.1016/0019-1035(74)90101-8.
- Christensen, P. (2005). "The Many Faces of Mars". Scientific American 293 (1): 32–39. doi:10.1038/scientificamerican0705-32. PMID 16008291.
- Source: Ames Research Center Posted Saturday, June 6, 2009 (June 6, 2009). "NASA Scientists Find Evidence for Liquid Water on a Frozen Early Mars | SpaceRef – Your Space Reference". SpaceRef. Retrieved December 19, 2010.
- Kreslavsky, M.; Head, James W.; Marchant, David R. (2006). "Periods of Active Permafrost Layer Formation During the Geological History of Mars: Implication for Circum-Polar and Mid-Latitude surface Processes" (PDF). Planetary and space Science Special Issue on Polar Processes 56 (2): 266–288. Bibcode:2008P&SS...56..289K. doi:10.1016/j.pss.2006.02.010.
- "Dead Spacecraft on Mars Lives on in New Study". SPACE.com. June 10, 2008. Retrieved December 19, 2010.
- Lobitz, B.; Wood, BL; Averner, MM; McKay, CP (2001). "Use of spacecraft data to derive regions on Mars where liquid water would be stable". Proc. Natl. Acad. Sci. 98 (5): 2132–2137. Bibcode:2001PNAS...98.2132L. doi:10.1073/pnas.031581098. PMC 30104. PMID 11226204.
- Haberie, Robert M.; McKay, Christopher P.; Schaeffer, James; Cabrol, Nathalie A.; Grin, Edmon A.; Zent, Aaron P.; Quinn, Richard (2001). "On the possibility of liquid water on present-day Mars". J. Geophysical Research 106: 23317–23326. Bibcode:2001JGR...10623317H. doi:10.1029/2000JE001360.
- Nancy Atkinson (September 4, 2008). "Phoenix Probe Says Both Yes and No to Water on Mars". Universetoday.com. Retrieved December 19, 2010.
- http://www.newscientist.com/article/mg20427373.700 (subscription required)
- Tudor Vieru (2009-12-07). "Greenhouse Effect on Mars May Be Allowing for Life". News.softpedia.com. Retrieved 2011-08-20.
- Possible New Mars Caves Targets in Search for Life
- Michael T. Mellon Subsurface Ice at Mars: A review of ice and water in the equatorial regions University of Colorado 10 May 2011 Planetary Protection Subcommittee Meeting
- Robert Roy Britt Ice Packs and Methane on Mars Suggest Present Life Possiblespace.com 22 February 2005
- Mellon, M. T., B. M. Jakosky, and S. E. Postawko (1997)The persistence of equatorial ground ice on Mars, J. Geophys. Res., 102(E8), 19357–19369, doi:10.1029/97JE01346.
- John D. Arfstrom A Conceptual Model of Equatorial Ice Sheets on Mars. J Comparative Climatology of Terrestrial Planets (2012)
- Surviving the conditions on Mars DLR, 26 April 2012
- Jean-Pierre de Vera Lichens as survivors in space and on Mars Fungal Ecology Volume 5, Issue 4, August 2012, Pages 472–479
- R. de la Torre Noetzel, F.J. Sanchez Inigo, E. Rabbow, G. Horneck, J. P. de Vera, L.G. Sancho Survival of lichens to simulated Mars conditions
- F.J. Sáncheza, , , E. Mateo-Martíb, J. Raggioc, J. Meeßend, J. Martínez-Fríasb, L.Ga. Sanchoc, S. Ottd, R. de la Torrea The resistance of the lichen Circinaria gyrosa (nom. provis.) towards simulated Mars conditions—a model test for the survival capacity of an eukaryotic extremophile Planetary and Space Science Volume 72, Issue 1, November 2012, Pages 102–110
- ed, Hugh H. Kieffer ... (1994). Mars ([2. Aufl.]. ed.). Tucson [u.a.]: Univ. of Arizona Press. ISBN 0-8165-1257-4.
- Jakosky, Bruce M. (1999). "Water, Climate, and Life". Science 283 (5402): 648–649. doi:10.1126/science.283.5402.648. PMID 9988657.
- "Mars Global Surveyor MOC2-862 Release". Msss.com. Retrieved 2012-01-16.
- "Ancient ocean may have covered third of Mars". Sciencedaily.com. 2010-06-13. Retrieved 2012-01-16.
- Carr, M.H. (1979). "Formation of Martian flood features by relaease of water from confined aquifers". J. Geophys. Res. 84: 2995–3007. Bibcode:1979JGR....84.2995C. doi:10.1029/JB084iB06p02995.
- Craddock, R.A. and Howard, A.D. (2002). The case for rainfall on a warm, wet early Mars. J. Geophys. Res., 107(E11), doi:10.1029/2001JE001505.
- "Flashback: Water on Mars Announced 10 Years Ago". Space.com. Retrieved 2012-01-16.
- Malin, Michael C.; Edgett, Kenneth S. (2000). "Evidence for Recent Groundwater Seepage and Surface Runoff on Mars". Science 288 (5475): 2330–2335. Bibcode:2000Sci...288.2330M. doi:10.1126/science.288.5475.2330. PMID 10875910.
- Heldmann et al., Jennifer L. (2005-05-07). "Formation of Martian gullies by the action of liquid water flowing under current Martian environmental conditions" (– Scholar search). Journal of Geophysical Research 110 (E5): Eo5004. Bibcode:2005JGRE..11005004H. doi:10.1029/2004JE002261. Archived from the original on December 1, 2007. Retrieved 2007-08-12 'conditions such as now occur on Mars, outside of the temperature-pressure stability regime of liquid water' … 'Liquid water is typically stable at the lowest elevations and at low latitudes on the planet because the atmospheric pressure is greater than the vapor pressure of water and surface temperatures in equatorial regions can reach 273 K for parts of the day [Haberle et al., 2001]'
- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (June 3, 2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters 33 (11): L11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946. Retrieved 2007-08-12 'Martian high-latitude zones are covered with a smooth, layered ice-rich mantle'
- Jpl.Nasa.Gov (2006-12-06). "JPL news release 2006-145". Jpl.nasa.gov. Retrieved 2012-01-16.
- Malin, Michael C.; Kenneth S. Edgett, Liliya V. Posiolova, Shawn M. McColley, Eldar Z. Noe Dobrea (8 December 2006). "Present-Day Impact Cratering Rate and Contemporary Gully Activity on Mars". Science 314 (5805): 1573–1577. Bibcode:2006Sci...314.1573M. doi:10.1126/science.1135156. PMID 17158321. Retrieved 2009-09-03.
- Kolb, Kelly Jean; Pelletier, Jon D.; McEwen, Alfred S. (2010). "Modeling the formation of bright slope deposits associated with gullies in Hale Crater, Mars: Implications for recent liquid water". Icarus 205 (1): 113–137. Bibcode:2010Icar..205..113K. doi:10.1016/j.icarus.2009.09.009.
- Benison, KC; Laclair, DA (2003). "Modern and ancient extremely acid saline deposits: terrestrial analogs for martian environments?". Astrobiology 3 (3): 609–618. Bibcode:2003AsBio...3..609B. doi:10.1089/153110703322610690. PMID 14678669.
- Benison, K; Bowen, B (2006). "Acid saline lake systems give clues about past environments and the search for life on Mars". Icarus 183 (1): 225–229. Bibcode:2006Icar..183..225B. doi:10.1016/j.icarus.2006.02.018.
- Johnson, John (2008-08-01). "There's water on Mars, NASA confirms". Los Angeles Times. Retrieved 2008-08-01.
- Source: Ames Research Center Posted Saturday, June 6, 2009 (2009-06-06). "NASA Scientists Find Evidence for Liquid Water on a Frozen Early Mars | SpaceRef - Your Space Reference". SpaceRef. Retrieved 2012-01-16.
- Fairén, Alberto G.; Davila, Alfonso F.; Gago-Duport, Luis; Amils, Ricardo; McKay, Christopher P. (May 2009). "Stability against freezing of aqueous solutions on early Mars". Nature 459 (7245): 401–4. Bibcode:2009Natur.459..401F. doi:10.1038/nature07978. PMID 19458717.
- Kreslavsky, M; Head, J; Marchant, D (2008). "Periods of Active Permafrost Layer Formation During the Geological History of Mars: Implication for Circum-Polar and Mid-Latitude surface Processes" (PDF). Planetary and Space Science 56 (2): 289–302. Bibcode:2008P&SS...56..289K. doi:10.1016/j.pss.2006.02.010.
- Lobitz, B.; Wood, BL; Averner, MM; McKay, CP (February 2001). "Use of spacecraft data to derive regions on Mars where liquid water would be stable". Proc. Natl. Acad. Sci. U.S.A. 98 (5): 2132–7. Bibcode:2001PNAS...98.2132L. doi:10.1073/pnas.031581098. PMC 30104. PMID 11226204.
- Haberle, Robert M.; McKay, Christopher P.; Schaeffer, James; Cabrol, Nathalie A.; Grin, Edmon A.; Zent, Aaron P.; Quinn, Richard (2001). "On the possibility of liquid water on present-day Mars". Journal of Geophysical Research 106 (E10): 23317–23326. Bibcode:2001JGR...10623317H. doi:10.1029/2000JE001360.
- Nancy Atkinson (2008-09-04). "Phoenix Probe Says Both Yes and No to Water on Mars". Universetoday.com. Retrieved 2012-01-16.
- Shiga, David (7 December 2009). "Watery niche may foster life on Mars". New Scientist (2737).
- Mars-May-Be-Allowing-for-Life-129065.shtml[dead link]
- Kostama, V.-P.; Kreslavsky, M. A.; Head, J. W. (June 3, 2006). "Recent high-latitude icy mantle in the northern plains of Mars: Characteristics and ages of emplacement". Geophysical Research Letters 33 (11): L11201. Bibcode:2006GeoRL..3311201K. doi:10.1029/2006GL025946. Retrieved 2008-08-01
- Fishbaugh, KE; Byrne, Shane; Herkenhoff, Kenneth E.; Kirk, Randolph L.; Fortezzo, Corey; Russell, Patrick S.; McEwen, Alfred (2010). "Evaluating the meaning of "layer" in the martian north polar layered depsoits and the impact on the climate connection" (PDF). Icarus 205 (1): 269–282. Bibcode:2010Icar..205..269F. doi:10.1016/j.icarus.2009.04.011.
- "PSRD: Ancient Floodwaters and Seas on Mars". Psrd.hawaii.edu. 2003-07-16. Retrieved 2012-01-16.
- Carr, Michael H. (2003). "Oceans on Mars: An assessment of the observational evidence and possible fate" (PDF). Journal of Geophysical Research 108 (E5): 5042. Bibcode:2003JGRE..108.5042C. doi:10.1029/2002JE001963.
- Zuber, Maria T. (2007). "Mars at the tipping point". Nature 447 (7146): 785–786. Bibcode:2007Natur.447..785Z. doi:10.1038/447785a. PMID 17568733.
- ISBN 0-8165-1257-4
- "ESA - Mars Express - Breathtaking views of Deuteronilus Mensae on Mars". Esa.int. 2005-03-14. Retrieved 2012-01-16.
- Ohanlon, Larry (4 March 2010). "Mars' Ice Age Revealed in Map". Discovery News.
- Hauber, E. et al. (2005). "Discovery of a flank caldera and very young glacial activity at Hecates Tholus, Mars". Nature 434 (7031): 356–61. Bibcode:2005Natur.434..356H. doi:10.1038/nature03423. PMID 15772654.
- Shean, D. et al. (2005). "Origin and evolution of a cold-based mountain glacier on Mars: The Pavonis Mons fan-shaped deposit". Journal of Geophysical Research 110 (E5): E05001. Bibcode:2005JGRE..11005001S. doi:10.1029/2004JE002360.
- Basilevsky, A. et al. (2006). "Geological recent tectonic, volcanic and fluvial activity on the eastern flank of the Olympus Mons volcano, Mars". Geophysical Research Letters 33. L13201. Bibcode:2006GeoRL..3313201B. doi:10.1029/2006GL026396.
- "Fretted Terrain Valley Traverse". Hirise.lpl.arizona.edu. Retrieved 2012-01-16.
- "Jumbled Flow Patterns". Hirise.lpl.arizona.edu. Retrieved 2012-01-16.
- Berman, D. et al. (2005). "The role of arcuate ridges and gullies in the degradation of craters in the Newton Basin region of Mars". Icarus 178 (2): 465–86. Bibcode:2005Icar..178..465B. doi:10.1016/j.icarus.2005.05.011.
- Milliken, R. et al. (2003). "Viscous flow features on the surface of Mars: Observations from high-resolution Mars Orbiter Camera (MOC) images". Journal of Geophysical Research 108. E6, 5057.
- Arfstrom, J.; W. Hartmann (2005). "Martian flow features, moraine-like ridges, and gullies: Terrestrial analogs and interrelationships". Icarus 174 (2): 321–35. Bibcode:2005Icar..174..321A. doi:10.1016/j.icarus.2004.05.026.
- Baker, V (2003). "Icy martian mysteries". Nature 426 (6968): 779–80. doi:10.1038/426779a. PMID 14685217.
- Head, J. et al. (2003). "Recent ice ages on Mars". Nature 426 (6968): 797–802. doi:10.1038/nature02114. PMID 14685228.
- Mustard, J. et al. (2001). "Evidence for recent climate change on Mars from the identification of youthful near-surface ground ice". Nature 412 (6845): 411–4. doi:10.1038/35086515. PMID 11473309.
- Kreslavsky, M.; J. Head (2002). "Mars: Nature and evolution of young latitude-dependent water-ice-rich mantle" (PDF). Geophysical Research Letters 29 (15). Bibcode:2002GeoRL..29o..14K. doi:10.1029/2002GL015392.
- "HiRISE | Dissected Mantled Terrain (PSP_002917_2175)". Hirise.lpl.arizona.edu. Retrieved 2012-01-16.
- Forget, F. et al. (2006). "Formation of Glaciers on Mars by Atmospheric Precipitation at High Obliquity". Science 311 (5759): 368–71. Bibcode:2006Sci...311..368F. doi:10.1126/science.1120335. PMID 16424337.
- Dickson, James L.; Head, James W.; Marchant, David R. (2008). "Late Amazonian glaciation at the dichotomy boundary on Mars: Evidence for glacial thickness maxima and multiple glacial phases". Geology 36 (5): 411–4. doi:10.1130/G24382A.1.
- "Origin and evolution of a cold-based tropical mountain glacier on Mars: The Pavonis Mons fan-shaped deposit". Mars.asu.edu. Retrieved 2013-02-10.
|Wikimedia Commons has media related to: Water on Mars|
- NASA - Curiosity Rover Finds Evidence For An Ancient Streambed - September, 2012
- Images - Signs Of Water On Mars (HiRISE)
- Video (02:01) - Liquid Flowing Water Discovered on Mars - August, 2011
- Video (04:32) - Evidence: Water "Vigorously" Flowed On Mars - September, 2012 | http://en.wikipedia.org/wiki/Water_on_Mars | 13 |
66 | In this experiment we will study motion in two dimensions. An object that moves in both the X and Y directions has two-dimensional motion. The apparatus we will use to study this motion is called a ballistic pendulum. We will first determine the velocity at which the ball is fired from the launching mechanism, and then, with this knowledge and some calculations, determine how far the ball will travel when it is fired at an angle other than the horizontal.
In introductory physics courses, a projectile is an object which is given some initial velocity, v0, and thereafter, subjected only to gravity. This definition of a projectile assumes that no force due to air resistance is acting on the projectile. This assumption is approximately valid if the velocity of the projectile is relatively small (less than 10 meters/sec) and the cross-sectional area of the object is small, which will be the case in this experiment. Since gravity is the only force assumed to act on the object after it is given its initial velocity, the object will be in free-fall in the vertical direction, and will move with constant velocity in the horizontal direction.
Range of a Projectile Fired Horizontally.
Consider an object projected horizontally with a velocity, v0X, from some initial height, H, above the floor, as sketched below. The object will travel a horizontal distance, R, during the time it falls a vertical distance, H. Since the velocity in the horizontal direction is constant,
R = v0X t
where t is the time that the object is in flight (which is also the time it takes the object to fall a distance H).
In free fall, the vertical distance moved during a time interval, t, is given by the equation,
y - y0 = v0Y t - (1/2)g t^2
where y0 is the initial position of the object, g is the acceleration due to gravity (about 9.8 m/sec^2), and v0Y is the initial velocity of the object in the vertical (y) direction. In this equation, "up" is taken as the positive direction and "down" as the negative direction. For the case of an object propelled horizontally, v0Y is zero (no component of initial velocity up or down). If the object is initially propelled from a height H above the floor (y0 = H), then at a later time it hits the floor, and y = 0.
Thus, from the free-fall equation above with y0 = H and y = 0, we have 0 = H - (1/2)g t^2,
and the time of flight is t = sqrt(2H / g).
The initial velocity of the projectile can then be calculated from the range equation, v0X = R / t.
Projectile Fired at Angle θ above the Horizontal:
Consider a projectile projected with an initial velocity, v0, at angle θ above the horizontal at height h1 above the floor, as sketched below.
In this case, the initial velocity has a horizontal component v0X = v0 cos θ and a vertical component v0Y = v0 sin θ. If an object in free fall is given an initial velocity upward, then after time t1 its vertical velocity will be given by
vY = v0Y - g t1
At the maximum height in its path, the vertical component of the velocity of a projectile is zero. Then, from the equation above,
v0Y = g t1
t1 = v0Y / g
At this instant of time, the height of the projectile, h2, above the original height h1 can be calculated from the equation
h2 = v0Y t1 - (1/2)g t1^2
During the time t1 the object has moved a horizontal distance, x1, given by
x1 = v0X t1
After reaching its maximum height, the object moves an additional horizontal distance, x2, before finally striking the floor. The distance x2 can be calculated by the method discussed previously for a projectile given a horizontal velocity at an initial height H = h1 + h2 above the floor. Thus, the time, t2, to move the distance x2 can be calculated from t2 = sqrt(2(h1 + h2) / g), and x2 = v0X t2.
The total range of the projectile is thus R = x1 + x2, as the sketch below illustrates.
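To make the sequence of steps above concrete, here is a minimal Python sketch that follows the same derivation (velocity components, rise time, rise height, fall time, and total range). The launch speed, angle, and height used at the bottom are assumed, illustrative numbers, not values measured in this experiment.

```python
import math

g = 9.8  # acceleration due to gravity, m/s^2

def angled_range(v0, theta_deg, h1):
    """Range of a projectile launched at angle theta above the horizontal
    from a height h1 above the floor, following the steps derived above."""
    theta = math.radians(theta_deg)
    v0x = v0 * math.cos(theta)           # horizontal component of the launch velocity
    v0y = v0 * math.sin(theta)           # vertical component of the launch velocity
    t1 = v0y / g                         # time to reach maximum height
    h2 = v0y * t1 - 0.5 * g * t1**2      # rise above the launch height
    x1 = v0x * t1                        # horizontal distance covered during the rise
    t2 = math.sqrt(2 * (h1 + h2) / g)    # fall time from the maximum height H = h1 + h2
    x2 = v0x * t2                        # horizontal distance covered during the fall
    return x1 + x2                       # total range R = x1 + x2

# Illustrative (assumed) values: 5.0 m/s launch speed, 30 degrees, 1.0 m launch height.
print(angled_range(v0=5.0, theta_deg=30.0, h1=1.0))
```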
1. Place the ball into the projectile launcher and use the plunger to push the ball into the mechanism until the yellow indicator is within one of the range settings. Also be sure that the ball remains seated in the mechanism and does not roll free along the barrel.
2. Fire the ball and take note of where it landed. Use a coin or some small object to mark the location. Create a target using a piece of carbon paper sandwiched between two sheets of computer paper. Center the target where you have marked its initial landing and tape the paper at the corners to the floor to prevent movement. Indicate on the target which edge is nearest to the launcher.
3. Measure and record the distance, d1, in meters from the nearest edge of the paper to the small hash-mark shown on the projectile launcher that is within the depiction of the ball launching position. Measure in meters the height, H, from the floor to the bottom of the depiction of the ball launching position. (Remember the bottom of the ball hits the ground first).
4. To avoid having to chase the ball down, place a wooden backdrop at the far edge of the target, angled so that it ricochets the ball away from the target and back towards you.
5. Reload the launcher to the same position and launch the ball. The carbon paper will make an imprint on the computer paper where the ball strikes. There is no need to make any measurements at this time. Repeat this until you have struck the target 5 times. Remove the paper from the floor and take it to your table. Measure the distance from the edge of the paper to each of the imprints made on the paper. Note: some imprints may be very close together; use your best judgment to locate the center of each imprint. Also, if all 5 imprints cannot be identified it is not necessary to repeat the process; just use the number of shots that can be identified. Record each value in the data table and find the average.
6. Find the range, R, by adding the average value from the data table to d1. Determine the time of flight of the ball using t = sqrt(2H / g). Now determine the velocity of the ball using v = R / t. Record your value.
Ave = d2.
d1 = 1.7m d2 = 1.8924m
4) The range the ball traveled horizontally is R = d1 + d2, and the height the ball fell vertically is
H = 0.87m R = 1.8924m
5) Use the equation H = (1/2)g t^2, where g = 9.8 m/s^2, to find the total time the ball was in the air. Then use t to find the velocity from the equation R = v * t.
t = 0.421368 s v = 4.49109 m/s
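As a quick check of the calculation just described, the following minimal Python sketch recomputes the time of flight and the launch speed from the recorded values of H and R above (taking R = 1.8924 m as stated).

```python
import math

g = 9.8        # acceleration due to gravity, m/s^2
H = 0.87       # drop height in metres, recorded above
R = 1.8924     # horizontal range in metres, recorded above

t = math.sqrt(2 * H / g)   # time of flight from H = (1/2) g t^2
v = R / t                  # launch speed from R = v * t

print(t, v)   # about 0.4214 s and 4.491 m/s, matching the values recorded above
```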
Finding a Range for an angle other than zero degrees.
1. Ask your instructor for an angle θ. Remove the projectile launcher from the ballistic pendulum. Rotate the ballistic pendulum 180°. Reconnect the projectile launcher near the top of the ballistic pendulum; the front will be secured using the single hole. The rear of the projectile launcher will be secured using the curved slot. If connected properly, you should now be able to set the angle you were given by loosening the projectile launcher and lowering the rear of it until the string with the plumb bob indicates the desired angle. Using the previously determined velocity, find the x and y components vx and vy.
θ = 35 degrees
vx = v cos θ = 3.67889 m/s vy = v sin θ = 2.57598 m/s
2. Measure in meters the height, H, from the floor to the bottom of the depiction of the ball launching position. (Remember the bottom of the ball hits the ground first).
H = 0.87 m
3. To determine the time of flight for the projectile use the equation
y = H + vy t - 4.9t^2
0 = c + b(t) + a(t)^2
If we set the point of impact (the floor) as zero, then y = 0 in the equation above.
The quadratic formula can then be used to determine t.
a = -4.9 b = 2.57598 c = 0.87 t = 0.15887 s
4. Then determine the distance the projectile will travel horizontally during this time; this gives the range.
R = vx * t = 0.584465 m
5. Measure this distance from the projectile launcher out to the floor. Place the scoring target down at the measured range position. Load the ball and, when you are ready, fire the ball at the target. | http://library.thinkquest.org/06aug/02205/experiment.htm | 13
57 | A particle accelerator is a device that uses electromagnetic fields to propel charged particles to high speeds and to contain them in well-defined beams. An ordinary CRT television set is a simple form of accelerator. There are two basic types: electrostatic and oscillating field accelerators.
Particle accelerators are used as a research tool in particle physics by accelerating elementary particles to very high kinetic energy and letting them impact other particles. Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. These may become apparent only at high energies and for tiny periods of time, and therefore may be hard or impossible to study in other ways.
In this context particle accelerators are often called colliders, and can be categorized into different families based upon their differing designs. In the 20th century, colliders were commonly referred to as atom smashers. Despite the fact that modern colliders actually propel subatomic particles—atoms themselves now being relatively simple to disassemble without an accelerator—the term persists in popular usage when referring to particle accelerators in general.
Rolf Widerøe, Gustav Ising, Leó Szilárd, Donald Kerst and Ernest Lawrence are considered as pioneers of modern particle accelerators, conceiving and building the first operational linear particle accelerator, the betatron, and the cyclotron.
Beams of high-energy particles are useful for both fundamental and applied research in the sciences, and also in many technical and industrial fields unrelated to fundamental research. It has been estimated that there are approximately 26,000 accelerators worldwide. Of these, only about 1% are research machines with energies above 1 GeV of the sort which are the main focus of this article. Of the rest, about 44% are for radiotherapy, 41% for ion implantation, 9% for industrial processing and research, and 4% for biomedical and other low-energy research.
For the most basic inquiries into the dynamics and structure of matter, space, and time, physicists seek the simplest kinds of interactions at the highest possible energies. These typically entail particle energies of many GeV, and the interactions of the simplest kinds of particles: leptons (e.g. electrons and positrons) and quarks for the matter, or photons and gluons for the field quanta. Since isolated quarks are experimentally unavailable due to color confinement, the simplest available experiments involve the interactions of, first, leptons with each other, and second, of leptons with nucleons, which are composed of quarks and gluons. To study the collisions of quarks with each other, scientists resort to collisions of nucleons, which at high energy may be usefully considered as essentially 2-body interactions of the quarks and gluons of which they are composed. Thus elementary particle physicists tend to use machines creating beams of electrons, positrons, protons, and anti-protons, interacting with each other or with the simplest nuclei (e.g., hydrogen or deuterium) at the highest possible energies, generally hundreds of GeV or more. Nuclear physicists and cosmologists may use beams of bare atomic nuclei, stripped of electrons, to investigate the structure, interactions, and properties of the nuclei themselves, and of condensed matter at extremely high temperatures and densities, such as might have occurred in the first moments of the Big Bang. These investigations often involve collisions of heavy nuclei – of atoms like iron or gold – at energies of several GeV per nucleon. At lower energies, beams of accelerated nuclei are also used in medicine, as for the treatment of cancer.
Besides being of fundamental interest, high energy electrons may be coaxed into emitting extremely bright and coherent beams of high energy photons – ultraviolet and X ray – via synchrotron radiation, which photons have numerous uses in the study of atomic structure, chemistry, condensed matter physics, biology, and technology. Examples include the ESRF in Europe, which has recently been used to extract detailed 3-dimensional images of insects trapped in amber. Thus there is a great demand for electron accelerators of moderate (GeV) energy and high intensity.
Everyday examples of particle accelerators are cathode ray tubes found in television sets and X-ray generators. These low-energy accelerators use a single pair of electrodes with a DC voltage of a few thousand volts between them. In an X-ray generator, the target itself is one of the electrodes. A low-energy particle accelerator called an ion implanter is used in the manufacture of integrated circuits.
DC accelerator types capable of accelerating particles to speeds sufficient to cause nuclear reactions are Cockcroft-Walton generators or voltage multipliers, which convert AC to high voltage DC, or Van de Graaff generators that use static electricity carried by belts.
The largest and most powerful particle accelerators, such as the RHIC, the Large Hadron Collider (LHC) at CERN (which came on-line in mid-November 2009) and the Tevatron, are used for experimental particle physics.
Particle accelerators can also produce proton beams, which can produce proton-rich medical or research isotopes as opposed to the neutron-rich ones made in fission reactors; however, recent work has shown how to make 99Mo, usually made in reactors, by accelerating isotopes of hydrogen, although this method still requires a reactor to produce tritium. An example of this type of machine is LANSCE at Los Alamos.
Electrostatic particle accelerators
Historically, the first accelerators used simple technology of a single static high voltage to accelerate charged particles. While this method is still extremely popular today, with the electrostatic accelerators greatly out-numbering any other type, they are more suited to lower energy studies owing to the practical voltage limit of about 30 MV (when the accelerator is placed in a gas with high dielectric strength, such as sulfur hexafluoride, allowing the high voltage). The same high voltage can be used twice in a tandem accelerator if the charge of the particles can be reversed while they are inside the terminal; this is possible with the acceleration of atomic nuclei by first adding an extra electron or forming an anionic (negatively charged) chemical compound, and then putting the beam through a thin foil to strip off electrons inside the high voltage conducting terminal, making a beam of positive charge.
Although electrostatic accelerators accelerate particles along a straight line, the term linear accelerator is more often associated with accelerators that use oscillating rather than static electric fields. Thus, many accelerators arranged in a straight line are not termed “linear accelerators” but rather “electrostatic accelerators” to differentiate the two cases.
Oscillating field particle accelerators
Due to the high voltage ceiling imposed by electrical discharge, in order to accelerate particles to higher energies, techniques involving more than one lower, but oscillating, high voltage sources are used. The electrodes can either be arranged to accelerate particles in a line or circle, depending on whether the particles are subject to a magnetic field while they are accelerated, causing their trajectories to arc.
Linear particle accelerators
In a linear accelerator (linac), particles are accelerated in a straight line with a target of interest at one end. They are often used to provide an initial low-energy kick to particles before they are injected into circular accelerators. The longest linac in the world is the Stanford Linear Accelerator, SLAC, which is 3 km (1.9 mi) long. SLAC is an electron-positron collider.
Linear high-energy accelerators use a linear array of plates (or drift tubes) to which an alternating high-energy field is applied. As the particles approach a plate they are accelerated towards it by an opposite polarity charge applied to the plate. As they pass through a hole in the plate, the polarity is switched so that the plate now repels them and they are now accelerated by it towards the next plate. Normally a stream of “bunches” of particles are accelerated, so a carefully controlled AC voltage is applied to each plate to continuously repeat this process for each bunch.
As the particles approach the speed of light the switching rate of the electric fields becomes so high that they operate at microwave frequencies, and so RF cavity resonators are used in higher energy machines instead of simple plates.
Linear accelerators are also widely used in medicine, for radiotherapy and radiosurgery. Medical grade LINACs accelerate electrons using a klystron and a complex bending magnet arrangement which produces a beam of 6-30 million electron-volt (MeV) energy. The electrons can be used directly or they can be collided with a target to produce a beam of X-rays. The reliability, flexibility and accuracy of the radiation beam produced has largely supplanted the older use of Cobalt-60 therapy as a treatment tool.
- John Cockcroft worked on linear accelerators
- Robert J. Van de Graaff at Princeton University initially used Tesla coils and then in 1929 migrated to Van de Graaff generators.
Circular or cyclic accelerators
In the circular accelerator, particles move in a circle until they reach sufficient energy. The particle track is typically bent into a circle using electromagnets. The advantage of circular accelerators over linear accelerators (linacs) is that the ring topology allows continuous acceleration, as the particle can transit indefinitely. Another advantage is that a circular accelerator is smaller than a linear accelerator of comparable power (i.e. a linac would have to be extremely long to have the equivalent power of a circular accelerator).
Depending on the energy and the particle being accelerated, circular accelerators suffer a disadvantage in that the particles emit synchrotron radiation. When any charged particle is accelerated, it emits electromagnetic radiation and secondary emissions. As a particle traveling in a circle is always accelerating towards the center of the circle, it continuously radiates towards the tangent of the circle. This radiation is called synchrotron light and depends highly on the mass of the accelerating particle. For this reason, many high energy electron accelerators are linacs. Certain accelerators (synchrotrons) are however built specially for producing synchrotron light (X-rays).
Since the special theory of relativity requires that matter always travels slower than the speed of light in a vacuum, in high-energy accelerators, as the energy increases the particle speed approaches the speed of light as a limit, but never attains it. Therefore particle physicists do not generally think in terms of speed, but rather in terms of a particle’s energy or momentum, usually measured in electron volts (eV). An important principle for circular accelerators, and particle beams in general, is that the curvature of the particle trajectory is proportional to the particle charge and to the magnetic field, but inversely proportional to the (typically relativistic) momentum.
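As a rough numerical illustration of that proportionality, the sketch below estimates the bending radius r = p/(qB) for a highly relativistic proton. The 7 TeV beam energy is the LHC design figure mentioned later in this article; the 8.3 T dipole field is an assumed, typical value for a large superconducting machine rather than a number taken from this text.

```python
# Bending radius r = p / (q B) for a highly relativistic charged particle.
e = 1.602e-19        # elementary charge, C
c = 2.998e8          # speed of light, m/s

E_beam_eV = 7e12     # beam energy per proton, eV (7 TeV, as for the LHC design energy)
B = 8.3              # assumed dipole field, tesla (illustrative value)

p = E_beam_eV * e / c    # momentum in kg*m/s (E is far above the rest energy, so p ~ E/c)
r = p / (e * B)          # bending radius in metres
print(r)                 # on the order of a few kilometres
```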
The earliest circular accelerators were cyclotrons, invented in 1929 by Ernest O. Lawrence at the University of California, Berkeley. Cyclotrons have a single pair of hollow ‘D’-shaped plates to accelerate the particles and a single large dipole magnet to bend their path into a circular orbit. It is a characteristic property of charged particles in a uniform and constant magnetic field B that they orbit with a constant period, at a frequency called the cyclotron frequency, so long as their speed is small compared to the speed of light c. This means that the accelerating D’s of a cyclotron can be driven at a constant frequency by a radio frequency (RF) accelerating power source, as the beam spirals outwards continuously. The particles are injected in the centre of the magnet and are extracted at the outer edge at their maximum energy.
Cyclotrons reach an energy limit because of relativistic effects whereby the particles effectively become more massive, so that their cyclotron frequency drops out of synch with the accelerating RF. Therefore simple cyclotrons can accelerate protons only to an energy of around 15 million electron volts (15 MeV, corresponding to a speed of roughly 10% of c), because the protons get out of phase with the driving electric field. If accelerated further, the beam would continue to spiral outward to a larger radius, but the particles would no longer gain enough speed to complete the larger circle in step with the accelerating RF. To accommodate relativistic effects the magnetic field needs to be increased at higher radii, as is done in isochronous cyclotrons. An example of an isochronous cyclotron is the PSI Ring cyclotron, which provides protons at an energy of 590 MeV, corresponding to roughly 80% of the speed of light. The advantage of such a cyclotron is the maximum achievable extracted proton current, which is currently 2.2 mA. The energy and current correspond to 1.3 MW of beam power, the highest of any accelerator currently existing.
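The cyclotron frequency referred to above is f = qB/(2πm); a minimal sketch, assuming an illustrative 1.5 T field, shows how the relativistic mass increase lowers a proton's orbital frequency as its kinetic energy grows, which is why a fixed-frequency drive eventually falls out of step.

```python
import math

q = 1.602e-19      # proton charge, C
m0 = 1.673e-27     # proton rest mass, kg
c = 2.998e8        # speed of light, m/s
B = 1.5            # assumed magnetic field, tesla (illustrative value)

def orbital_frequency(kinetic_energy_MeV):
    """Orbital frequency of a proton, f = qB / (2*pi*gamma*m0), including the
    relativistic factor gamma = 1 + E_k / (m0 c^2)."""
    E_k = kinetic_energy_MeV * 1e6 * q          # kinetic energy in joules
    gamma = 1.0 + E_k / (m0 * c**2)
    return q * B / (2 * math.pi * gamma * m0)

for T in (1, 15, 100, 590):                      # kinetic energies in MeV
    print(T, orbital_frequency(T))               # the frequency drops as the energy rises
```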
Synchrocyclotrons and isochronous cyclotrons
A classic cyclotron can be modified to increase its energy limit. The historically first approach was the synchrocyclotron, which accelerates the particles in bunches. It uses a constant magnetic field B, but reduces the accelerating field’s frequency so as to keep the particles in step as they spiral outward, matching their mass-dependent cyclotron resonance frequency. This approach suffers from low average beam intensity due to the bunching, and again from the need for a huge magnet of large radius and constant field over the larger orbit demanded by high energy.
The second approach to the problem of accelerating relativistic particles is the isochronous cyclotron. In such a structure, the accelerating field’s frequency (and the cyclotron resonance frequency) is kept constant for all energies by shaping the magnet poles so to increase magnetic field with radius. Thus, all particles get accelerated in isochronous time intervals. Higher energy particles travel a shorter distance in each orbit than they would in a classical cyclotron, thus remaining in phase with the accelerating field. The advantage of the isochronous cyclotron is that it can deliver continuous beams of higher average intensity, which is useful for some applications. The main disadvantages are the size and cost of the large magnet needed, and the difficulty in achieving the high magnetic field values required at the outer edge of the structure.
Synchrocyclotrons have not been built since the isochronous cyclotron was developed.
FFAG accelerators, in which a very strong radial field gradient, combined with strong focusing, allows the beam to be confined to a narrow ring, are an extension of the isochronous cyclotron idea that has lately been under development. They use RF accelerating sections between the magnets, and so are isochronous for relativistic particles like electrons (which achieve essentially the speed of light at only a few MeV), but only over a limited energy range for protons and heavier particles at sub-relativistic energies. Like the isochronous cyclotrons they achieve continuous beam operation, but without the need for a huge dipole bending magnet covering the entire radius of the orbits.
Another type of circular accelerator, invented in 1940 for accelerating electrons, is the Betatron, a concept which originates ultimately from Norwegian-German scientist Rolf Widerøe. These machines, like synchrotrons, use a donut-shaped ring magnet (see below) with a cyclically increasing B field, but accelerate the particles by induction from the increasing magnetic field, as if they were the secondary winding in a transformer, due to the changing magnetic flux through the orbit.
Achieving constant orbital radius while supplying the proper accelerating electric field requires that the magnetic flux linking the orbit be somewhat independent of the magnetic field on the orbit, bending the particles into a constant radius curve. These machines have in practice been limited by the large radiative losses suffered by the electrons moving at nearly the speed of light in a relatively small radius orbit.
To reach still higher energies, with relativistic mass approaching or exceeding the rest mass of the particles (for protons, billions of electron volts or GeV), it is necessary to use a synchrotron. This is an accelerator in which the particles are accelerated in a ring of constant radius. An immediate advantage over cyclotrons is that the magnetic field need only be present over the actual region of the particle orbits, which is very much narrower than the diameter of the ring. (The largest cyclotron built in the US had a 184-inch-diameter (4.7 m) magnet pole, whereas the diameter of the LEP and LHC is nearly 10 km. The aperture of the two beams of the LHC is of the order of a millimeter.)
However, since the particle momentum increases during acceleration, it is necessary to turn up the magnetic field B in proportion to maintain constant curvature of the orbit. In consequence synchrotrons cannot accelerate particles continuously, as cyclotrons can, but must operate cyclically, supplying particles in bunches, which are delivered to a target or an external beam in beam “spills” typically every few seconds.
Since high energy synchrotrons do most of their work on particles that are already traveling at nearly the speed of light c, the time to complete one orbit of the ring is nearly constant, as is the frequency of the RF cavity resonators used to drive the acceleration.
Note also a further point about modern synchrotrons: because the beam aperture is small and the magnetic field does not cover the entire area of the particle orbit as it does for a cyclotron, several necessary functions can be separated. Instead of one huge magnet, one has a line of hundreds of bending magnets, enclosing (or enclosed by) vacuum connecting pipes. The design of synchrotrons was revolutionized in the early 1950s with the discovery of the strong focusing concept. The focusing of the beam is handled independently by specialized quadrupole magnets, while the acceleration itself is accomplished in separate RF sections, rather similar to short linear accelerators. Also, there is no necessity that cyclic machines be circular, but rather the beam pipe may have straight sections between magnets where beams may collide, be cooled, etc. This has developed into an entire separate subject, called “beam physics” or “beam optics”.
More complex modern synchrotrons such as the Tevatron, LEP, and LHC may deliver the particle bunches into storage rings of magnets with constant B, where they can continue to orbit for long periods for experimentation or further acceleration. The highest-energy machines such as the Tevatron and LHC are actually accelerator complexes, with a cascade of specialized elements in series, including linear accelerators for initial beam creation, one or more low energy synchrotrons to reach intermediate energy, storage rings where beams can be accumulated or “cooled” (reducing the magnet aperture required and permitting tighter focusing; see beam cooling), and a last large ring for final acceleration and experimentation.
Circular electron accelerators fell somewhat out of favor for particle physics around the time that SLAC was constructed, because their synchrotron losses were considered economically prohibitive and because their beam intensity was lower than for the unpulsed linear machines. The Cornell Electron Synchrotron, built at low cost in the late 1960s, was the first in a series of high-energy circular electron accelerators built for fundamental particle physics, culminating in the LEP at CERN.
A large number of electron synchrotrons have been built in the past two decades, specialized to be synchrotron light sources, of ultraviolet light and X rays; see below.
For some applications, it is useful to store beams of high energy particles for some time (with modern high vacuum technology, up to many hours) without further acceleration. This is especially true for colliding beam accelerators, in which two beams moving in opposite directions are made to collide with each other, with a large gain in effective collision energy. Because relatively few collisions occur at each pass through the intersection point of the two beams, it is customary to first accelerate the beams to the desired energy, and then store them in storage rings, which are essentially synchrotron rings of magnets, with no significant RF power for acceleration.
Synchrotron radiation sources
Some circular accelerators have been built to deliberately generate radiation (called synchrotron light) as X-rays also called synchrotron radiation, for example the Diamond Light Source which has been built at the Rutherford Appleton Laboratory in England or the Advanced Photon Source at Argonne National Laboratory in Illinois, USA. High-energy X-rays are useful for X-ray spectroscopy of proteins or X-ray absorption fine structure (XAFS) for example.
Synchrotron radiation is more powerfully emitted by lighter particles, so these accelerators are invariably electron accelerators. Synchrotron radiation allows for better imaging as researched and developed at SLAC’s SPEAR.
Lawrence’s first cyclotron was a mere 4 inches (100 mm) in diameter. Later he built a machine with a 60-inch-diameter pole face, and planned one with a 184-inch diameter, which was, however, taken over for World War II-related work connected with uranium isotope separation; after the war it continued in service for research and medicine over many years.
The first large proton synchrotron was the Cosmotron at Brookhaven National Laboratory, which accelerated protons to about 3 GeV. The Bevatron at Berkeley, completed in 1954, was specifically designed to accelerate protons to sufficient energy to create antiprotons, and verify the particle-antiparticle symmetry of nature, then only strongly suspected. The Alternating Gradient Synchrotron (AGS) at Brookhaven was the first large synchrotron with alternating gradient, “strong focusing” magnets, which greatly reduced the required aperture of the beam, and correspondingly the size and cost of the bending magnets. The Proton Synchrotron, built at CERN, was the first major European particle accelerator and generally similar to the AGS.
The Stanford Linear Accelerator, SLAC, became operational in 1966, accelerating electrons to 30 GeV in a 3 km long waveguide, buried in a tunnel and powered by hundreds of large klystrons. It is still the largest linear accelerator in existence, and has been upgraded with the addition of storage rings and an electron-positron collider facility. It is also an X-ray and UV synchrotron photon source.
The Fermilab Tevatron has a ring with a beam path of 4 miles (6.4 km). It has received several upgrades, and has functioned as a proton-antiproton collider until it was shut down due to budget cuts on September 30, 2011. The largest circular accelerator ever built was the LEP synchrotron at CERN with a circumference 26.6 kilometers, which was an electron/positron collider. It achieved an energy of 209 GeV before it was dismantled in 2000 so that the underground tunnel could be used for the Large Hadron Collider (LHC). The LHC is a proton collider, and currently the world’s largest and highest-energy accelerator, expected to achieve 7 TeV energy per beam, and currently operating at half that.
The aborted Superconducting Super Collider (SSC) in Texas would have had a circumference of 87 km. Construction was started in 1991, but abandoned in 1993. Very large circular accelerators are invariably built in underground tunnels a few metres wide to minimize the disruption and cost of building such a structure on the surface, and to provide shielding against intense secondary radiations that occur, which are extremely penetrating at high energies.
Current accelerators such as the Spallation Neutron Source, incorporate superconducting cryomodules. The Relativistic Heavy Ion Collider, and Large Hadron Collider also make use of superconducting magnets and RF cavity resonators to accelerate particles.
Targets and detectors
The output of a particle accelerator can generally be directed towards multiple lines of experiments, one at a given time, by means of a deviating electromagnet. This makes it possible to operate multiple experiments without needing to move things around or shutting down the entire accelerator beam. Except for synchrotron radiation sources, the purpose of an accelerator is to generate high-energy particles for interaction with matter.
This is usually a fixed target, such as the phosphor coating on the back of the screen in the case of a television tube; a piece of uranium in an accelerator designed as a neutron source; or a tungsten target for an X-ray generator. In a linac, the target is simply fitted to the end of the accelerator. The particle track in a cyclotron is a spiral outwards from the centre of the circular machine, so the accelerated particles emerge from a fixed point as for a linear accelerator.
For synchrotrons, the situation is more complex. Particles are accelerated to the desired energy. Then, a fast acting dipole magnet is used to switch the particles out of the circular synchrotron tube and towards the target.
A variation commonly used for particle physics research is a collider, also called a storage ring collider. Two circular synchrotrons are built in close proximity – usually on top of each other and using the same magnets (which are then of more complicated design to accommodate both beam tubes). Bunches of particles travel in opposite directions around the two accelerators and collide at intersections between them. This can increase the energy enormously; whereas in a fixed-target experiment the energy available to produce new particles is proportional to the square root of the beam energy, in a collider the available energy is linear.
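A short sketch of that scaling for proton–proton collisions: the centre-of-mass energy available to create new particles is sqrt(s) = sqrt(2 E m c^2 + 2 (m c^2)^2) for a fixed target but simply 2E for two equal colliding beams, so the fixed-target figure grows only with the square root of the beam energy E. The beam energies below are illustrative values.

```python
import math

m_p = 0.938  # proton rest energy, GeV

def cm_energy_fixed_target(E_beam_GeV):
    """sqrt(s) for a beam proton striking a stationary proton."""
    return math.sqrt(2 * E_beam_GeV * m_p + 2 * m_p**2)

def cm_energy_collider(E_beam_GeV):
    """sqrt(s) for two equal-energy beams colliding head on."""
    return 2 * E_beam_GeV

for E in (10, 100, 1000, 7000):   # beam energies in GeV (illustrative)
    print(E, cm_energy_fixed_target(E), cm_energy_collider(E))
```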
At present the highest energy accelerators are all circular colliders, but it is likely that limits have been reached in respect of compensating for synchrotron radiation losses for electron accelerators, and the next generation will probably be linear accelerators 10 times the current length. An example of such a next generation electron accelerator is the 40 km long International Linear Collider, due to be constructed between 2015-2020.
As of 2005, it is believed that plasma wakefield acceleration in the form of electron-beam ‘afterburners’ and standalone laser pulsers will provide dramatic increases in efficiency within two to three decades. In plasma wakefield accelerators, the beam cavity is filled with a plasma (rather than vacuum). A short pulse of electrons or laser light either constitutes or immediately trails the particles that are being accelerated. The pulse disrupts the plasma, causing the charged particles in the plasma to integrate into and move toward the rear of the bunch of particles that are being accelerated. This process transfers energy to the particle bunch, accelerating it further, and continues as long as the pulse is coherent.
Energy gradients as steep as 200 GeV/m have been achieved over millimeter-scale distances using laser pulsers and gradients approaching 1 GeV/m are being produced on the multi-centimeter-scale with electron-beam systems, in contrast to a limit of about 0.1 GeV/m for radio-frequency acceleration alone. Existing electron accelerators such as SLAC could use electron-beam afterburners to greatly increase the energy of their particle beams, at the cost of beam intensity. Electron systems in general can provide tightly collimated, reliable beams; laser systems may offer more power and compactness. Thus, plasma wakefield accelerators could be used — if technical issues can be resolved — to both increase the maximum energy of the largest accelerators and to bring high energies into university laboratories and medical centres.
Black hole production and public safety concerns
In the future, the possibility of black hole production at the highest energy accelerators may arise if certain predictions of superstring theory are accurate. This and other exotic possibilities have led to public safety concerns that have been widely reported in connection with the LHC, which began operation in 2008. The various possible dangerous scenarios have been assessed as presenting “no conceivable danger” in the latest risk assessment produced by the LHC Safety Assessment Group. If they are produced, it is theoretically predicted that such small black holes should evaporate extremely quickly via Bekenstein-Hawking radiation, but which is as yet experimentally unconfirmed. If colliders can produce black holes, cosmic rays (and particularly ultra-high-energy cosmic rays, UHECRs) must have been producing them for eons, but they have yet to harm us. It has been argued that to conserve energy and momentum, any black holes created in a collision between an UHECR and local matter would necessarily be produced moving at relativistic speed with respect to the Earth, and should escape into space, as their accretion and growth rate should be very slow, while black holes produced in colliders (with components of equal mass) would have some chance of having a velocity less than Earth escape velocity, 11.2 km per sec, and would be liable to capture and subsequent growth. Yet even on such scenarios the collisions of UHECRs with white dwarfs and neutron stars would lead to their rapid destruction, but these bodies are observed to be common astronomical objects. Thus if stable micro black holes should be produced, they must grow far too slowly to cause any noticeable macroscopic effects within the natural lifetime of the solar system. | http://rajawaseem6.wordpress.com/2012/02/06/ | 13 |
104 | In geometry, a solid angle (symbol: Ω) is the two-dimensional angle in three-dimensional space that an object subtends at a point. It is a measure of how large the object appears to an observer looking from that point. In the International System of Units (SI), solid angle is measured in a dimensionless unit called the steradian (symbol: sr).
A small object nearby may subtend the same solid angle as a larger object farther away. For example, although the Moon is much smaller than the Sun, it is also much closer to Earth. Therefore, as viewed from any point on Earth, both objects have approximately the same solid angle as well as apparent size. This is most easily observed during a solar eclipse.
Definition and properties
An object's solid angle is equal to the area of the segment of a unit sphere, centered at the angle's vertex, that the object covers. A solid angle equals the area of a segment of unit sphere in the same way a planar angle equals the length of an arc of a unit circle.
The solid angle of a sphere measured from a point in its interior is 4π sr, and the solid angle subtended at the center of a cube by one of its faces is one-sixth of that, or 2π/3 sr. Solid angles can also be measured in square degrees (1 sr = (180/π)² square degrees) or in fractions of the sphere (i.e., fractional area), 1 sr = 1/(4π) fractional area.
In spherical coordinates, there is a simple formula for the differential solid angle, dΩ = sin θ dθ dφ.
The solid angle for an arbitrary oriented surface S subtended at a point P is equal to the solid angle of the projection of the surface S to the unit sphere with center P, which can be calculated as the surface integral:
Ω = ∫∫_S (r̂ · n̂) / r² dΣ
where r is the vector position of an infinitesimal area of surface dΣ with respect to point P, r is its length, r̂ = r/r, and n̂ represents the unit vector normal to dΣ. Even if the projection on the unit sphere to the surface S is not isomorphic, the multiple folds are correctly considered according to the surface orientation described by the sign of the scalar product r̂ · n̂.
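As a rough numerical illustration of this surface integral (not part of the original text), the following sketch estimates the solid angle of a flat disk viewed from a point on its axis by Monte Carlo sampling. The disk radius a, distance d, and sample count are arbitrary choices, and the exact on-axis result 2π(1 − d/√(d² + a²)) is used only as a check.

import numpy as np

a, d = 1.0, 2.0                          # disk radius and axial distance (assumed values)
n_hat = np.array([0.0, 0.0, 1.0])        # unit normal of the disk

rng = np.random.default_rng(0)
N = 200_000
# Sample points uniformly over the disk, which lies in the plane z = d.
rad = a * np.sqrt(rng.random(N))
ang = 2 * np.pi * rng.random(N)
pts = np.column_stack([rad * np.cos(ang), rad * np.sin(ang), np.full(N, d)])

r = np.linalg.norm(pts, axis=1)
cos_term = pts @ n_hat / r               # r_hat . n_hat for each sample point
omega = np.mean(cos_term / r**2) * np.pi * a**2   # integral ~ mean integrand * disk area
print(omega, 2 * np.pi * (1 - d / np.hypot(a, d)))  # the two values agree closely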
- Defining luminous intensity and luminance, and the corresponding radiometric quantities radiant intensity and radiance.
- Calculating spherical excess E of a spherical triangle
- The calculation of potentials by using the boundary element method (BEM)
- Evaluating the size of ligands in metal complexes, see ligand cone angle.
- Calculating the electric field and magnetic field strength around charge distributions.
- Deriving Gauss's Law.
- Calculating emissive power and irradiation in heat transfer.
- Calculating cross sections in Rutherford scattering.
- Calculating cross sections in Raman scattering.
- The solid angle of the acceptance cone of the optical fiber
Solid angles for common objects
Cone, spherical cap, hemisphere
The solid angle of a cone with apex half-angle θ is the area of a spherical cap on a unit sphere, Ω = 2π(1 − cos θ). For small θ in radians such that sin(θ) ≈ θ, this reduces to πθ², the area of a circle.
Over 2200 years ago Archimedes proved, without the use of calculus, that the surface area of a spherical cap is always equal to the area of a circle whose radius equals the distance from the rim of the spherical cap to the point where the cap's axis of symmetry intersects the cap. In the diagram opposite this radius is given as 2r sin(θ/2).
Hence for a unit sphere the solid angle of the spherical cap is given as Ω = 4π sin²(θ/2) = 2π(1 − cos θ).
When θ = π/2, the spherical cap becomes a hemisphere having a solid angle 2π.
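A minimal numerical check of the cap formula and of its small-angle limit; the 1-degree test angle below is an arbitrary choice.

from math import cos, pi, radians

def cap_solid_angle(theta):
    """Solid angle of a cone / spherical cap of half-angle theta (radians)."""
    return 2 * pi * (1 - cos(theta))

print(cap_solid_angle(pi / 2))                 # hemisphere: 2*pi ~ 6.283
theta = radians(1.0)                           # a small 1-degree half-angle
print(cap_solid_angle(theta), pi * theta**2)   # matches the small-angle value pi*theta^2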
The solid angle of the complement of the cone (picture a melon with the cone cut out) is clearly Ω = 4π − 2π(1 − cos θ) = 2π(1 + cos θ).
A terran astronomical observer positioned at latitude θ can see this much of the celestial sphere as the earth rotates, that is, a proportion (1 + cos θ)/2 of the full sphere.
At the equator you see all of the celestial sphere, at either pole only one half.
A segment of a cone cut by a plane at angle from the cone's axis can be calculated by the formula:
Let OABC be the vertices of a tetrahedron with an origin at O subtended by the triangular face ABC, where a, b and c are the vector positions of the vertices A, B and C. Define the vertex angle θ_a to be the angle BOC and define θ_b and θ_c correspondingly. Let φ_ab be the dihedral angle between the planes that contain the tetrahedral faces OAC and OBC and define φ_ac and φ_bc correspondingly. The solid angle at O subtended by the triangular surface ABC is given by Ω = (φ_ab + φ_bc + φ_ac) − π.
This follows from the theory of spherical excess and it leads to the fact that there is an analogous theorem to the theorem that "The sum of internal angles of a planar triangle is equal to π", for the sum of the four internal solid angles of a tetrahedron as follows:
∑ Ω_i = 2 ∑ φ_i − 4π
where φ_i ranges over all six of the dihedral angles between any two planes that contain the tetrahedral faces OAB, OAC, OBC and ABC.
An efficient algorithm for calculating the solid angle at O subtended by the triangular surface ABC, where a, b and c are the vector positions of the vertices A, B and C, has been given by Oosterom and Strackee:
tan(Ω/2) = [a b c] / (abc + (a · b)c + (a · c)b + (b · c)a)
where
- a is the vector representation of point A, while a = |a| is the magnitude of that vector (the origin-point distance);
- [a b c] = a · (b × c) denotes the scalar triple product, and a · b denotes the scalar product.
When implementing the above equation care must be taken with the atan function to avoid negative or incorrect solid angles. One source of potential errors is that the determinant can be negative if a, b, c have the wrong winding. Computing abs(det) is a sufficient solution since no other portion of the equation depends on the winding. The other pitfall arises when the determinant is positive but the divisor is negative. In this case atan returns a negative value that must be biased by π.
import numpy as np

def tri_projection(a, b, c):
    """Solid angle at the origin subtended by the plane triangle with vertices a, b, c (3-vectors)."""
    a, b, c = (np.asarray(v, dtype=float) for v in (a, b, c))
    # abs() makes the result independent of the vertex winding, as noted above.
    determ = abs(np.linalg.det(np.array([a, b, c])))
    al, bl, cl = np.linalg.norm(a), np.linalg.norm(b), np.linalg.norm(c)
    div = al * bl * cl + np.dot(a, b) * cl + np.dot(a, c) * bl + np.dot(b, c) * al
    # arctan2 copes with a negative divisor, so no extra bias by pi is needed here.
    return 2.0 * np.arctan2(determ, div)
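A quick sanity check of the function above (not from the original article): from the centre of a cube, each face subtends one-sixth of the full sphere, i.e. 2π/3 sr, so splitting one face of a unit cube centred at the origin into two triangles should reproduce that value.

p1, p2, p3, p4 = (-0.5, -0.5, 0.5), (0.5, -0.5, 0.5), (0.5, 0.5, 0.5), (-0.5, 0.5, 0.5)
omega = tri_projection(p1, p2, p3) + tri_projection(p1, p3, p4)
print(omega, 2 * np.pi / 3)   # both ~ 2.0944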
Another useful formula for calculating the solid angle of the tetrahedron at the origin O that is purely a function of the vertex angles is given by L'Huilier's theorem as
tan(Ω/4) = √[ tan(θ_s/2) tan((θ_s − θ_a)/2) tan((θ_s − θ_b)/2) tan((θ_s − θ_c)/2) ],  where θ_s = (θ_a + θ_b + θ_c)/2.
If both the side lengths (α and β) of the base of the pyramid and the distance (d) from the center of the base rectangle to the apex of the pyramid (the center of the sphere) are known, then the above equation can be manipulated to give
Ω = 4 arctan( αβ / (2d √(4d² + α² + β²)) ).
The solid angle of a right n-gonal pyramid, where the pyramid base is a regular n-sided polygon of circumradius (r), with a pyramid height (h) is
The solid angle of an arbitrary pyramid defined by the sequence of unit vectors representing edges can be efficiently computed by:
The solid angle of a latitude-longitude rectangle on a globe is (sin φ_N − sin φ_S)(θ_E − θ_W) sr, where φ_N and φ_S are north and south lines of latitude (measured from the equator in radians with angle increasing northward), and θ_E and θ_W are east and west lines of longitude (where the angle in radians increases eastward). Mathematically, this represents an arc of angle φ_N − φ_S swept around a sphere by θ_E − θ_W radians. When longitude spans 2π radians and latitude spans π radians, the solid angle is that of a sphere.
A latitude-longitude rectangle should not be confused with the solid angle of a rectangular pyramid. All four sides of a rectangular pyramid intersect the sphere's surface in great circle arcs. With a latitude-longitude rectangle, only lines of longitude are great circle arcs; lines of latitude are not.
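A small sketch of the latitude-longitude formula in code; the helper name and the sample coordinates are made up for illustration.

from math import sin, radians, pi

def latlon_rect_solid_angle(lat_s, lat_n, lon_w, lon_e):
    """Solid angle (sr) of a latitude-longitude rectangle; all angles in degrees."""
    return (sin(radians(lat_n)) - sin(radians(lat_s))) * radians(lon_e - lon_w)

print(latlon_rect_solid_angle(-90, 90, 0, 360), 4 * pi)   # whole sphere: both 4*pi
print(latlon_rect_solid_angle(40, 41, -75, -74))          # a 1 x 1 degree cell near 40 N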
Sun and Moon
The Sun and the Moon each appear from Earth with an angular diameter of about half a degree, giving a solid angle of approximately 0.196 square degrees, or about 6×10−5 steradians. In terms of the total celestial sphere, the Sun and the Moon each subtend a fractional area of approximately 0.00047%.
Solid angles in arbitrary dimensions
The solid angle subtended by the full surface of the unit n-sphere (in the geometer's sense) can be defined in any number of dimensions d. One often needs this solid angle factor in calculations with spherical symmetry. It is given by the formula
Ω_d = 2π^(d/2) / Γ(d/2)
where Γ is the Gamma function. When d is an integer, the Gamma function can be computed explicitly, so the factor has a simple closed form for each d.
This gives the expected results of 2π rad for the 2D circumference and 4π sr for the 3D sphere. It also yields the slightly less obvious value 2 for the 1D case, in which the origin-centered unit "sphere" is the two-point set {−1, 1}, which indeed has a measure of 2.
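A minimal sketch of this general formula, checking the cases mentioned above (the function name is my own):

from math import pi, gamma

def full_solid_angle(d):
    """Solid angle subtended by the full unit (d-1)-sphere in d dimensions."""
    return 2 * pi ** (d / 2) / gamma(d / 2)

for d in (1, 2, 3, 4):
    print(d, full_solid_angle(d))   # 2, 2*pi, 4*pi, 2*pi**2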
- Mazonka, Oleg (2012). "Solid Angle of Conical Surfaces, Polyhedral Cones, and Intersecting Spherical Caps". Cornell University Library Archive.
- Van Oosterom, A; Strackee, J (1983). "The Solid Angle of a Plane Triangle". IEEE Trans. Biom. Eng. BME-30 (2): 125–126. doi:10.1109/TBME.1983.325207.
- "Area of a Latitude-Longitude Rectangle". The Math Forum @ Drexel. 2003.
- Arthur P. Norton, A Star Atlas, Gall and Inglis, Edinburgh, 1969.
- F. M. Jackson, Polytopes in Euclidean n-Space. Inst. Math. Appl. Bull. (UK) 29, 172-174, Nov./Dec. 1993.
- M. G. Kendall, A Course in the Geometry of N Dimensions, No. 8 of Griffin's Statistical Monographs & Courses, ed. M. G. Kendall, Charles Griffin & Co. Ltd, London, 1961
- Weisstein, Eric W., "Spherical Excess", MathWorld.
- Weisstein, Eric W., "Solid Angle", MathWorld.
MyPhysicsLab – Simple Pendulum
This simulation shows a simple pendulum operating under gravity. For small oscillations the pendulum is linear, but it is non-linear for larger oscillations.
You can change parameters in the simulation such as mass, gravity, and friction (damping). You can drag the pendulum with your mouse to change the starting position. If you don't see the simulation try instructions for enabling Java. Scroll down to see the math!
Try using the graph and changing parameters like mass, length, gravity to answer these questions about the pendulum simulation:
- What is the relationship between angular acceleration and angle?
- How do mass, length, or gravity affect the relationship between angular acceleration and angle?
- For small oscillations, how do length or gravity affect the period or frequency of the oscillation?
Note: Leave damping and drive frequency set to zero here (they complicate things). You'll find the answers below. (Hint: Try starting the pendulum from an almost vertically "up" position.)
Physics - Rotational Method
The pendulum is modeled as a point mass at the end of a massless rod. We define the following variables:
- θ = angle of pendulum (0=vertical)
- R = length of rod
- T = tension in rod
- m = mass of pendulum
- g = gravitational constant
We will derive the equation of motion for the pendulum using the rotational analog of Newton's second law for motion about a fixed axis, which is τ = I α
- τ = net torque
- I = rotational inertia
- α = θ''= angular acceleration
The rotational inertia about the pivot is I = m R². Torque can be calculated as the vector cross product of the position vector and the force. The magnitude of the torque due to gravity works out to be τ = −R m g sin θ. So we have
−R m g sin θ = m R² α
which simplifies to
θ'' = − g⁄R sin θ     (1)
This is the equation of motion for the pendulum.
Physics - Direct Method
Most students are less familiar with rotational inertia and torque than with the simple mass and acceleration found in Newton's second law, F = m a. To show that there is nothing new in the rotational version of Newton's second law, we derive the equation of motion here without the rotational dynamics. As you will see, this method involves more algebra.
We'll need the standard unit vectors, i, j. We use bold and overline to indicate a vector.
- i = unit vector in horizontal direction
- j = unit vector in vertical direction
The kinematics of the pendulum are then as follows
position = R sin θ i − R cos θ j
velocity = R θ' cos θ i + R θ' sin θ j
acceleration = R(θ'' cos θ i − θ'² sin θ i + θ'' sin θ j + θ'² cos θ j)
The position is derived by a fairly simple application of trigonometry. The velocity and acceleration are then the first and second derivatives of the position.
Next we draw the free body diagram for the pendulum. The forces on the pendulum are the tension in the rod T and gravity. So we can write the net force as:
F = T cos θ j − T sin θ i − m g j
Using Newton's law F = m a and the pendulum acceleration we found earlier, we have
T cos θ j − T sin θ i − m g j = m R(θ'' cos θ i − θ'² sin θ i + θ'' sin θ j + θ'² cos θ j)
We can write the vector components of the above equation as separate equations. This gives us two simultaneous equations: the first for the i component and the second for the j component:
−T sin θ = m R(θ'' cos θ − θ'² sin θ)
T cos θ − m g = m R(θ'' sin θ + θ'² cos θ)
Next we do some algebraic manipulations to eliminate the unknown T. Multiply the first equation by cos θ and the second by sin θ:
−T sin θ cos θ = m R(θ'' cos²θ − θ'² sin θ cos θ)
T cos θ sin θ − m g sin θ = m R(θ'' sin²θ + θ'² sin θ cos θ)
Use the first equation to substitute for T cos θ sin θ in the second equation and do a little more algebra to get:
−θ'' cos²θ + θ'² sin θ cos θ = θ'' sin²θ + θ'² sin θ cos θ + g⁄R sin θ
With the trig identity cos²θ + sin²θ = 1 this simplifies to equation (1)
θ'' = − g⁄R sin θ
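For readers who want to check this elimination symbolically, here is a small sketch using the sympy library (the variable names are my own); it solves the two component equations for the tension T and for θ'' and recovers equation (1).

import sympy as sp

m, g, R, T = sp.symbols('m g R T', positive=True)
th, thd, thdd = sp.symbols('theta theta_dot theta_ddot')

# i and j components of F = m a from the free body diagram above:
eq_i = sp.Eq(-T * sp.sin(th), m * R * (thdd * sp.cos(th) - thd**2 * sp.sin(th)))
eq_j = sp.Eq(T * sp.cos(th) - m * g, m * R * (thdd * sp.sin(th) + thd**2 * sp.cos(th)))

# Solve the two linear equations for the tension T and the angular acceleration.
sol = sp.solve([eq_i, eq_j], [T, thdd], dict=True)[0]
print(sp.simplify(sol[thdd]))   # -> -g*sin(theta)/R, i.e. equation (1)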
Physics - Energy Method
There is yet a third way to derive the equations of motion for the pendulum. This is to use the "indirect" energy based method associated with the terms "Lagrangian", "Euler-Lagrange equations", "Hamiltonian", and others. While this method isn't shown here, you can see an example of it for the Pendulum+Cart simulation.
To solve the equations of motion numerically, so that we can drive the simulation, we use the Runge-Kutta method for solving sets of ordinary differential equations. First we define a variable for the angular velocity ω = θ'. Then we can write the second order equation (1) as two first order equations.
θ' = ω
ω' = − g⁄R sin θ
This is the form needed for using the Runge-Kutta method.
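A minimal sketch of such a Runge-Kutta (RK4) integration in Python; the gravitational constant, rod length, time step, and initial angle below are illustrative values, not ones taken from the simulation.

import numpy as np

def pendulum_rhs(state, g=9.81, R=1.0):
    """Right-hand side of theta' = omega, omega' = -(g/R) sin(theta)."""
    theta, omega = state
    return np.array([omega, -(g / R) * np.sin(theta)])

def rk4_step(state, dt):
    k1 = pendulum_rhs(state)
    k2 = pendulum_rhs(state + 0.5 * dt * k1)
    k3 = pendulum_rhs(state + 0.5 * dt * k2)
    k4 = pendulum_rhs(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([np.radians(30.0), 0.0])   # released from 30 degrees at rest
dt = 0.001
for _ in range(2000):                       # simulate 2 seconds
    state = rk4_step(state, dt)
print(state)                                # [theta, omega] after 2 s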
Question: What is the relationship between angular acceleration and angle?
Answer: It is a sine wave relationship as given by equation (1):
θ'' = − g⁄R sin θ
Question: How do mass, length, or gravity affect the relationship between angular acceleration and angle?
Answer: From equation (1) we see that:
- Mass doesn't affect the motion at all.
- The amplitude of the sine relationship is proportional to gravity.
- The amplitude of the sine relationship is inversely proportional to length of the pendulum.
Question: For small oscillations, how do length or gravity affect the period or frequency of the oscillation?
Answer: For small oscillations we can use the approximation sin θ ≈ θ. Then the equation of motion becomes
θ'' = − g⁄R θ
This is a linear relationship. You can see that the graph of acceleration versus angle is a straight line for small oscillations. This is the same form of equation as for the single spring simulation. The analytic solution is
θ(t) = θ0 cos(√(g⁄R) t)
where θ0 is the initial angle and t is time. The period is the time it takes for θ(t) to repeat, so
T = 2π √(R⁄g)
The frequency of oscillation is the inverse of the period:
f = 1⁄T = (1⁄2π) √(g⁄R)
So we predict that
- increasing length by 4 times doubles the period and halves the frequency;
- increasing gravity by 4 times halves the period and doubles the frequency.
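As a quick numerical check of these last two predictions, the following sketch (with a made-up step size and starting angle) integrates the small-oscillation pendulum and compares the measured period with 2π√(R⁄g) for a length R and for 4R.

import numpy as np

def period(R, g=9.81, theta0=0.05, dt=1e-5):
    """Measure the oscillation period: twice the time for omega to first return to zero."""
    theta, omega, t = theta0, 0.0, 0.0
    while True:
        # Semi-implicit Euler step of theta'' = -(g/R) sin(theta).
        omega_new = omega - (g / R) * np.sin(theta) * dt
        theta += omega_new * dt
        t += dt
        if omega < 0.0 and omega_new >= 0.0:   # omega crosses zero at half a period
            return 2.0 * t
        omega = omega_new

R = 1.0
print(period(R), 2 * np.pi * np.sqrt(R / 9.81))          # both ~ 2.006 s
print(period(4 * R), 2 * np.pi * np.sqrt(4 * R / 9.81))  # period doubles with 4x length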
The Union Army was the land force that fought for the Union during the American Civil War. It was also known as the Federal Army, the U.S. Army, the Northern Army and the National Army. It consisted of the small United States Army (the regular army), augmented by massive numbers of units supplied by the Northern states, composed of volunteers as well as conscripts. The Union Army fought and eventually defeated the smaller Confederate States Army during the war which lasted from 1861 to 1865. About 360,000 died from all causes; some 280,000 were wounded.
When the American Civil War began in April 1861, there were only 16,000 men in the U.S. Army, and of these many Southern officers resigned and joined the Confederate States Army. The U.S. Army consisted of ten regiments of infantry, four of artillery, two of cavalry, two of dragoons, and one of mounted infantry. The regiments were scattered widely. Of the 197 companies in the army, 179 occupied 79 isolated posts in the West, and the remaining 18 manned garrisons east of the Mississippi River, mostly along the Canada–United States border and on the Atlantic coast.
With the secession of the Southern states, and with this drastic shortage of men in the army, President Abraham Lincoln called on the states to raise a force of 75,000 men for three months to put down the "insurrection". Lincoln's call forced the border states to choose sides, and four seceded, making the Confederacy eleven states strong. The war proved to be longer and more extensive than anyone North or South had expected, and on July 22, 1861, Congress authorized a volunteer army of 500,000 men.
The call for volunteers initially was easily met by patriotic Northerners, abolitionists, and even immigrants who enlisted for a steady income and meals. Over 10,000 Germans in New York and Pennsylvania immediately responded to Lincoln's call, and the French were also quick to volunteer. As more men were needed, however, the number of volunteers fell and both money bounties and forced conscription had to be turned to. Nevertheless, between April 1861 and April 1865, at least two and a half million men served in the Union Army, of whom the majority were volunteers.
It is a misconception that the South held an advantage because of the large percentage of professional officers who resigned to join the Confederate States Army. At the start of the war, there were 824 graduates of the U.S. Military Academy on the active list; of these, 296 resigned or were dismissed, and 184 of those became Confederate officers. Of the approximately 900 West Point graduates who were then civilians, 400 returned to the Union Army and 99 to the Confederate. Therefore, the ratio of Union to Confederate professional officers was 642 to 283. (One of the resigning officers was Robert E. Lee, who had initially been offered the assignment as commander of a field army to suppress the rebellion. Lee disapproved of secession, but refused to bear arms against his native state, Virginia, and resigned to accept the position as commander of Virginia forces. He eventually became the commander of the Confederate States Army.) The South did have the advantage of other military colleges, such as The Citadel and Virginia Military Institute, but they produced fewer officers. Only 26 enlisted men and non-commissioned officers are known to have left the regular United States Army to join the Confederate Army, all by desertion.
Major organizations
The Union Army was composed of numerous organizations, which were generally organized geographically.
- Military Division
- A collection of Departments reporting to one commander (e.g., Military Division of the Mississippi, Middle Military Division, Military Division of the James). Military Divisions were similar to the regions described by the more modern term, Theater.
- An organization that covered a defined region, including responsibilities for the Federal installations therein and for the field armies within their borders. Those named for states usually referred to Southern states that had been occupied. It was more common to name departments for rivers (such as Department of the Tennessee, Department of the Cumberland) or regions (Department of the Pacific, Department of New England, Department of the East, Department of the West, Middle Department).
- A subdivision of a Department (e.g., District of Cairo, District of East Tennessee). There were also Subdistricts for smaller regions.
- The fighting force that was usually, but not always, assigned to a District or Department but could operate over wider areas. Some of the most prominent armies were:
- Army of the Cumberland, the army operating primarily in Tennessee, and later Georgia, commanded by William S. Rosecrans and George Henry Thomas.
- Army of Georgia, operated in the March to the Sea and the Carolinas commanded by Henry W. Slocum.
- Army of the Gulf, the army operating in the region bordering the Gulf of Mexico, commanded by Benjamin Butler, Nathaniel P. Banks, and Edward Canby.
- Army of the James, the army operating on the Virginia Peninsula, 1864–65, commanded by Benjamin Butler and Edward Ord.
- Army of the Mississippi, a briefly existing army operating on the Mississippi River, in two incarnations—under John Pope and William S. Rosecrans in 1862; under John A. McClernand in 1863.
- Army of the Ohio, the army operating primarily in Kentucky and later Tennessee and Georgia, commanded by Don Carlos Buell, Ambrose E. Burnside, John G. Foster, and John M. Schofield.
- Army of the Potomac, the principal army in the Eastern Theater, commanded by George B. McClellan, Ambrose E. Burnside, Joseph Hooker, and George G. Meade.
- Army of the Shenandoah, the army operating in the Shenandoah Valley, under David Hunter, Philip Sheridan, and Horatio G. Wright.
- Army of the Tennessee, the most famous army in the Western Theater, operating through Kentucky, Tennessee, Mississippi, Georgia, and the Carolinas; commanded by Ulysses S. Grant, William T. Sherman, James B. McPherson, and Oliver O. Howard.
- Army of Virginia, the army assembled under John Pope for the Northern Virginia Campaign.
Each of these armies was usually commanded by a major general. Typically, the Department or District commander also had field command of the army of the same name, but some conflicts within the ranks occurred when this was not true, particularly when an army crossed a geographic boundary.
The regular army, the permanent United States Army, was intermixed into various units and formations of the Union Army, forming a cadre of experienced and skilled troops. They were regarded by many as elite troops and often held in reserve during battles in case of emergencies. This force was quite small compared to the massive state-raised volunteer forces that comprised the bulk of the Union Army.
Operations in the Civil War were distinctly divided within broad geographic regions known as theaters. For overviews of general army operations and strategies, see articles on the main theaters, including the Western Theater, and Eastern Theater.
Personnel organization
Soldiers were organized by military specialty. The combat arms included infantry, cavalry, artillery, and other such smaller organizations such as the United States Marine Corps, which, at some times, was detached from its navy counterpart for land based operations. The Signal Corps was created and deployed for the first time, through the leadership of Albert J. Myer.
Below major units like armies, soldiers were organized mainly into regiments, the main fighting unit with which a soldier would march and be deployed with, commanded by a colonel, lieutenant colonel, or possibly a major. According to W. J. Hardee's "Rifle and Light Infantry Tactics" (1855), the primary tactics for riflemen and light infantry in use immediately prior and during the Civil War, there would typically be, within each regiment, ten companies, each commanded by a captain, and deployed according to the ranks of captains. Some units only possessed between four and eight companies and were generally known as battalions. Regiments were almost always raised within a single state, and were generally referred by number and state, e.g. 54th Massachusetts, 20th Maine, etc.
Regiments were usually grouped into brigades under the command of a brigadier general. However, brigades were changed easily as the situation demanded; the regiment was the main form of permanent grouping. Brigades were usually formed once regiments reached the battlefield, according to where the regiment might be deployed, and alongside which other regiments.
Several men served as generals-in-chief of the Union Army throughout its existence:
- Winfield Scott: July 5, 1841 – November 1, 1861
- George B. McClellan: November 1, 1861 – March 11, 1862
- Henry W. Halleck: July 23, 1862 – March 9, 1864
- Ulysses S. Grant: March 9, 1864 – March 4, 1869
The gap from March 11 to July 23, 1862, was filled with direct control of the army by President Lincoln and United States Secretary of War Edwin M. Stanton, with the help of an unofficial "War Board" that was established on March 17, 1862. The board consisted of Ethan A. Hitchcock, the chairman, with Department of War bureau chiefs Lorenzo Thomas, Montgomery C. Meigs, Joseph G. Totten, James W. Ripley, and Joseph P. Taylor.
Scott was an elderly veteran of the War of 1812 and the Mexican-American War and could not perform his duties effectively. His successor, Maj. Gen. McClellan, built and trained the massive Union Army of the Potomac, the primary fighting force in the Eastern Theater. Although he was popular among the soldiers, McClellan was relieved from his position as general-in-chief because of his overly cautious strategy and his contentious relationship with his commander in chief, President Lincoln. (He remained commander of the Army of the Potomac through the Peninsula Campaign and the Battle of Antietam.) His replacement, Major General Henry W. Halleck, had a successful record in the Western Theater, but was more of an administrator than a strategic planner and commander.
Ulysses S. Grant was the final commander of the Union Army. He was famous for his victories in the West when he was appointed lieutenant general and general-in-chief of the Union Army in March 1864. Grant supervised the Army of the Potomac (which was formally led by his subordinate, Maj. Gen. George G. Meade) in delivering the final blows to the Confederacy by engaging Confederate forces in many fierce battles in Virginia, the Overland Campaign, conducting a war of attrition that the larger Union Army was able to survive better than its opponent. Grant laid siege to Lee's army at Petersburg, Virginia, and eventually captured Richmond, the capital of the Confederacy. He developed the strategy of coordinated simultaneous thrusts against wide portions of the Confederacy, most importantly the Georgia and Carolinas Campaigns of William Tecumseh Sherman and the Shenandoah Valley campaign of Philip Sheridan. These campaigns were characterized by another strategic notion of Grant's, better known as total war: denying the enemy access to the resources needed to continue the war through widespread destruction of its factories and farms along the paths of the invading Union armies.
Grant had critics who complained about the high numbers of casualties that the Union Army suffered while he was in charge, but Lincoln would not replace Grant, because, in Lincoln's words: "I cannot spare this man. He fights."
Among memorable field leaders of the army were Nathaniel Lyon (first Union general to be killed in battle during the war), William Rosecrans, George Henry Thomas and William Tecumseh Sherman. Others, of lesser competence, included Benjamin F. Butler.
Union victory
The decisive victories by Grant and Sherman resulted in the surrender of the major Confederate armies. The first and most significant was on April 9, 1865, when Robert E. Lee surrendered the Army of Northern Virginia to Grant at Appomattox Court House. Although there were other Confederate armies that surrendered in the following weeks, such as Joseph E. Johnston's in North Carolina, this date was nevertheless symbolic of the end of the bloodiest war in American history, the end of the Confederate States of America, and the beginning of the slow process of Reconstruction.
Of the 2,213,363 men who served in the Union Army during the Civil War, 364,511 died in combat, or from injuries sustained in combat, disease, or other causes, and 281,881 were wounded. More than 1 out of every 4 Union soldiers was killed or wounded during the war; the rate in the Confederate Army was even worse, with 1 in 3 Southern soldiers killed or wounded. The Confederacy nevertheless suffered a considerably lower number of total deaths than the Union, roughly 260,000 to the Union's 360,000. This is by far the highest casualty ratio of any war in which America has been involved. By comparison, 1 out of every 16 American soldiers was killed or wounded in World War II, and 1 out of every 22 during the Vietnam War.
In total, 620,000 soldiers died during the Civil War. There were 34 million Americans at that time, so 2% of the American population died in the war. In today's terms, this would be the equivalent of 5.9 million American men being killed in a war.
Ethnic groups
The common estimate that 25 percent of the Union armed forces were foreign-born is quite accurate. This means that about 1,600,000 soldiers and sailors were born in the United States, including about 200,000 African-Americans. About 200,000 soldiers were born in one of the German states (although this is somewhat speculative, since anyone serving from a German family tended to be identified as German regardless of where they were actually born). About 200,000 soldiers and sailors were born in Ireland. Although some soldiers came from as far away as Malta, Italy, India, and Russia, most of the remaining foreign-born soldiers came from England, Scotland and Canada.
- 1,000,000 (45.4%): Native-born white Americans.
- 216,000 (9.7%): German born (approximate).
- 210,000 (9.5%): African American. Half were freedmen who lived in the North, and half were ex-slaves from the South. They served under white officers in more than 160 "colored" regiments and in Federal regiments organized as the United States Colored Troops (USCT).
- 50,000 (2.3%): Born in England.
- 40,000 (1.8%): French or French Canadian. About half were born in the United States of America, the other half in Quebec.
- 20,000 (0.9%): Scandinavian (Norwegian, Swedish, Finnish, and Danish).
- 5,000: Polish (many of whom served in the Polish Legion of Brig. Gen. Włodzimierz Krzyżanowski).
- Several hundred of other various nationalities.
Many immigrant soldiers formed their own regiments, such as the Irish Brigade (69th New York, 63rd New York, 88th New York, 28th Massachusetts, 116th Pennsylvania); the Swiss Rifles (15th Missouri); the Gardes Lafayette (55th New York); the Garibaldi Guard (39th New York); the Martinez Militia (1st New Mexico); the Polish Legion (58th New York); the German Rangers (52nd New York); the Highlander Regiment (79th New York); and the Scandinavian Regiment (15th Wisconsin). But for the most part, the foreign-born soldiers were scattered as individuals throughout units.
For comparison, the Confederate Army was not very diverse: 91% of Confederate soldiers were native born and only 9% were foreign-born, Irish being the largest group with others including Germans, French, Mexicans (though most of them simply happened to have been born when the Southwest was still part of Mexico), and British. Some Southern propaganda compared foreign-born soldiers in the Union Army to the hated Hessians of the American Revolution. Also, a relatively small number of Native Americans (Cherokee, Chickasaw, Choctaw, and Creek) fought for the Confederacy.
Army administration and issues
Various organizational and administrative issues arose during the war, which had a major effect on subsequent military procedures.
Blacks in the army
The inclusion of blacks as combat soldiers became a major issue. Eventually, it was realized, especially after the valiant effort of the 54th Massachusetts Volunteer Infantry in the Battle of Fort Wagner, that blacks were fully able to serve as competent and reliable soldiers. This was partly due to the efforts of Robert Smalls, who, while still a slave, won fame by defecting from the Confederacy, and bringing a Confederate transport ship which he was piloting. He later met with Edwin Stanton, Secretary of War, to argue for including blacks in combat units. This led to the formation of the first combat unit for black soldiers, the 1st South Carolina Volunteers. Regiments for black soldiers were eventually referred to as United States Colored Troops. The blacks were paid less than white soldiers until late in the war and were, in general, treated harshly. Even after the end of the war, they were not permitted (by Sherman's order) to march in the great victory parade through Washington, DC.
Unit supplies
Battlefield supplies were a major problem. They were greatly improved by new techniques in preserving food and other perishables, and in transport by railroad. General Montgomery C. Meigs was one of the most important Union Army leaders in this field.
Combat medicine
Medical care was, at first, extremely disorganized and substandard. Gradually, medical experts began calling for higher standards, and created an agency known as the United States Sanitary Commission. This created professional standards, and led to some of the first advances in battlefield medicine as a separate specialty. General William Alexander Hammond of the Medical Corps did some major work and provided some important leadership in this area.
Additionally, care of the wounded was greatly improved by medical pioneers such as Clara Barton, who often worked alone to provide supplies and care, and brought a new level of dedication to caring for the wounded.
Military strategy
The Civil War drove many innovations in military strategy. W. J. Hardee published the first revised infantry tactics for use with modern rifles in 1855. However, even these tactics proved ineffective in combat, as they involved massed volley fire, in which entire units (primarily regiments) would fire simultaneously. These tactics had not been tested before in actual combat, and the commanders of these units would post their soldiers at incredibly close range compared to the range of the rifled musket, which led to disastrous mortality rates. In a sense, the weapons had evolved beyond the tactics, which would soon change as the war drew to a close. Railroads provided the first mass movement of troops. The electric telegraph was used by both sides, which enabled political and senior military leaders to pass orders to and receive reports from commanders in the field.
There were many other innovations brought by necessity. Generals were forced to reexamine the offensive-minded tactics developed during the Mexican–American War, where attackers could mass to within 100 yards of the defensive lines, the maximum effective range of smoothbore muskets. Attackers would have to endure one volley of inaccurate smoothbore musket fire before they could close with the defenders. But by the Civil War, the smoothbores had been replaced with rifled muskets, using the quickly loadable minié ball, with accurate ranges up to 900 yards. Defense now dominated the battlefield. Attackers, whether advancing in ordered lines or by rushes, were subjected to three or four aimed volleys before they could get among the defenders. This made offensive tactics that had been successful only 20 years before nearly obsolete.
Desertions and draft riots
Desertion was a major problem for both sides. The daily hardships of war, forced marches, thirst, suffocating heat, disease, delay in pay, solicitude for family, impatience at the monotony and futility of inactive service, panic on the eve of battle, the sense of war weariness, the lack of confidence in commanders, and the discouragement of defeat (especially early on for the Union Army), all tended to lower the morale of the Union Army and to increase desertion.
In 1861 and 1862, the war was going badly for the Union Army and there were, by some counts, 180,000 desertions. In 1863 and 1864, the bitterest two years of the war, the Union Army suffered over 200 desertions every day, for a total of 150,000 desertions during those two years. This puts the total number of desertions from the Union Army during the four years of the war at nearly 350,000. Using these numbers, 15% of Union soldiers deserted during the war. Official numbers put the number of deserters from the Union Army at 200,000 for the entire war, or about 8% of Union Army soldiers. It is estimated that 1 out of 3 deserters returned to their regiments, either voluntarily or after being arrested and being sent back. Many of the desertions were by "professional" bounty men, men who would enlist to collect the often large cash bonuses and then desert at the earliest opportunity to do the same elsewhere. If not caught, it could prove a very lucrative criminal enterprise.
The Irish were also the main participants in the famous "New York Draft Riots" of 1863 (as dramatized in the film Gangs of New York). The Irish had shown the strongest support for Southern aims prior to the start of the war and had long had an enmity with black populations in several Northern cities, dating back to nativist attacks on Irish immigrants in the 1840s, when blacks, who rivaled the Irish at the bottom of the economic ladder, were frequently reported to have encouraged nativist mobs.
With the view that the war was an upper-class abolitionist war, led in large part by former nativists, to free a large black population that might move north and compete for jobs and housing, the poorer classes did not welcome a draft, especially one from which a richer man could buy an exemption. As a result of the Enrollment Act, rioting began in several Northern cities, the most heavily hit being New York City. A mob reported as consisting principally of Irish immigrants rioted in the summer of 1863, with the worst violence occurring in July during the Battle of Gettysburg. The mob set fire to African American churches, an orphanage for "colored children", and the homes of certain prominent Protestant abolitionists. A mob was reportedly repulsed from the offices of the staunchly pro-Union New York Tribune by workers wielding and firing two Gatling guns. The principal victims of the rioting were African Americans and activists in the anti-slavery movement. Not until victory was achieved at Gettysburg could the Union Army be sent in; some units had to open fire to quell the violence and stop the rioters. By the time the rioting was over, perhaps up to 1,000 people had been killed or wounded (estimates varied widely, both at the time and since).
See also
- Grand Army of the Republic
- Military history of African Americans
- Uniform of the Union Army
- United States National Cemeteries
- Hispanics in the American Civil War
- American Civil War
- American Civil War Corps Badges
The following Armies were authorized during the American Civil War:
- Army of the Cumberland
- Army of the Frontier
- Army of Georgia
- Army of the Gulf
- Army of the James
- Army of the Mississippi
- Army of the Ohio
- Army of the Potomac
- Army of the Shenandoah
- Army of the Southwest
- Army of the Tennessee
The following Corps were authorized during the American Civil War:
- I Corps
- II Corps
- III Corps
- IV Corps
- V Corps
- VI Corps
- VII Corps
- VIII Corps
- IX Corps
- X Corps
- XI Corps
- XII Corps
- XIII Corps
- XIV Corps
- XV Corps
- XVI Corps
- XVII Corps
- XVIII Corps
- XIX Corps
- XX Corps
- XXI Corps
- XXII Corps
- XXIII Corps
- XXIV Corps
- XXV Corps
- Cavalry Corps
- See, for example, usage in Grant, Preface p. 3.
- Hattaway & Jones, pp. 9–10.
- Hattaway & Jones, p. 10.
- "Civil War Army Organization and Rank". North Carolina Museum of History. Retrieved February 14, 2012.
- Eicher, pp. 37–38.
- McPherson, pp.36–37.
- Sanitary Commission Report, 1869
- Chippewa County, Wisconsin Past and Present, Volume II. Chicago: S.J. Clarke Publishing Company, 1913. p. 258.
- Joseph T. Glatthaar, Forged in Battle: The Civil War Alliance of Black Soldiers and White Officers (2000)
- Eicher, John H., and David J. Eicher. Civil War High Commands. Stanford, CA: Stanford University Press, 2001. ISBN 0-8047-3641-3.
- Grant, Ulysses S. Personal Memoirs of U.S. Grant. 2 vols. Charles L. Webster & Company, 1885–86. ISBN 0-914427-67-9.
- Glatthaar, Joseph T. Forged in Battle: The Civil War Alliance of Black Soldiers and White Officers. New York: Free Press, 1990. ISBN 978-0-02-911815-3.
- Hattaway, Herman, and Archer Jones. How the North Won: A Military History of the Civil War. Urbana: University of Illinois Press, 1983. ISBN 0-252-00918-5.
- McPherson, James M. What They Fought For, 1861–1865. Baton Rouge: Louisiana State University Press, 1994. ISBN 978-0-8071-1904-4.
Further reading
- Nevins, Allan. The War for the Union. Vol. 1, The Improvised War 1861–1862. New York: Charles Scribner's Sons, 1959. ISBN 0-684-10426-1.
- Nevins, Allan. The War for the Union. Vol. 2, War Becomes Revolution 1862–1863. New York: Charles Scribner's Sons, 1960. ISBN 1-56852-297-5.
- Nevins, Allan. The War for the Union. Vol. 3, The Organized War 1863–1864. New York: Charles Scribner's Sons, 1971. ISBN 0-684-10428-8.
- Nevins, Allan. The War for the Union. Vol. 4, The Organized War to Victory 1864–1865. New York: Charles Scribner's Sons, 1971. ISBN 1-56852-299-1.
- Shannon, Fred A. The Organization and Administration of the Union Army 1861–1865. 2 vols. Gloucester, MA: P. Smith, 1965. OCLC 428886. First published 1928 by A.H. Clark Co.
- Welcher, Frank J. The Union Army, 1861–1865 Organization and Operations. Vol. 1, The Eastern Theater. Bloomington: Indiana University Press, 1989. ISBN 0-253-36453-1.
- Welcher, Frank J. The Union Army, 1861–1865 Organization and Operations. Vol. 2, The Western Theater. Bloomington: Indiana University Press, 1993. ISBN 0-253-36454-X.
- Civil War Home: Ethnic groups in the Union Army
- "The Common Soldier", HistoryNet
- A Manual of Military Surgery, by Samuel D. Gross, MD (1861), the manual used by doctors in the Union Army.
- Union Army Historical Pictures
- U.S. Civil War Era Uniforms and Accoutrements
- Louis N. Rosenthal lithographs, depicting over 50 Union Army camps, are available for research use at the Historical Society of Pennsylvania.
- Official Army register of the Volunteer Force 1861; 1862; 1863; 1864; 1865
- Civil War National Cemeteries
- Christian Commission of Union Dead
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols 1–8
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols 9–12
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols 13–15
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols. 16–17
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vol. 18
- Roll of Honor: names of Soldiers who died in Defense of the Union Vol. 19
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols. 20–21
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols, 22–23
- Roll of Honor: Names of Soldiers who died in Defense of the Union Vols. 24–27
IT IS CONVENTIONAL to let the letter s (for space) symbolize the length of an arc, which is called arc length. We say in geometry that an arc "subtends" an angle θ; literally, "stretches under."
Now the circumference of a circle is an arc length. And the ratio of the circumference to the diameter is the basis of radian measure. That ratio is the definition of π.
Since D = 2r, then π = C/D = C/2r, which implies C/r = 2π.
That ratio of the circumference of a circle C to the radius r -- 2π -- is called the radian measure of 1 revolution, which are four right angles at the center. The circumference subtends those four right angles.
Thus the radian measure is based on ratios -- numbers -- that are actually found in the circle. The radian measure is a real number that indicates the ratio of a curved line to a straight, of an arc to the radius. For, the ratio of s to r does determine a unique central angle θ.
In any circles, the same ratio of arc length to radius determines a unique central angle that the arcs subtend: if s1/r1 = s2/r2, then
θ1 = θ2.
We will prove this theorem below.
Example 1. If the arc is four fifths of the radius, then the ratio s/r = 0.8 is the radian measure of the central angle.
At that central angle, the arc is four fifths of the radius.
Example 2. An angle of .75 radians means that the arc is three fourths of the radius. s = .75r
Example 3. In a circle whose radius is 10 cm, a central angle θ intercepts an arc of 8 cm.
a) What is the radian measure of that angle?
Answer. According to the definition: θ = s/r = 8/10 = 0.8.
b) At that same central angle θ, what is the arc length if the radius is 5 cm?
Answer. For a given central angle, the ratio of arc to radius is the same. 5 is half of 10. Therefore the arc length will be half of 8: 4 cm.
a) At a central angle of 2.35 radians, what ratio has the arc to the radius?
Answer. That number is the ratio. The arc is 2.35 times the radius.
b) In which quadrant of the circle does 2.35 radians fall?
Answer. Since π/2 ≈ 1.57 and π ≈ 3.14, an angle of 2.35 radians is greater than 1.57 but less than 3.14. It falls in the second quadrant.
s = rθ
If the radius is 10 cm, and the central angle is 2.35 radians, then how long is the arc?
Answer. We let the definition of θ, θ = s/r, become a formula for finding s:
s = rθ
Therefore,
s = 10 × 2.35 = 23.5 cm
Because of the simplicity of that formula, radian measure is used exclusively in theoretical mathematics.
Once again: The radian measure is a real number x. And in the unit circle -- r = 1 -- the length of the arc s is that real number.
s = rθ = 1· x = x.
It is here that the term trigonometric "function" has its full meaning. For, corresponding to each real number x -- each radian measure, each arc -- there is a unique value of sin x, of cos x, and so on. The definition of a function is satisfied. (Topic 3 of Precalculus.)
Thus, radian measure can be identified as the length x of an arc of the unit circle. Therefore when we draw the graph of y = sin x (Topic 19), we can imagine the unit circle rolled out in both directions onto the x-axis, thus labeling the x-axis.
Because radian measure can be identified as an arc, the inverse trigonometric functions have their names. "arcsin" is the arc -- the radian measure -- whose sine is a certain number.
In the unit circle, the vertical side AB is sin x. Now consider very small values of x. We can see that when the point A is very close to C -- that is, when the central angle AOC is very, very small -- then the opposite side AB will be virtually indistinguishable from the arc length AC. That is,
sin x ≅ x
An angle of 1 radian
Note that an angle of 1 radian is a central angle whose subtending arc is equal in length to the radius.
That is often cited as the definition of radian measure. Yet it remains to be proved that an arc equal to the radius in one circle, will subtend the same central angle as an arc equal to the radius in another circle. The main theorem cannot be avoided. (Moreover, although we can define an "angle of 1 radian," does such an angle exist? Can we know it? Is it possible to construct it?)
Problem 1. a) At a central angle of π/5, approximately what ratio has the arc to the radius? Take π ≈ 3.
Answer. The arc is approximately three fifths of the radius.
b) If the radius is 15 cm, approximately how long is the arc? Answer. Three fifths of 15 cm: approximately 9 cm.
Problem 2. In a circle whose radius is 4 cm, find the arc length intercepted by each of these angles. Again, take π ≈ 3.
d) 2π. (Here, the arc length is the entire circumference!)
Problem 3. In which quadrant of the circle does each angle, measured in radians, fall?
a) θ = 2. Answer. Since π/2 ≈ 1.57 and π ≈ 3.14, 2 is greater than 1.57 but less than 3.14. (See the figure above.) Therefore, θ = 2 falls in the second quadrant.
b) θ = 5. Answer. Since 3π/2 ≈ 4.71 and 2π ≈ 6.28, 5 is greater than 4.71 but less than 6.28. (See the figure above.) Therefore, θ = 5 falls in the fourth quadrant.
c) θ = 14. Answer. 14 is greater than 2 full revolutions (6.28 + 6.28 = 12.56) but less than 2¼ revolutions (12.56 + 1.57 = 14.13). (See the figure above.) Therefore, θ = 14 falls in the first quadrant.
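A short Python sketch of the same quadrant reasoning (the function name is my own):

from math import pi

def quadrant(theta):
    """Quadrant (1-4) in which an angle theta, in radians, falls."""
    reduced = theta % (2 * pi)             # strip whole revolutions
    return int(reduced // (pi / 2)) + 1

for theta in (2, 5, 14):
    print(theta, quadrant(theta))          # 2 -> 2nd, 5 -> 4th, 14 -> 1st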
Proof of the theorem
In any circles, the same ratio of arc length to radius determines a unique central angle that the arcs subtend: if s1/r1 = s2/r2, then
θ1 = θ2.
For, the hypothesis s1/r1 = s2/r2 implies, on dividing each side by 2π,
s1/(2πr1) = s2/(2πr2).   (1)
But 2πr is the circumference of each circle. And each circumference is an "arc" that subtends four right angles at the center.
Arcs, moreover, have the same ratio to one another as the central angles they subtend. (Theorem 16.) Therefore,
s1/(2πr1) = θ1/(four right angles) and s2/(2πr2) = θ2/(four right angles).
Hence, according to line (1), θ1/(four right angles) = θ2/(four right angles), and therefore
θ1 = θ2.
Therefore, the same ratio of arc length to radius determines a unique central angle that the arcs subtend. Which is what we wanted to prove.
Precession is a change in the orientation of the rotational axis of a rotating body. It can be defined as a change in direction of the rotation axis in which the second Euler angle (nutation) is constant. In physics, there are two types of precession: torque-free and torque-induced.
In astronomy, "precession" refers to any of several slow changes in an astronomical body's rotational or orbital parameters, and especially to the Earth's precession of the equinoxes. See Precession (astronomy).
Torque-free precession occurs when the axis of rotation differs slightly from an axis about which the object can rotate stably: a maximum or minimum principal axis. Poinsot's construction is an elegant geometrical method for visualizing the torque-free motion of a rotating rigid body. For example, when a plate is thrown, the plate may have some rotation around an axis that is not its axis of symmetry. This occurs because the angular momentum (L) is constant in absence of torques. Therefore, it will have to be constant in the external reference frame, but the moment of inertia tensor (I) is non-constant in this frame because of the lack of symmetry. Therefore, the spin angular velocity vector (ω) about the spin axis will have to evolve in time so that the matrix product L = Iω remains constant.
The torque-free precession rate of an object with an axis of symmetry, such as a disk, spinning about an axis not aligned with that axis of symmetry can be calculated as follows:
ω_p = (I_s ω_s) / (I_p cos α)
where ω_p is the precession rate, ω_s is the spin rate about the axis of symmetry, α is the angle between the axis of symmetry and the axis about which it precesses, I_s is the moment of inertia about the axis of symmetry, and I_p is the moment of inertia about either of the other two perpendicular principal axes. They should be the same, due to the symmetry of the disk.
For a generic solid object without any axis of symmetry, the evolution of the object's orientation, represented (for example) by a rotation matrix R that transforms internal to external coordinates, may be numerically simulated. Given the object's fixed internal moment of inertia tensor I0 and fixed external angular momentum L, the instantaneous angular velocity is ω(t) = R(t) I0⁻¹ R(t)ᵀ L. Precession occurs by repeatedly recalculating ω and applying a small rotation vector ω dt for the short time dt; e.g., R(t + dt) = exp([ω dt]×) R(t), where [ω dt]× is the skew-symmetric matrix of the vector ω dt. The errors induced by finite time steps tend to increase the rotational kinetic energy, E = ω · L / 2; this unphysical tendency can be counteracted by repeatedly applying a small rotation vector perpendicular to both ω and L, chosen so that E is restored to its conserved value.
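A minimal numerical sketch of this scheme; the inertia tensor, angular momentum, time step and step count are arbitrary illustrative values, and the energy-restoring correction described above is omitted.

import numpy as np

def skew(v):
    """Skew-symmetric matrix [v]x such that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def rotation_from_vector(v):
    """Rotation matrix exp([v]x) for a small rotation vector v (Rodrigues' formula)."""
    angle = np.linalg.norm(v)
    if angle < 1e-12:
        return np.eye(3)
    K = skew(v / angle)
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

# Principal moments of inertia of an asymmetric body (body frame) and a fixed
# external angular momentum L; both are made-up values for illustration.
I_body = np.diag([1.0, 2.0, 3.0])
I_inv = np.linalg.inv(I_body)
L = np.array([0.1, 0.0, 1.0])

R = np.eye(3)          # orientation: body -> external coordinates
dt = 1e-3
for step in range(10000):
    omega = R @ I_inv @ R.T @ L                  # instantaneous angular velocity
    R = rotation_from_vector(omega * dt) @ R     # apply the small rotation omega*dt

print("kinetic energy:", 0.5 * omega @ L)        # drifts slowly upward, as noted above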
Another type of torque-free precession can occur when there are multiple reference frames at work. For example, the earth is subject to local torque induced precession due to the gravity of the sun and moon acting upon the earth’s axis, but at the same time the solar system is moving around the galactic center. As a consequence, an accurate measurement of the earth’s axial reorientation relative to objects outside the frame of the moving galaxy (such as distant quasars commonly used as precession measurement reference points) must account for a minor amount of non-local torque-free precession, due to the solar system’s motion.
Torque-induced precession (gyroscopic precession) is the phenomenon in which the axis of a spinning object (e.g., a part of a gyroscope) "wobbles" when a torque is applied to it, which causes a distribution of force around the acted axis. The phenomenon is commonly seen in a spinning toy top, but all rotating objects can undergo precession. If the speed of the rotation and the magnitude of the torque are constant, the axis will describe a cone, its movement at any instant being at right angles to the direction of the torque. In the case of a toy top, if the axis is not perfectly vertical, the torque is applied by the force of gravity tending to tip it over.
The device depicted on the right is gimbal mounted. From inside to outside there are three axes of rotation: the hub of the wheel, the gimbal axis, and the vertical pivot.
To distinguish between the two horizontal axes, rotation around the wheel hub will be called spinning, and rotation around the gimbal axis will be called pitching. Rotation around the vertical pivot axis is called rotation.
First, imagine that the entire device is rotating around the (vertical) pivot axis. Then, spinning of the wheel (around the wheelhub) is added. Imagine the gimbal axis to be locked, so that the wheel cannot pitch. The gimbal axis has sensors, that measure whether there is a torque around the gimbal axis.
In the picture, a section of the wheel has been named dm1. At the depicted moment in time, section dm1 is at the perimeter of the rotating motion around the (vertical) pivot axis. Section dm1, therefore, has a lot of angular rotating velocity with respect to the rotation around the pivot axis, and as dm1 is forced closer to the pivot axis of the rotation (by the wheel spinning further), due to the Coriolis effect dm1 tends to move in the direction of the top-left arrow in the diagram (shown at 45°) in the direction of rotation around the pivot axis. Section dm2 of the wheel starts out at the vertical pivot axis, and thus initially has zero angular rotating velocity with respect to the rotation around the pivot axis, before the wheel spins further. A force (again, a Coriolis force) would be required to increase section dm2's velocity up to the angular rotating velocity at the perimeter of the rotating motion around the pivot axis. If that force is not provided, then section dm2's inertia will make it move in the direction of the top-right arrow. Note that both arrows point in the same direction.
The same reasoning applies for the bottom half of the wheel, but there the arrows point in the opposite direction to that of the top arrows. Combined over the entire wheel, there is a torque around the gimbal axis when some spinning is added to rotation around a vertical axis.
It is important to note that the torque around the gimbal axis arises without any delay; the response is instantaneous.
In the discussion above, the setup was kept unchanging by preventing pitching around the gimbal axis. In the case of a spinning toy top, when the spinning top starts tilting, gravity exerts a torque. However, instead of rolling over, the spinning top just pitches a little. This pitching motion reorients the spinning top with respect to the torque that is being exerted. The result is that the torque exerted by gravity - via the pitching motion - elicits gyroscopic precession (which in turn yields a counter torque against the gravity torque) rather than causing the spinning top to fall to its side.
Gyroscopic precession also plays a large role in the flight controls on helicopters. Since the driving force behind helicopters is the rotor disk (which rotates), gyroscopic precession comes into play. If the rotor disk is to be tilted forward (to gain forward velocity), its rotation requires that the downward net force on the blade be applied roughly 90 degrees (depending on blade configuration) before that blade gets to the 12 o'clock position. This means the pitch of each blade will decrease as they pass through 3 o'clock, assuming the rotor blades are turning CCW as viewed from above looking down at the helicopter. The same applies if a banked turn to the left or right is desired; the pitch change will occur when the blades are at 6 and 12 o'clock, as appropriate. Whatever position the rotor disc needs to placed at, each blade must change its pitch to effect that change 90 degrees prior to reaching the position that would be necessary for a non-rotating disc.
To ensure the pilot's inputs are correct, the aircraft has corrective linkages that vary the blade pitch in advance of the blade's position relative to the swashplate. Although the swashplate moves in the intuitively correct direction, the blade pitch links are arranged to transmit the pitch in advance of the blade's position.
Precession is the result of the angular velocity of rotation and the angular velocity produced by the torque. It is an angular velocity about a line that makes an angle with the permanent rotation axis, and this angle lies in a plane at right angles to the plane of the couple producing the torque. The permanent axis must turn towards this line, since the body cannot continue to rotate about any line that is not a principal axis of maximum moment of inertia; that is, the permanent axis turns in a direction at right angles to that in which the torque might be expected to turn it. If the rotating body is symmetrical and its motion unconstrained, and, if the torque on the spin axis is at right angles to that axis, the axis of precession will be perpendicular to both the spin axis and torque axis.
Under these circumstances the angular velocity of precession is given by:
ω_p = m g r / (I_s ω_s)
In which I_s is the moment of inertia, ω_s is the angular velocity of spin about the spin axis, and m g and r are the force responsible for the torque and the perpendicular distance of the spin axis from the axis of precession. The torque vector originates at the center of mass. Using ω_s = 2π/T_s, we find that the period of precession is given by:
T_p = 4π² I_s / (m g r T_s)
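As a rough illustration of these formulas, here is a small sketch with made-up numbers for a toy gyroscope; the disk mass, radius, spin rate and lever arm are all assumptions, not values from the text.

from math import pi

m = 0.2              # kg, mass of the gyroscope disk
disk_radius = 0.05   # m
r = 0.04             # m, perpendicular distance from the pivot (lever arm of gravity)
g = 9.81             # m/s^2
I_s = 0.5 * m * disk_radius**2          # moment of inertia of a disk about its spin axis
omega_s = 2 * pi * 100.0                # spin angular velocity for 100 rev/s, rad/s

omega_p = m * g * r / (I_s * omega_s)   # precession angular velocity, rad/s
T_p = 2 * pi / omega_p                  # precession period, s
print(omega_p, T_p)                     # ~0.5 rad/s, ~12.6 s for these numbers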
There is a non-mathematical way of visualizing the cause of gyroscopic precession. The behavior of spinning objects simply obeys the law of inertia by resisting any change in direction. If a force is applied to the object to change the orientation of its spin axis, the object behaves as if that force had been applied 90 degrees ahead, in the direction of rotation. Here is why: a solid object can be thought of as an assembly of individual molecules. If the object is spinning, each molecule's direction of travel constantly changes as that molecule revolves around the object's spin axis. When a force is applied, molecules are forced to change direction at certain points along their path around the object's axis, and this new change in direction is resisted by inertia.
Imagine the object to be a spinning bicycle wheel, held at the axle in the hands of a subject. The wheel is spinning clock-wise as seen from a viewer to the subject’s right. Clock positions on the wheel are given relative to this viewer. As the wheel spins, the molecules comprising it are travelling vertically downward the instant they pass the 3 o'clock position, horizontally to the left the instant they pass 6 o'clock, vertically upward at 9 o'clock, and horizontally right at 12 o'clock. Between these positions, each molecule travels a combination of these directions, which should be kept in mind as you read ahead. If the viewer applies a force to the wheel at the 3 o'clock position, the molecules at that location are not being forced to change direction; they still travel vertically downward, unaffected by the force. The same goes for the molecules at 9 o'clock; they are still travelling vertically upward, unaffected by the force that was applied. But, molecules at 6 and 12 o'clock ARE being "told" to change direction. At 6 o'clock, molecules are forced to veer toward the viewer. At the same time, molecules that are passing 12 o'clock are being forced to veer away from the viewer. The inertia of those molecules resists this change in direction. The result is that they apply an equal and opposite force in response. At 6 o'clock, molecules exert a push directly away from the viewer. Molecules at 12 o'clock push directly toward the viewer. This all happens instantaneously as the force is applied at 3 o'clock. This makes the wheel as a whole tilt toward the viewer. Thus, when the force was applied at 3 o'clock, the wheel behaved as if the force was applied at 6 o'clock--90 degrees ahead in the direction of rotation.
Precession causes another peculiar behavior for spinning objects such as the wheel in this scenario. If the subject holding the wheel removes one hand from the axle, the wheel will remain upright, supported from only one side. However, it will immediately take on an additional motion; it will begin to rotate about a vertical axis, pivoting at the point of support as it continues its axial spin. If the wheel was not spinning, it would topple over and fall if one hand was removed. The initial motion of the wheel beginning to topple over is equivalent to applying a force to it at 12 o'clock in the direction of the unsupported side. When the wheel is spinning, the sudden lack of support at one end of the axle is again equivalent to this force. So instead of toppling over, the wheel behaves as if the force was applied at 3 or 9 o’clock, depending on the direction of spin and which hand was removed. This causes the wheel to begin pivoting at the point of support while remaining upright.
The special and general theories of relativity give three types of corrections to the Newtonian precession of a gyroscope near a large mass such as the Earth, as described above. They are:
- Thomas precession, a special relativistic correction accounting for the observer being in a rotating non-inertial frame.
- de Sitter precession, a general relativistic correction accounting for the Schwarzschild metric of curved space near a large non-rotating mass.
- Lense-Thirring precession, a general relativistic correction accounting for frame dragging by the Kerr metric of curved space near a large rotating mass.
In astronomy, precession refers to any of several gravity-induced, slow and continuous changes in an astronomical body's rotational axis or orbital path. Precession of the equinoxes, perihelion precession, changes in the tilt of the Earth's axis to its orbit, and the eccentricity of its orbit over tens of thousands of years are all important parts of the astronomical theory of ice ages.
Axial precession (precession of the equinoxes)
Axial precession is the movement of the rotational axis of an astronomical body, whereby the axis slowly traces out a cone. In the case of Earth, this type of precession is also known as the precession of the equinoxes, lunisolar precession, or precession of the equator. Earth goes through one such complete precessional cycle in a period of approximately 26,000 years or 1° every 72 years, during which the positions of stars will slowly change in both equatorial coordinates and ecliptic longitude. Over this cycle, Earth's north axial pole moves from where it is now, within 1° of Polaris, in a circle around the ecliptic pole, with an angular radius of about 23.5 degrees.
Hipparchus is the earliest known astronomer to recognize and assess the precession of the equinoxes at about 1° per century (which is not far from the actual value for antiquity, 1.38°). The precession of Earth's axis was later explained by Newtonian physics. Being an oblate spheroid, the Earth has a nonspherical shape, bulging outward at the equator. The gravitational tidal forces of the Moon and Sun apply torque to the equator, attempting to pull the equatorial bulge into the plane of the ecliptic, but instead causing it to precess. The torque exerted by the planets, particularly Jupiter, also plays a role.
The orbit of a planet around the Sun is not really an ellipse but a flower-petal shape because the major axis of each planet's elliptical orbit also precesses within its orbital plane, partly in response to perturbations in the form of the changing gravitational forces exerted by other planets. This is called perihelion precession or apsidal precession.
Discrepancies between the observed perihelion precession rate of the planet Mercury and that predicted by classical mechanics were prominent among the forms of experimental evidence leading to the acceptance of Einstein's Theory of Relativity (in particular, his General Theory of Relativity), which accurately predicted the anomalies.
Mathematics Grade 3
(1) Students develop an understanding of the meanings of multiplication and division of whole numbers through activities and problems involving equal-sized groups, arrays, and area models; multiplication is finding an unknown product, and division is finding an unknown factor in these situations. For equal-sized group situations, division can require finding the unknown number of groups or the unknown group size. Students use properties of operations to calculate products of whole numbers, using increasingly sophisticated strategies based on these properties to solve multiplication and division problems involving single-digit factors. By comparing a variety of solution strategies, students learn the relationship between multiplication and division.
(2) Students develop an understanding of fractions, beginning with unit fractions. Students view fractions in general as being built out of unit fractions, and they use fractions along with visual fraction models to represent parts of a whole. Students understand that the size of a fractional part is relative to the size of the whole. For example, 1/2 of the paint in a small bucket could be less paint than 1/3 of the paint in a larger bucket, but 1/3 of a ribbon is longer than 1/5 of the same ribbon because when the ribbon is divided into 3 equal parts, the parts are longer than when the ribbon is divided into 5 equal parts. Students are able to use fractions to represent numbers equal to, less than, and greater than one. They solve problems that involve comparing fractions by using visual fraction models and strategies based on noticing equal numerators or denominators.
(3) Students recognize area as an attribute of two-dimensional regions. They measure the area of a shape by finding the total number of same-size units of area required to cover the shape without gaps or overlaps, a square with sides of unit length being the standard unit for measuring area. Students understand that rectangular arrays can be decomposed into identical rows or into identical columns. By decomposing rectangles into rectangular arrays of squares, students connect area to multiplication, and justify using multiplication to determine the area of a rectangle.
(4) Students describe, analyze, and compare properties of two-dimensional shapes. They compare and classify shapes by their sides and angles, and connect these with definitions of shapes. Students also relate their fraction work to geometry by expressing the area of part of a shape as a unit fraction of the whole.
Grade 3 Overview
Operations and Algebraic Thinking
Number and Operations in Base Ten
Number and Operations - Fractions
Measurement and Data
Core Standards of the Course
1. Interpret products of whole numbers, e.g., interpret 5 × 7 as the total number of objects in 5 groups of 7 objects each. For example, describe a context in which a total number of objects can be expressed as 5 × 7.
2. Interpret whole-number quotients of whole numbers, e.g., interpret 56 ÷ 8 as the number of objects in each share when 56 objects are partitioned equally into 8 shares, or as a number of shares when 56 objects are partitioned into equal shares of 8 objects each. For example, describe a context in which a number of shares or a number of groups can be expressed as 56 ÷ 8.
3. Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.
4. Determine the unknown whole number in a multiplication or division equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 × ? = 48, 5 = _ ÷ 3, 6 × 6 = ?
5. Apply properties of operations as strategies to multiply and divide. Examples: If 6 × 4 = 24 is known, then 4 × 6 = 24 is also known. (Commutative property of multiplication.) 3 × 5 × 2 can be found by 3 × 5 = 15, then 15 × 2 = 30, or by 5 × 2 = 10, then 3 × 10 = 30. (Associative property of multiplication.) Knowing that 8 × 5 = 40 and 8 × 2 = 16, one can find 8 × 7 as 8 × (5 + 2) = (8 × 5) + (8 × 2) = 40 + 16 = 56. (Distributive property.)
7. Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.
8. Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding.
9. Identify arithmetic patterns (including patterns in the addition table or multiplication table), and explain them using properties of operations. For example, observe that 4 times a number is always even, and explain why 4 times a number can be decomposed into two equal addends.
- Represent a fraction 1/b on a number line diagram by defining the interval from 0 to 1 as the whole and partitioning it into b equal parts. Recognize that each part has size 1/b and that the endpoint of the part based at 0 locates the number 1/b on the number line.
- Represent a fraction a/b on a number line diagram by marking off a lengths 1/b from 0. Recognize that the resulting interval has size a/b and that its endpoint locates the number a/b on the number line.
- Understand two fractions as equivalent (equal) if they are the same size, or the same point on a number line.
- Recognize and generate simple equivalent fractions (e.g., 1/2 = 2/4, 4/6 = 2/3). Explain why the fractions are equivalent, e.g., by using a visual fraction model.
- Express whole numbers as fractions, and recognize fractions that are equivalent to whole numbers. Examples: Express 3 in the form 3 = 3/1; recognize that 6/1 = 6; locate 4/4 and 1 at the same point of a number line diagram.
- Compare two fractions with the same numerator or the same denominator by reasoning about their size. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.
1. Tell and write time to the nearest minute and measure time intervals in minutes. Solve word problems involving addition and subtraction of time intervals in minutes, e.g., by representing the problem on a number line diagram.
2. Measure and estimate liquid volumes and masses of objects using standard units of grams (g), kilograms (kg), and liters (l). Add, subtract, multiply, or divide to solve one-step word problems involving masses or volumes that are given in the same units, e.g., by using drawings (such as a beaker with a measurement scale) to represent the problem.
3. Draw a scaled picture graph and a scaled bar graph to represent a data set with several categories. Solve one- and two-step “how many more” and “how many less” problems using information presented in scaled bar graphs. For example, draw a bar graph in which each square in the bar graph might represent 5 pets.
4. Generate measurement data by measuring lengths using rulers marked with halves and fourths of an inch. Show the data by making a line plot, where the horizontal scale is marked off in appropriate units— whole numbers, halves, or quarters.
- A square with side length 1 unit, called “a unit square,” is said to have “one square unit” of area, and can be used to measure area.
- A plane figure which can be covered without gaps or overlaps by n unit squares is said to have an area of n square units.
- Find the area of a rectangle with whole-number side lengths by tiling it, and show that the area is the same as would be found by multiplying the side lengths.
- Multiply side lengths to find areas of rectangles with whole-number side lengths in the context of solving real world and mathematical problems, and represent whole-number products as rectangular areas in mathematical reasoning.
- Use tiling to show in a concrete case that the area of a rectangle with whole-number side lengths a and b + c is the sum of a × b and a × c. Use area models to represent the distributive property in mathematical reasoning.
- Recognize area as additive. Find areas of rectilinear figures by decomposing them into non-overlapping rectangles and adding the areas of the non-overlapping parts, applying this technique to solve real world problems.
8. Solve real world and mathematical problems involving perimeters of polygons, including finding the perimeter given the side lengths, finding an unknown side length, and exhibiting rectangles with the same perimeter and different areas or with the same area and different perimeters.
1. Understand that shapes in different categories (e.g., rhombuses, rectangles, and others) may share attributes (e.g., having four sides), and that the shared attributes can define a larger category (e.g., quadrilaterals). Recognize rhombuses, rectangles, and squares as examples of quadrilaterals, and draw examples of quadrilaterals that do not belong to any of these subcategories.
2. Partition shapes into parts with equal areas. Express the area of each part as a unit fraction of the whole. For example, partition a shape into 4 parts with equal area, and describe the area of each part as 1/4 of the area of the shape.
A friendly science with concrete postulates, straightforward analysis, ancient history and useful applications
Hydrostatics is about the pressures exerted by a fluid at rest. Any fluid is meant, not just water. It is usually relegated to an early chapter in Fluid Mechanics texts, since its results are widely used in that study. The study yields many useful results of its own, however, such as forces on dams, buoyancy and hydraulic actuation, and is well worth studying for such practical reasons. It is an excellent example of deductive mathematical physics, one that can be understood easily and completely from a very few fundamentals, and in which the predictions agree closely with experiment. There are few better illustrations of the use of the integral calculus, as well as the principles of ordinary statics, available to the student. A great deal can be done with only elementary mathematics. Properly adapted, the material can be used from the earliest introduction of school science, giving an excellent example of a quantitative science with many possibilities for hands-on experiences.
The definition of a fluid deserves careful consideration. Although time is not a factor in hydrostatics, it enters in the approach to hydrostatic equilibrium. It is usually stated that a fluid is a substance that cannot resist a shearing stress, so that pressures are normal to confining surfaces. Geology has now shown us clearly that there are substances which can resist shearing forces over short time intervals, and appear to be typical solids, but which flow like liquids over long time intervals. Such materials include wax and pitch, ice, and even rock. A ball of pitch, which can be shattered by a hammer, will spread out and flow in months. Ice, a typical solid, will flow in a period of years, as shown in glaciers, and rock will flow over hundreds of years, as in convection in the mantle of the earth. Shear earthquake waves, with periods of seconds, propagate deep in the earth, though the rock there can flow like a liquid when considered over centuries. The rate of shearing may not be strictly proportional to the stress, but exists even with low stress. Viscosity may be the physical property that varies over the largest numerical range, competing with electrical resistivity.
There are several familiar topics in hydrostatics which often appear in expositions of introductory science, and which are also of historical interest that can enliven their presentation. A number of them are discussed briefly in the sections that follow.
A study of hydrostatics can also include capillarity, the ideal gas laws, the velocity of sound, and hygrometry. These interesting applications will not be discussed in this article. At a beginning level, it may also be interesting to study the volumes and areas of certain shapes, or at a more advanced level, the forces exerted by heavy liquids on their containers. Hydrostatics is a very concrete science that avoids esoteric concepts and advanced mathematics. It is also much easier to demonstrate than Newtonian mechanics.
By a fluid, we have a material in mind like water or air, two very common and important fluids. Water is incompressible, while air is very compressible, but both are fluids. Water has a definite volume; air does not. Water and air have low viscosity; that is, layers of them slide very easily on one another, and they quickly assume their permanent shapes when disturbed by rapid flows. Other fluids, such as molasses, may have high viscosity and take a long time to come to equilibrium, but they are no less fluids. The coefficient of viscosity is the ratio of the shearing force to the velocity gradient. Hydrostatics deals with permanent, time-independent states of fluids, so viscosity does not appear, except as discussed in the Introduction.
A fluid, therefore, is a substance that cannot exert any permanent forces tangential to a boundary. Any force that it exerts on a boundary must be normal to the boundary. Such a force is proportional to the area on which it is exerted, and is called a pressure. We can imagine any surface in a fluid as dividing the fluid into parts pressing on each other, as if it were a thin material membrane, and so think of the pressure at any point in the fluid, not just at the boundaries. In order for any small element of the fluid to be in equilibrium, the pressure must be the same in all directions (or the element would move in the direction of least pressure), and if no other forces are acting on the body of the fluid, the pressure must be the same at all neighbouring points. Therefore, in this case the pressure will be the same throughout the fluid, and the same in any direction at a point (Pascal's Principle). Pressure is expressed in units of force per unit area such as dyne/cm2, N/m2 (pascal), pounds/in2 (psi) or pounds/ft2 (psf). The axiom that if a certain volume of fluid were somehow made solid, the equilibrium of forces would not be disturbed, is useful in reasoning about forces in fluids.
On earth, fluids are also subject to the force of gravity, which acts vertically downward, and has a magnitude γ = ρg per unit volume, where g is the acceleration of gravity, approximately 981 cm/s2 or 32.15 ft/s2, ρ is the density, the mass per unit volume, expressed in g/cm3, kg/m3, or slug/ft3, and γ is the specific weight, measured in lb/in3, or lb/ft3 (pcf). Gravitation is an example of a body force that disturbs the equality of pressure in a fluid. The presence of the gravitational body force causes the pressure to increase with depth, according to the equation dp = ρg dh, in order to support the water above. We call this relation the barometric equation, for when this equation is integrated, we find the variation of pressure with height or depth. If the fluid is incompressible, the equation can be integrated at once, and the pressure as a function of depth h is p = ρgh + p0. The density of water is about 1 g/cm3, or its specific weight is 62.4 pcf. We may ask what depth of water gives the normal sea-level atmospheric pressure of 14.7 psi, or 2117 psf. This is simply 2117 / 62.4 = 33.9 ft of water. This is the maximum height to which water can be raised by a suction pump, or, more correctly, can be supported by atmospheric pressure.
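The barometric relation for an incompressible liquid is easy to check numerically. The short Python sketch below assumes fresh water at 1000 kg/m3 and a standard atmosphere of 101,325 Pa, and reproduces the 33.9 ft figure quoted above.

```python
RHO_WATER = 1000.0      # kg/m^3, fresh water (assumed)
G = 9.81                # m/s^2
P_ATM = 101325.0        # Pa, standard atmosphere

def gauge_pressure(depth_m, rho=RHO_WATER, g=G):
    """Hydrostatic (gauge) pressure at a given depth in an incompressible fluid."""
    return rho * g * depth_m

# Pressure 10 m down in fresh water, gauge and absolute:
p_gauge = gauge_pressure(10.0)
print(f"gauge pressure at 10 m:    {p_gauge/1000:.1f} kPa")
print(f"absolute pressure at 10 m: {(p_gauge + P_ATM)/1000:.1f} kPa")

# Depth of water that balances one atmosphere (the 'suction' limit):
h_max = P_ATM / (RHO_WATER * G)
print(f"one atmosphere of water: {h_max:.2f} m = {h_max/0.3048:.1f} ft")
```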
Professor James Thomson (brother of William Thomson, Lord Kelvin) illustrated the equality of pressure by a "curtain-ring" analogy shown in the diagram. A section of the toroid was identified, imagined to be solidified, and its equilibrium was analyzed. The forces exerted on the curved surfaces have no component along the normal to a plane section, so the pressures at any two points of a plane must be equal, since the fluid represented by the curtain ring was in equilibrium. The right-hand part of the diagram illustrates the equality of pressures in orthogonal directions. This can be extended to any direction whatever, so Pascal's Principle is established. This demonstration is similar to the usual one using a triangular prism and considering the forces on the end and lateral faces separately.
When gravity acts, the liquid assumes a free surface perpendicular to gravity, which can be proved by Thomson's method. A straight cylinder of unit cross-sectional area (assumed only for ease in the arithmetic) can be used to find the increase of pressure with depth. Indeed, we see that p2 = p1 + ρgh. The upper surface of the cylinder can be placed at the free surface if desired. The pressure is now the same in any direction at a point, but is greater at points that lie deeper. From this same figure, it is easy to prove Archimedes's Principle, that the buoyant force is equal to the weight of the displaced fluid, and passes through the center of mass of this displaced fluid.
Ingenious geometric arguments can be used to substitute for easier, but less transparent arguments using calculus. For example, the force acting on one side of an inclined plane surface whose projection is AB can be found as in the diagram at the right. O is the point at which the prolonged projection intersects the free surface. The line AC' perpendicular to the plane is made equal to the depth AC of point A, and line BD' is similarly drawn equal to BD. The line OD' also passes through C', by proportionality of triangles OAC' and OAD'. Therefore, the thrust F on the plane is the weight of a prism of fluid of cross-section AC'D'B, passing through its centroid normal to plane AB. Note that the thrust is equal to the specific weight ρg times the area times the depth of the center of the area, but its line of action does not pass through the center, but below it, at the center of thrust. The same result can be obtained with calculus by summing the pressures and the moments, of course.
Suppose a vertical pipe is stood in a pool of water, and a vacuum pump applied to the upper end. Before we start the pump, the water levels outside and inside the pipe are equal, and the pressures on the surfaces are also equal, and equal to the atmospheric pressure. Now start the pump. When it has sucked all the air out above the water, the pressure on the surface of the water inside the pipe is zero, and the pressure at the level of the water on the outside of the pipe is still the atmospheric pressure. Of course, there is the vapour pressure of the water to worry about if you want to be precise, but we neglect this complication in making our point. We require a column of water 33.9 ft high inside the pipe, with a vacuum above it, to balance the atmospheric pressure. Now do the same thing with liquid mercury, whose density at 0 °C is 13.5951 times that of water. The height of the column is 2.494 ft, 29.92 in, or 760.0 mm. This definition of the standard atmospheric pressure was established by Regnault in the mid-19th century. In Britain, 30 inHg (inches of mercury) had been used previously.
As a practical matter, it is convenient to measure pressure differences by measuring the height of liquid columns, a practice known as manometry. The barometer is a familiar example of this, and atmospheric pressures are traditionally given in terms of the length of a mercury column. To make a barometer, the barometric tube, closed at one end, is filled with mercury and then inverted and placed in a mercury reservoir. Corrections must be made for temperature, because the density of mercury depends on the temperature, and the brass scale expands, for capillarity if the tube is less than about 1 cm in diameter, and even slightly for altitude, since the value of g changes with altitude. The vapor pressure of mercury is only 0.001201 mmHg at 20°C, so a correction from this source is negligible. For the usual case of a mercury column (α = 0.000181792 per °C) and a brass scale (α = 0.0000184 per °C) the temperature correction is -2.74 mm at 760 mm and 20°C. Before reading the barometer scale, the mercury reservoir is raised or lowered until the surface of the mercury just touches a reference point, which is mirrored in the surface so it is easy to determine the proper position.
An aneroid barometer uses a partially evacuated chamber of thin metal that expands and contracts according to the external pressure. This movement is communicated to a needle that revolves in a dial. The materials and construction are arranged to give a low temperature coefficient. The instrument must be calibrated before use, and is usually arranged to read directly in elevations. An aneroid barometer is much easier to use in field observations, such as in reconnaissance surveys. In a particular case, it would be read at the start of the day at the base camp, at various points in the vicinity, and then finally at the starting point, to determine the change in pressure with time. The height differences can be calculated from h = 60,360 log(P/p) [1 + (T + t - 64)/986] feet, where P and p are in the same units, and T, t are in °F.
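As a sketch of how the quoted leveling formula might be used in the field, the following Python function evaluates it for a pair of assumed barometer readings; the readings and temperatures are invented for the example, and the logarithm is taken as the common (base-10) logarithm, which matches the size of the constant.

```python
import math

def height_difference_ft(P_lower, p_upper, T_lower_F, t_upper_F):
    """Approximate elevation difference (ft) between two stations from their
    barometer readings, using the leveling formula quoted in the text.
    Pressures must be in the same units; temperatures are in deg F."""
    return 60360.0 * math.log10(P_lower / p_upper) * \
        (1.0 + (T_lower_F + t_upper_F - 64.0) / 986.0)

# Assumed readings: 29.92 inHg at base camp, 25.00 inHg at a hill station,
# both stations at 60 deg F.
print(f"{height_difference_ft(29.92, 25.00, 60, 60):.0f} ft")
```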
An absolute pressure is referred to a vacuum, while a gauge pressure is referred to the atmospheric pressure at the moment. A negative gauge pressure is a (partial) vacuum. When a vacuum is stated to be so many inches, this means the pressure below the atmospheric pressure of about 30 in. A vacuum of 25 inches is the same thing as an absolute pressure of 5 inches (of mercury). Pressures are very frequently stated in terms of the height of a fluid. If it is the same fluid whose pressure is being given, it is usually called "head," and the factor connecting the head and the pressure is the weight density ρg. In the English engineer's system, weight density is in pounds per cubic inch or cubic foot. A head of 10 ft is equivalent to a pressure of 624 psf, or 4.33 psi. It can also be considered an energy availability of ft-lb per lb. Water with a pressure head of 10 ft can furnish the same energy as an equal amount of water raised by 10 ft. Water flowing in a pipe is subject to head loss because of friction.
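The head-to-pressure conversion described here is a one-line calculation; the sketch below simply multiplies the head by the specific weight of water (62.4 pcf) and reproduces the 624 psf and 4.33 psi figures quoted above.

```python
GAMMA_WATER_PCF = 62.4      # specific weight of water, lb/ft^3

def head_to_psf(head_ft, gamma=GAMMA_WATER_PCF):
    """Pressure (lb/ft^2) corresponding to a head of water in feet."""
    return gamma * head_ft

def head_to_psi(head_ft, gamma=GAMMA_WATER_PCF):
    """Same conversion expressed in lb/in^2 (144 in^2 per ft^2)."""
    return head_to_psf(head_ft, gamma) / 144.0

print(head_to_psf(10.0))              # 624.0 psf
print(round(head_to_psi(10.0), 2))    # 4.33 psi
```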
Take a jar and a basin of water. Fill the jar with water and invert it under the water in the basin. Now raise the jar as far as you can without allowing its mouth to come above the water surface. It is always a little surprising to see that the jar does not empty itself, but the water remains with no visible means of support. By blowing through a straw, one can put air into the jar, and as much water leaves as air enters. In fact, this is a famous method of collecting insoluble gases in the chemical laboratory, or for supplying hummingbird feeders. It is good to remind oneself of exactly the balance of forces involved.
Another application of pressure is the siphon. The name is Greek for the tube that was used for drawing wine from a cask. This is a tube filled with fluid connecting two containers of fluid, normally rising higher than the water levels in the two containers, at least to pass over their rims. In the diagram, the two water levels are the same, so there will be no flow. When a siphon goes below the free water levels, it is called an inverted siphon. If the levels in the two basins are not equal, fluid flows from the basin with the higher level into the one with the lower level, until the levels are equal. A siphon can be made by filling the tube, closing the ends, and then putting the ends under the surface on both sides. Alternatively, the tube can be placed in one fluid and filled by sucking on it. When it is full, the other end is put in place. The analysis of the siphon is easy, and should be obvious. The pressure rises or falls as described by the barometric equation through the siphon tube. There is obviously a maximum height for the siphon which is the same as the limit of the suction pump, about 34 feet. Inverted siphons (which are really not siphons at all) are sometimes used in pipelines to cross valleys. Differences in elevation are usually too great to use regular siphons to cross hills, so the fluids must be pressurized by pumps so the pressure does not fall to zero at the crests. The Quabbin Aqueduct, which supplies water to Boston, includes pumped siphons.
As the level in the supply container falls, the pressure difference decreases. In some cases, one would like a source that would provide a constant pressure at the outlet of the siphon. An ingenious way to arrive at this is shown in the figure, Mariotte's Bottle. The plug must seal the air space at the top very well. A partial vacuum is created in the air space by the fall of the water level exactly equal to the pressure difference between the surface and the end of the open tube connecting to the atmosphere. The pressure at this point is, therefore, maintained at atmospheric while water is delivered. The head available at the nozzle as shown is equal to h. This would make a good experiment to verify the relation V = √(2gh) since h and the horizontal distance reached by the jet for a given fall can both be measured easily, or the discharge from an orifice.
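A small Python sketch of the suggested experiment, using V = √(2gh) for the efflux speed and simple projectile motion for the horizontal reach of the jet; the head and fall distances used are assumed example values.

```python
import math

G = 9.81   # m/s^2

def efflux_speed(head_m):
    """Speed of a jet issuing under a head h, V = sqrt(2 g h)."""
    return math.sqrt(2 * G * head_m)

def jet_range(head_m, fall_m):
    """Horizontal distance reached by a horizontal jet that falls a height
    'fall_m' after leaving the nozzle (air resistance neglected)."""
    v = efflux_speed(head_m)
    t = math.sqrt(2 * fall_m / G)    # time to fall
    return v * t                     # equals 2*sqrt(head*fall)

print(f"speed under 0.30 m head: {efflux_speed(0.30):.2f} m/s")
print(f"range for 0.30 m head and 1.0 m fall: {jet_range(0.30, 1.0):.2f} m")
```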
The term "siphon" is often used in a different sense. In biology, a siphon is simply a tubular structure. A thermal siphon is a means to circulate a liquid by convection. A soda siphon is a source of carbonated water, while siphon coffee (or vacuum coffee) is made in an apparatus where the steam from boiling water pushes hot water up above the coffee and filter, and then the vacuum causes the water to descend again when the heat is removed (invented by Löff in 1830). None of these arrangments is actually a siphon in the physicist's sense. The siphon tube used in irrigation, and perhaps Thomson's siphon recorder of 1858, do use the siphon principle. The occasional spelling "syphon" is not supported by the Greek source.
In some cases, especially in plumbing, siphon action is not desired, especially when it may allow dirty water to mix with clean. In these cases, vacuum breakers may be used at high points to prevent this. Siphons work because of atmospheric pressure, and would not operate in a vacuum. In the case of water, pressure reduction would eventually reach the vapor pressure and the water would boil. Mercury, which has a very low vapor pressure, would simply separate leaving a Torricellian vacuum. The siphon would be re-established if the pressure is restored. A liquid column is unstable under a negative pressure.
Evangelista Torricelli (1608-1647), Galileo's student and secretary, a member of the Florentine Academy of Experiments, invented the mercury barometer in 1643, and brought the weight of the atmosphere to light. The mercury column was held up by the pressure of the atmosphere, not by horror vacui as Aristotle had supposed. Torricelli's early death was a blow to science, but his ideas were furthered by Blaise Pascal (1623-1662). Pascal had a barometer carried up the 1465 m high Puy de Dôme, an extinct volcano in the Auvergne just west of his home of Clermont-Ferrand in 1648 by Périer, his brother-in-law. Pascal's experimentum crucis is one of the triumphs of early modern science. The Puy de Dôme is not the highest peak in the Massif Central--the Puy de Sancy, at 1866 m is, but it was the closest. Clermont is now the centre of the French pneumatics industry.
The remarkable Otto von Guericke (1602-1686), Burgomeister of Magdeburg, Saxony, took up the cause, making the first vacuum pump, which he used in vivid demonstrations of the pressure of the atmosphere to the Imperial Diet at Regensburg in 1654. Famously, he evacuated a sphere consisting of two well-fitting hemispheres about a foot in diameter, and showed that 16 horses, 8 on each side, could not pull them apart. An original vacuum pump and hemispheres from 1663 are shown at the right (photo edited from the Deutsches Museum; see link below). He also showed that air had weight, and how much force it did require to separate evacuated hemispheres. Then, in England, Robert Hooke (1635-1703) made a vacuum pump for Robert Boyle (1627-1691). Christian Huygens (1629-1695) became interested in a visit to London in 1661 and had a vacuum pump built for him. By this time, Torricelli's doctrine had triumphed over the Church's support for horror vacui. This was one of the first victories for rational physics over the illusions of experience, and is well worth consideration.
Pascal demonstrated that the siphon worked by atmospheric pressure, not by horror vacui, by means of the apparatus shown at the left. The two beakers of mercury are connected by a three-way tube as shown, with the upper branch open to the atmosphere. As the large container is filled with water, pressure on the free surfaces of the mercury in the beakers pushes mercury into the tubes. When the state shown is reached, the beakers are connected by a mercury column, and the siphon starts, emptying the upper beaker and filling the lower. The mercury has been open to the atmosphere all this time, so if there were any horror vacui, it could have flowed in at will to soothe itself.
The mm of mercury is sometimes called a torr after Torricelli, and Pascal also has been honoured by a unit of pressure, a newton per square metre or 10 dyne/cm2. A cubic centimetre of air weighs 1.293 mg under standard conditions, and a cubic metre 1.293 kg, so air is by no means even approximately weightless, though it seems so. The weight of a sphere of air as small as 10 cm in diameter is 0.68 g, easily measurable with a chemical balance. The pressure of the atmosphere is also considerable, like being 34 ft under water, but we do not notice it. A bar is 10^6 dyne/cm2, very close to a standard atmosphere, which is 1.01325 bar. In meteorology, the millibar, mb, is used. 1 mb = 0.750 mmHg = 100 Pa = 1000 dyne/cm2. A kilogram-force per square centimeter is 981,000 dyne/cm2, also close to one atmosphere. In Europe, it has been considered approximately 1 atm, as in tire pressures and other engineering applications. As we have seen, in English units the atmosphere is about 14.7 psi, and this figure can be used to find other approximate equivalents. For example, 1 psi = 51.7 mmHg. In Britain, tons per square inch has been used for large pressures. The ton in this case is 2240 lb, not the American short ton. 1 tsi = 2240 psi, 1 tsf = 15.5 psi (about an atmosphere!).
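A brief Python sketch of the unit relationships discussed above; the conversion constants are the standard definitions (1 atm = 101,325 Pa, 1 torr = 1/760 atm, 1 mb = 100 Pa), and the printed values reproduce the figures in the text.

```python
# Size of each pressure unit in pascals.
PA_PER_ATM  = 101325.0          # standard atmosphere
PA_PER_MMHG = 101325.0 / 760.0  # one torr
PA_PER_MBAR = 100.0
PA_PER_PSI  = 6894.76

def convert(value, from_pa_per_unit, to_pa_per_unit):
    """Convert a pressure between two units given their size in pascals."""
    return value * from_pa_per_unit / to_pa_per_unit

print(round(convert(1.0, PA_PER_ATM, PA_PER_PSI), 2))    # ~14.70 psi per atm
print(round(convert(1.0, PA_PER_PSI, PA_PER_MMHG), 1))   # ~51.7 mmHg per psi
print(round(convert(1.0, PA_PER_MBAR, PA_PER_MMHG), 3))  # ~0.750 mmHg per mb
```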
The fluid in question here is air, which is by no means incompressible. As we rise in the atmosphere and the pressure decreases, the air also expands. To see what happens in this case, we can make use of the ideal gas equation of state, p = ρRT/M, and assume that the temperature T is constant. Then the change of pressure in a change of altitude dh is dp = -ρg dh = -(pM/RT)gdh, or dp/p = -(Mg/RT)dh. This is a little harder to integrate than before, but the result is ln p = -Mgh/RT + C, or ln(p/p0) = -Mgh/RT, or finally p = p0exp(-Mgh/RT). In an isothermal atmosphere, the pressure decreases exponentially. The quantity H = RT/Mg is called the "height of the homogeneous atmosphere" or the scale height, and is about 8 km at T = 273K. This quantity gives the rough scale of the decrease of pressure with height. Of course, the real atmosphere is by no means isothermal close to the ground, but cools with height nearly linearly at about 6.5°C/km up to an altitude of about 11 km at middle latitudes, called the tropopause. Above this is a region of nearly constant temperature, the stratosphere, and then at some higher level the atmosphere warms again to near its value at the surface. Of course, there are variations from the average values. When the temperature profile with height is known, we can find the pressure by numerical integration quite easily.
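The isothermal-atmosphere result can be turned into a short calculation. The sketch below assumes the usual gas constant and a mean molar mass of about 0.029 kg/mol for air, and reproduces the roughly 8 km scale height mentioned above.

```python
import math

R  = 8.314      # J/(mol K), gas constant
M  = 0.02896    # kg/mol, mean molar mass of air (assumed)
G  = 9.81       # m/s^2
P0 = 101325.0   # Pa, sea-level pressure

def scale_height(T_kelvin):
    """Scale height H = RT/(Mg) of an isothermal atmosphere."""
    return R * T_kelvin / (M * G)

def pressure(h_m, T_kelvin):
    """Pressure at height h in an isothermal atmosphere, p = p0 exp(-h/H)."""
    return P0 * math.exp(-h_m / scale_height(T_kelvin))

print(f"H at 273 K: {scale_height(273)/1000:.1f} km")            # about 8 km
print(f"p at 1500 m, 273 K: {pressure(1500, 273)/133.322:.0f} mmHg")
```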
The atmospheric pressure is of great importance in meteorology, since it determines the winds, which generally move at right angles to the direction of most rapid change of pressure, that is, along the isobars, which are contours of constant pressure. Certain typical weather patterns are associated with relatively high and relatively low pressures, and how they vary with time. The barometric pressure may be given in popular weather forecasts, though few people know what to do with it. If you live at a high altitude, your local weather reporter may report the pressure to be, say, 29.2 inches, but if you have a real barometer, you may well find that it is closer to 25 inches. At an elevation of 1500 m (near Denver, or the top of the Puy de Dôme), the atmospheric pressure is about 635 mm, and water boils at 95 °C. In fact, altitude is quite a problem in meteorology, since pressures must be measured at a common level to be meaningful. The barometric pressures quoted in the news are reduced to sea level by standard formulas, that amount to assuming that there is a column of air from your feet to sea level with a certain temperature distribution, and adding the weight of this column to the actual barometric pressure. This is only an arbitrary 'fix' and leads to some strange conclusions, such as the permanent winter highs above high plateaus that are really imaginary.
A cylinder and piston is a chamber of variable volume, a mechanism for transforming pressure to force. If A is the area of the cylinder, and p the pressure of the fluid in it, then F = pA is the force on the piston. If the piston moves outwards a distance dx, then the change in volume is dV = A dx. The work done by the fluid in this displacement is dW = F dx = pA dx = p dV. If the movement is slow enough that inertia and viscosity forces are negligible, then hydrostatics will still be valid. A process for which this is true is called quasi-static. Now consider two cylinders, possibly of different areas A and A', connected with each other and filled with fluid. For simplicity, suppose that there are no gravitational forces. Then the pressure is the same, p, in both cylinders. If the fluid is incompressible, then dV + dV' = 0, so that dW = p dV + p dV' = F dx + F' dx' = 0. This says the work done on one piston is equal to the work done by the other piston: the conservation of energy. The ratio of the forces on the pistons is F' / F = A' / A, the same as the ratio of the areas, and the ratios of the displacements dx' / dx = F / F' = A / A' is in the inverse ratio of the areas. This mechanism is the hydrostatic analogue of the lever, and is the basis of hydraulic activation.
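A minimal sketch of the hydraulic lever described here; the input force and the two piston areas are assumed for illustration. The output force is multiplied in the ratio of the areas, while the stroke is reduced in the inverse ratio, so the work on the two pistons balances.

```python
def hydraulic_press(force_in, area_in, area_out):
    """Force delivered by the large piston of a quasi-static hydraulic press,
    and the distance it moves per unit travel of the small piston."""
    pressure = force_in / area_in          # same pressure in both cylinders
    force_out = pressure * area_out
    stroke_ratio = area_in / area_out      # dx_out / dx_in, incompressible fluid
    return force_out, stroke_ratio

# Assumed example: 100 N on a 2 cm^2 piston driving a 200 cm^2 ram.
F_out, ratio = hydraulic_press(100.0, 2e-4, 2e-2)
print(f"output force: {F_out:.0f} N")                        # 10000 N
print(f"ram moves {ratio:.2f} m per m of piston travel")     # 0.01
```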
The most famous application of this principle is the Bramah hydraulic press, invented in 1785 by Joseph Bramah (1748-1814), who also invented many other useful machines, including a lock and a toilet. Now, it was not very remarkable to see the possibility of a hydraulic press; what was remarkable was to find a way to seal the large cylinder properly. This was the crucial problem that Bramah solved by his leather seal that was held against the cylinder and the piston by the hydraulic pressure itself.
In the presence of gravity, p' = p + ρgh, where h is the difference in elevation of the two cylinders. Now, p' dV' = -dV (p + ρgh) = -p dV - (ρ dV)gh, or the net work done in the process is p' dV' + p dV = -dM gh, where dM is the mass of fluid displaced from the lower cylinder to the upper cylinder. Again, energy is conserved if we take into account the potential energy of the fluid. Pumps are seen to fall within the province of hydrostatics if their operation is quasi-static, which means that dynamic or inertia forces are negligible.
Pumps are used to move or raise fluids. They are not only very useful, but are excellent examples of hydrostatics. Pumps are of two general types, hydrostatic or positive displacement pumps, and pumps depending on dynamic forces, such as centrifugal pumps. Here we will only consider positive displacement pumps, which can be understood purely by hydrostatic considerations. They have a piston (or equivalent) moving in a closely-fitting cylinder, and forces are exerted on the fluid by motion of the piston. We have already seen an important example of this in the hydraulic lever or hydraulic press, which we have called quasi-static. The simplest pump is the syringe, filled by withdrawing the piston and emptied by pressing it back in, as its port is immersed in the fluid or removed from it.
More complicated pumps have valves allowing them to work repetitively. These are usually check valves that open to allow passage in one direction, and close automatically to prevent reverse flow. There are many kinds of valves, and they are usually the most trouble-prone and complicated part of a pump. The force pump has two check valves in the cylinder, one for supply and the other for delivery. The supply valve opens when the cylinder volume increases, the delivery valve when the cylinder volume decreases. The lift pump has a supply valve, and a valve in the piston that allows the liquid to pass around it when the volume of the cylinder is reduced. The delivery in this case is from the upper part of the cylinder which the piston does not enter. Diaphragm pumps are force pumps in which the oscillating diaphragm takes the place of the piston. The diaphragm may be moved mechanically, or by the pressure of the fluid on one side of the diaphragm.
Some positive displacement pumps are shown at the right. The force and lift pumps are typically used for water. The force pump has two valves in the cylinder, while the lift pump has one valve in the cylinder and one in the piston. The maximum lift, or "suction," is determined by the atmospheric pressure, and either cylinder must be within this height of the free surface. The force pump, however, can give an arbitrarily large pressure to the discharged fluid, as in the case of a diesel engine injector. A nozzle can be used to convert the pressure to velocity, to produce a jet, as for fire fighting. Fire fighting force pumps usually had two cylinders feeding one receiver alternately. The air space in the receiver helped to make the water pressure uniform.
The three pumps on the right are typically used for air, but would be equally applicable to liquids. The Roots blower has no valves, their place taken by the sliding contact between the rotors and the housing. The Roots blower can either exhaust a receiver or provide air under moderate pressure, in large volumes. The bellows is a very old device, requiring no accurate machining. The single valve is in one or both sides of the expandable chamber. Another valve can be placed at the nozzle if required. The valve can be a piece of soft leather held close to holes in the chamber. The bicycle pump uses the valve on the valve stem of the tire or inner tube to hold pressure in the tire. The piston, which is attached to the discharge tube, has a flexible seal that seals when the cylinder is moved to compress the air, but allows air to pass when the movement is reversed. Diaphragm and vane pumps are not shown, but they act the same way by varying the volume of a chamber, and directing the flow with check valves.
Pumps were applied to the dewatering of mines, a very necessary process as mines became deeper. Newcomen's atmospheric engine was invented to supply the power for pumping. The first engine may have been erected in Cornwall in 1710, but the Dudley Castle engine of 1712 is much better known and thoroughly documented. The first pumps used in Cornwall were called bucket pumps, which we recognize as lift pumps, with the pistons somewhat miscalled buckets. They pumped on the up-stroke, when a clack in the bottom of the pipe opened and allowed water to enter beneath the piston. At the same time, the piston lifted the column of water above it, which could be of any length. The piston could only "suck" water 33 ft, or 28 ft more practically, of course, but this occurred at the bottom of the shaft, so this was only a limit on the piston stroke. On the down stroke, a clack in the bucket opened, allowing it to sink through the water to the bottom, where it would be ready to make another lift.
More satisfactory were the plunger pumps, also placed at the bottom of the shaft. A plunger displaced volume in a chamber, forcing the water in it through a check valve up the shaft, when it descended. When it rose, water entered the pump chamber through a clack, as in the bucket pump. Only the top of the plunger had to be packed; it was not necessary that it fit the cylinder accurately. In this case, the engine at the surface lifted the heavy pump rods on the up-stroke. When the atmospheric engine piston returned, the heavy timber pump rods did the actual pumping, borne down by their weight.
A special application for pumps is to produce a vacuum by exhausting a container, called the receiver. Hawksbee's dual cylinder pump, designed in the 18th century, is the final form of the air pump invented by Guericke by 1654. A good pump could probably reach about 5-10 mmHg, the limit set by the valves. The cooperation of the cylinders made the pump much easier to work when the pressure was low. In the diagram, piston A is descending, helped by the partial vacuum remaining below it, while piston B is rising, filling with the low-pressure air from the receiver. The bell-jar receiver, invented by Huygens, is shown; previously, a cumbersome globe was the usual receiver. Tate's air pump is a 19th century pump that would be used for simple vacuum demonstrations and for utility purposes in the lab. It has no valves on the low-pressure side, just exhaust valves V, V', so it could probably reach about 1 mmHg. It is operated by pushing and pulling the handle H. At the present day, motor-driven rotary-seal pumps sealed by running in oil are used for the same purpose. At the right is Sprengel's pump, with the valves replaced by drops of mercury. Small amounts of gas are trapped at the top of the fall tube as the mercury drops, and move slowly down the fall tube as mercury is steadily added, coming out at the bottom carrying the air with them. The length of the fall tube must be greater than the barometric height, of course. Theoretically, a vacuum of about 1 μm can be obtained with a Sprengel pump, but it is very slow and can only evacuate small volumes. Later, Langmuir's mercury diffusion pump, which was much faster, replaced Sprengel pumps, and led to oil diffusion pumps that can reach very high vacua.
The column of water or hydrostatic engine is the inverse of the force pump, used to turn a large head (pressure) of water into rotary motion. It looks like a steam engine, with valves operated by valve gear, but of course is not a heat engine and can be of high efficiency. However, it is not of as high efficiency as a turbine, and is much more complicated, but has the advantage that it can be operated at variable speeds, as for lifting. A few very impressive column of water engines were made in the 19th century, but they were never popular and remained rare. Richard Trevithick, famous for high pressure steam engines, also built hydrostatic engines in Cornwall. The photograph at the right shows a column-of-water engine built by Georg von Reichenbach, and placed in service in 1917. This engine was exhibited in the Deutsches Museum in München as late as 1977. It was used to pump brine for the Bavarian state salt industry. A search of the museum website did not reveal any evidence of it, but a good drawing of another brine pump, with four cylinders and driven by a water wheel, also built by von Reichenbach was found. This machine, a Solehebemaschine ("brine-lifting machine"), entered service in 1821. It had two pressure-operated poppet valves for each cylinder. These engines are brass to resist corrosion by the salt water. Water pressure engines must be designed taking into account the incompressibility of water, so both valves must not close at the same time, and abrupt changes of rate of flow must not be made. Air chambers can be used to eliminate shocks.
Georg von Reichenbach (1771-1826) is much better known as an optical designer than as a mechanical engineer. He was associated with Joseph Fraunhofer, and they died within days of each other in 1826. He was of an aristocratic family, and was Salinenrat, or manager of the state salt works, in southeastern Bavaria, which was centred on the town of Reichenhall, now Bad Reichenhall, near Salzburg. The name derives from "rich in salt." This famous salt region had salt springs flowing nearly saturated brine, at 24% to 26% (saturated is 27%) salt, that from ancient times had been evaporated over wood fires. A brine pipeline to Traunstein was constructed in 1617-1619, since wood fuel for evaporating the brine was exhausted in Reichenhall. The pipeline was further extended to Rosenheim, where there was turf as well as wood, in 1818-10. Von Reichenbach is said to have built this pipeline, for which he designed a water-wheel-driven, four-barrel pump. Maximilian I, King of Bavaria, commissioned von Reichenbach to bring brine from Berchtesgaden, elevation 530 m, to Reichenhall, elevation 470 m, over a summit 943 m high. The pump shown in the photograph pumped brine over this line, entering service in 1816. Fresh water was also allowed to flow down to the salt beds, and the brine was then pumped to the surface. This was a much easier way to mine salt than underground mining. The salt industry of Bad Reichenhall still operates, but it is now Japanese-owned.
Suppose we want to know the force exerted on a vertical surface of any shape with water on one side, assuming gravity to act, and the pressure on the surface of the water zero. We have already solved this problem by a geometrical argument, but now we apply calculus, which is easier but not as illuminating. The force on a small area dA a distance x below the surface of the water is dF = p dA = ρgx dA, and the moment of this force about a point on the surface is dM = px dA = ρgx² dA. By integration, we can find the total force F, and the depth at which it acts, c = M / F. If the surface is not symmetrical, the position of the total force in the transverse direction can be obtained from the integral of dM' = ρgxy dA, the moment about some vertical line in the plane of the surface. If there happens to be a pressure on the free surface of the water, then the forces due to this pressure can be evaluated separately and added to this result. We must add a force equal to the area of the surface times the additional pressure, and a moment equal to the product of this force and the distance to the centroid of the surface.
The simplest case is a rectangular gate of width w and height H, whose top is a distance h below the surface of the water. In this case, the integrations are very easy, and F = ρgw[(h + H)² - h²]/2 = ρgwH(H + 2h)/2 = ρg(h + H/2)Hw. The total force on the gate is equal to its area times the pressure at its centre. M = ρgw[(h + H)³ - h³]/3 = ρg(H²/3 + Hh + h²)Hw, so that c = (H²/3 + Hh + h²)/(h + H/2). In the simple case of h = 0, c = 2H/3, or two-thirds of the way from the top to the bottom of the gate. If we take the atmospheric pressure to act not only on the surface of the water, but also on the dry side of the gate, there is no change to this result. This is the reason atmospheric pressure often seems to have been neglected in solving such problems.
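The gate formulas lend themselves to a quick numerical check. The sketch below encodes F and c exactly as derived above; the gate dimensions are assumed example values.

```python
RHO = 1000.0   # kg/m^3, water
G = 9.81       # m/s^2

def gate_force_and_centre(width, height, top_depth):
    """Total thrust on a vertical rectangular gate and the depth of the
    centre of pressure, from F = rho*g*(h + H/2)*H*w and
    c = (H^2/3 + H*h + h^2)/(h + H/2)."""
    w, H, h = width, height, top_depth
    F = RHO * G * (h + H / 2.0) * H * w
    c = (H**2 / 3.0 + H * h + h**2) / (h + H / 2.0)
    return F, c

# Gate 2 m wide and 3 m high, with its top at the water surface (h = 0):
F, c = gate_force_and_centre(2.0, 3.0, 0.0)
print(f"thrust = {F/1000:.1f} kN, centre of pressure at {c:.2f} m depth")
# centre of pressure comes out at 2H/3 = 2.00 m, as stated in the text
```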
Consider a curious rectangular tank, with one side vertical but the opposite side inclined inwards or outwards. The horizontal forces exerted by the water on the two sides must be equal and opposite, or the tank would scoot off. If the side is inclined outwards, then there must be a downwards vertical force equal to the weight of the water above it, and passing through the centroid of this water. If the side is inclined inwards, there must be an upwards vertical force equal to the weight of the 'missing' water above it. In both cases, the result is demanded by ordinary statics. What we have here has been called the 'hydrostatic paradox.' It was conceived by the celebrated Flemish engineer Simon Stevin (1548-1620) of Brugge, the first modern scientist to investigate the statics of fluids and solids. Consider three tanks with bottoms of equal sizes and equal heights, filled with water. The pressures at the bottoms are equal, so the vertical force on the bottom of each tank is the same. But suppose that one tank has vertical sides, one has sides inclined inward, and the third has sides inclined outwards. The tanks do not contain the same weight of water, yet the forces on their bottoms are equal! I am sure that you can spot the resolution of this paradox.
Sometimes the forces are required on curved surfaces. The vertical and horizontal components can be found by considering the equilibrium of volumes with a plane surface equal to the projected area of the curved surface in that direction. The general result is usually a force plus a couple, since the horizontal and vertical forces are not necessarily in the same plane. Simple surfaces, such as cylinders, spheres and cones, may often be easy to solve. In general, however, it is necessary to sum the forces and moments numerically on each element of area, and only in simple cases can this be done analytically.
If a volume of fluid is accelerated uniformly, the acceleration can be added to the acceleration of gravity. A free surface now becomes perpendicular to the total acceleration, and the pressure is proportional to the distance from this surface. The same can be done for a rotating fluid, where the centrifugal acceleration is the important quantity. The earth's atmosphere is an example. When air moves relative to the rotating system, the Coriolis force must also be taken into account. However, these are dynamic effects and are not strictly a part of hydrostatics.
Archimedes, so the legend runs, was asked to determine if the goldsmith who made a golden crown for Hieron, Tyrant of Syracuse, had substituted cheaper metals for gold. The story is told by Vitruvius. A substitution could not be detected by simply weighing the crown, since it was craftily made to the same weight as the gold supplied for its construction. Archimedes realized that finding the density of the crown, that is, the weight per unit volume, would give the answer. The weight was known, of course, and Archimedes cunningly measured its volume by the amount of water that ran off when it was immersed in a vessel filled to the brim. By comparing the results for the crown, and for pure gold, it was found that the crown displaced more water than an equal weight of gold, and had, therefore, been adulterated.
This story, typical of the charming way science was made more interesting in classical times, may or may not actually have taken place, but whether it did or not, Archimedes taught that a body immersed in a fluid lost apparent weight equal to the weight of the fluid displaced, called Archimedes' Principle. Specific gravity, the ratio of the density of a substance to the density of water, can be determined by weighing the body in air, and then in water. The specific gravity is the weight in air divided by the loss in weight when immersed. This avoids the difficult determination of the exact volume of the sample.
To see how buoyancy works, consider a submerged brick, of height h, width w and length l. The difference in pressure on top and bottom of the brick is ρgh, so the difference in total force on top and bottom of the brick is simply (ρgh)(wl) = ρgV, where V is the volume of the brick. The forces on the sides have no vertical components, so they do not matter. The net upward force is the weight of a volume V of the fluid of density ρ. Any body can be considered made up of brick shapes, as small as desired, so the result applies in general. This is just the integral calculus in action, or the application of Professor Thomson's analogy.
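Spelling the brick argument out with numbers (a sketch of mine; the brick dimensions and depth are assumed) shows that the net upward force equals ρgV and is independent of how deep the brick sits.

RHO, G = 1000.0, 9.81            # water density (kg/m^3) and g (m/s^2)

h, w, l = 0.10, 0.10, 0.20       # brick height, width and length in metres (assumed)
depth_top = 1.0                  # depth of the brick's top face (any value works)

force_top = RHO * G * depth_top * (w * l)              # downward force on the top face
force_bottom = RHO * G * (depth_top + h) * (w * l)     # upward force on the bottom face

print(force_bottom - force_top)          # net upward force, about 19.6 N
print(RHO * G * (h * w * l))             # rho g V -- the same number, independent of depth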
Consider a man in a rowboat on a lake, with a large rock in the boat. He throws the rock into the water. What is the effect on the water level of the lake? Suppose you make a drink of ice water with ice cubes floating in it. What happens to the water level in the glass when the ice has melted?
The force exerted by the water on the bottom of a boat acts through the centre of gravity B of the displaced volume, or centre of buoyancy, while the force exerted by gravity on the boat acts through its own centre of gravity G. This looks bad for the boat, since the boat's c.g. will naturally be higher than the c.g. of the displaced water, so the boat will tend to capsize. Well, a board floats, and can tell us why. Should the board start to rotate to one side, or heel, the displaced volume immediately moves to that side, and the buoyant force tends to correct the rotation. A floating body will be stable provided the line of action of the buoyant force passes through a point M above the c.g. of the body, called the metacentre, so that there is a restoring couple when the boat heels. A ship with an improperly designed hull will not float upright. It is not as easy to make boats as it might appear.
Let Bo be the centre of buoyancy with the ship upright; that is, it is the centre of gravity of the volume V of the displaced water. γV = W, the weight of the ship, where γ is the specific weight of the water. If the ship heels by an angle Δθ, a wedge-shaped volume of water is added on the right, and an equal volume is removed on the left, so that V remains constant. The centre of buoyancy is then moved to the right to point B. We can find the x-coordinate of B by taking moments of the volumes about the y-axis. Therefore, V(BoB) = V(0) + moment of the shaded volume - moment of the equal compensating volume. If dA is an element of area in the y=0 plane, then the volume element is xΔθdA (this automatically makes the volume to the left of x=0 negative), and the moment is this times x. Note that contributions from x>0 and x<0 are both positive, as they should be. Now, ∫x²dA is just the moment of inertia of the water-level area of the ship, I. Therefore, V(BoB) = IΔθ. Now, (BoB)/Δθ = (MBo), since for small Δθ the tangent is equal to the angle. Finally, then, (MG) = (I/V) - (BoG).
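As a sketch of how the result MG = I/V - BoG is applied (my own example, assuming an idealized box-shaped barge and an assumed centre of gravity, not anything from the text):

L = 60.0     # waterline length, m (assumed)
B = 10.0     # beam, m (assumed)
T = 3.0      # draft, m (assumed)
KG = 3.5     # height of the ship's centre of gravity above the keel, m (assumed)

I = L * B**3 / 12.0      # moment of inertia of the rectangular waterplane about the centreline
V = L * B * T            # displaced volume of the box hull
KB = T / 2.0             # centre of buoyancy of a box hull lies at half the draft
BoG = KG - KB            # vertical distance from the centre of buoyancy up to G

GM = I / V - BoG
print(GM)                # about 0.78 m here; a positive GM means the barge floats upright stably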
The moment tending to restore the ship to upright is W times the righting arm GZ = MG x Δθ. Therefore, the ship tends to roll with a certain period. A small GM means a small restoring torque, and so a long roll period. A ship with a small GM is said to be tender, which is desirable for passenger ships and for gun platforms (warships). A passenger ship may have a roll period of 28s or so, while a cargo ship may have a period of 13-15s. A ship with a large GM and a short roll period is called stiff. Metacentric heights are typically 1 to 2 metres.
The combination of a small GM and a small freeboard was originally considered desirable for a warship, since it made a stable gun platform and presented a minimum area that had to be armoured. HMS Captain, an early turret ironclad launched in 1869, was such a ship. The ship capsized off Finisterre in 1870 in a gale when the topsails were not taken in promptly enough and the ship heeled beyond its 14° maximum. HMS Sultan, a broadside ironclad launched in 1870, had a metacentric height of only 3 feet for stability, but proved unsafe for Atlantic service.
The free surface effect can greatly reduce the stability of a ship. For example, if the hull has taken water, when the ship heels this weight moves to the low side and counters the buoyancy that should give the ship stability. Longitudinal baffles reduce the effect (division into thirds reduces the effect by a factor of 9), and are absolutely necessary for ships like tankers. In 2006, imprudent shifting of ballast water caused MV Cougar Ace, with its cargo of Mazdas, to list 80°. The ship was eventually righted, however, since it did not take water. A list, incidentally, is a permanent heel.
Longitudinal stability against pitching is analyzed similarly.
Archimedes's Principle can also be applied to balloons. The Montgolfier brothers' hot air balloon with a paper envelope ascended first in 1783 (the brothers got Pilâtre de Rozier and Chevalier d'Arlandes to go up in it). Such "fire balloons" were then replaced with hydrogen-filled balloons, and then with balloons filled with coal gas, which was easier to obtain and did not diffuse through the envelope quite as rapidly. Methane would be a good filler, with a density 0.55 that of air. Slack balloons, like most large ones, can be contrasted with taut balloons with an elastic envelope, such as weather balloons. Slack balloons will not be filled full on the ground, and will plump up at altitude. Balloons are naturally stable, since the center of buoyancy is above the center of gravity in all practical balloons. Submarines are yet another application of buoyancy, with their own characteristic problems.
Small neoprene or natural rubber balloons have been used for meteorological observations, with hydrogen filling. A 10g ceiling balloon was about 17" in diameter when inflated to have a free lift of 40g. It ascended 480ft the first minute, 670ft in a minute and a half, and 360ft per minute afterwards, to find cloud ceilings by timing, up to 2500ft, when it subtended about 2' of arc, easily seen in binoculars. Large sounding balloons were used to lift a radiosonde and a parachute for its recovery. An AN/AMT-2 radiosonde of the 1950's weighed 1500g, the paper parachute 100g, and the balloon 350g. The balloon was inflated to give 800g free lift, so it would rise 700-800 ft/min to an altitude of about 50,000 ft (15 km) before it burst. This balloon was about 6 ft in diameter when inflated at the surface, 3 ft in diameter before inflation. The information was returned by radio telemetry, so the balloon did not have to be followed optically. Of intermediate size was the pilot balloon, which was followed with a theodolite to determine wind directions and speeds. At night, a pilot balloon could carry a light for ceiling determinations.
The greatest problem with using hydrogen for lift is that it diffuses rapidly through many substances. Weather balloons had to be launched promptly after filling, or the desired free lift would not be obtained. Helium is a little better in this respect, but it also diffuses rapidly. The lift obtained with helium is almost the same as with hydrogen (molecular weight 4 compared to 2, where air is 28.97). However, helium is exceedingly rare, and only its unusual occurrence in natural gas from Kansas makes it available. Great care must be taken when filling balloons with hydrogen to avoid sparks and the accumulation of hydrogen in air, since hydrogen is exceedingly flammable and explosive over a wide range of concentrations. Helium has the great advantage that it is not inflammable.
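The near-equality of hydrogen and helium lift follows directly from the ideal gas law; here is a small sketch of the comparison (mine, with sea-level pressure and about 15 °C assumed):

R = 8.314         # gas constant, J/(mol K)
P = 101325.0      # sea-level pressure, Pa (assumed)
T = 288.0         # about 15 °C (assumed)
G = 9.81

def density(molar_mass_grams):
    return P * (molar_mass_grams / 1000.0) / (R * T)    # ideal-gas density, kg/m^3

rho_air = density(28.97)
for name, m in [("hydrogen", 2.016), ("helium", 4.003)]:
    lift = (rho_air - density(m)) * G                   # buoyant lift per cubic metre, N
    print(name, round(lift, 2))
# Helium gives only about 7% less lift than hydrogen, despite having twice the molar mass.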
The hydrogen for filling weather balloons came from compressed gas in cylinders, from the reaction of granulated aluminium with sodium hydroxide and water, or from the reaction of calcium hydride with water. The chemical reactions are 2Al + 2NaOH + 2H2O → 2NaAlO2 + 3H2, or CaH2 + 2H2O → Ca(OH)2 + 2H2. In the first, silicon or zinc could be used instead of aluminium, and in the second, any similar metal hydride. Both are rather expensive sources of hydrogen, but very convenient when only small amounts are required. Most hydrogen is made from the catalytic decomposition of hydrocarbons, or the reaction of hot coke with steam. Electrolysis of water is an expensive source, since more energy is used than is recovered with the hydrogen. Any enthusiasm for a "hydrogen economy" should be tempered by the fact that there are no hydrogen wells, and all the hydrogen must be made with an input of energy usually greater than that available from the hydrogen, and often with the appearance of carbon. Although about 60,000 Btu/lb is available from hydrogen, compared to 20,000 Btu/lb from gasoline, hydrogen compressed to 1000 psi requires 140 times as much volume for the same weight as gasoline. For the energy content of a 13-gallon gasoline tank, a 600-gallon hydrogen tank would be required. The critical temperature of hydrogen is 32K, so liquid storage is out of the question for general use.
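The 13-gallon versus roughly 600-gallon comparison can be reproduced approximately with a little arithmetic (a sketch; the gasoline density of 6.1 lb/gal and the temperature are my assumptions, and the ideal-gas estimate ignores hydrogen's compressibility, so it comes out somewhat low):

R, T = 8.314, 288.0             # J/(mol K) and an assumed temperature of about 15 °C
P = 1000.0 * 6894.76            # 1000 psi expressed in pascals
GAL_PER_M3 = 264.17
LB_PER_KG = 2.2046

gasoline_btu = 13 * 6.1 * 20000.0                    # 13 gal x ~6.1 lb/gal (assumed) x 20,000 Btu/lb
h2_mass_kg = (gasoline_btu / 60000.0) / LB_PER_KG    # pounds of hydrogen for the same energy, converted to kg

rho_h2 = P * 0.002016 / (R * T)          # ideal-gas density of hydrogen at 1000 psi, kg/m^3
volume_gal = (h2_mass_kg / rho_h2) * GAL_PER_M3
print(round(volume_gal))                 # roughly 550 gallons, the same order as the text's 600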
The specific gravity of a material is the ratio of the mass (or weight) of a certain sample of it to the mass (or weight) of an equal volume of water, the conventional reference material. In the metric system, the density of water is 1 g/cc, which makes the specific gravity numerically equal to the density. Strictly speaking, density has the dimensions g/cc, while specific gravity is a dimensionless ratio. However, in casual speech the two are often confounded. In English units, however, density, perhaps in lb/cuft or pcf, is numerically different from the specific gravity, since the weight of water is 62.5 lb/cuft.
Things are complicated by the variation of the density of water with temperature, and also by the confusion that gave us the distinction between cc and ml. The milliliter is the volume of 1.0 g of water at 4°C, by definition. The actual volume of 1.0 g of water at 4°C is 0.999973 cm³ by measurement. Since most densities are not known, or needed, to more than three significant figures, it is clear that this difference is of no practical importance, and the ml can be taken equal to the cc. The density of water at 0°C is 0.99987 g/ml, at 20° 0.99823, and at 100°C 0.95838. The temperature dependence of the density may have to be taken into consideration in accurate work. Mercury, while we are at it, has a density 13.5955 at 0°C, and 13.5461 at 20°C.
The basic idea in finding specific gravity is to weigh a sample in air, and then immersed in water. Then the specific gravity is W/(W - W'), if W is the weight in air, and W' the weight immersed. The denominator is just the buoyant force, the weight of a volume of water equal to the volume of the sample. This can be carried out with an ordinary balance, but special balances, such as the Jolly balance, have been created specifically for this application. Adding an extra weight to the sample allows measurement of specific gravities less than 1.
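The weighing rule is simple enough to write down directly (a sketch; the example readings are invented):

def specific_gravity(weight_in_air, weight_in_water):
    # SG = W / (W - W'); the denominator is the buoyant force, i.e. the weight
    # of a volume of water equal to the volume of the sample.
    return weight_in_air / (weight_in_air - weight_in_water)

# Invented readings: a sample weighing 270 g in air and 170 g when immersed.
print(specific_gravity(270.0, 170.0))    # 2.7, about the specific gravity of aluminium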
A pycnometer is a flask with a close-fitting ground glass stopper with a fine hole through it, so a given volume can be accurately obtained. The name comes from the Greek puknos, a word meaning "density." If the flask is weighed empty, full of water, and full of a liquid whose specific gravity is desired, the specific gravity of the liquid can easily be calculated. A sample in the form of a powder, to which the usual method of weighing cannot be used, can be put into the pycnometer. The weight of the powder and the weight of the displaced water can be determined, and from them the specific gravity of the powder.
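The pycnometer arithmetic, for both a liquid and a powder, might look like the following sketch (the weighings are invented for illustration):

def sg_liquid(w_empty, w_full_of_water, w_full_of_liquid):
    # Specific gravity of a liquid from three weighings of the same pycnometer.
    return (w_full_of_liquid - w_empty) / (w_full_of_water - w_empty)

def sg_powder(w_powder, w_pyc_with_water, w_pyc_with_powder_and_water):
    # The water displaced by the powder weighs
    # w_powder + w_pyc_with_water - w_pyc_with_powder_and_water.
    displaced = w_powder + w_pyc_with_water - w_pyc_with_powder_and_water
    return w_powder / displaced

print(sg_liquid(30.0, 80.0, 69.5))       # 0.79, roughly the value for ethanol
print(sg_powder(20.0, 80.0, 92.4))       # about 2.6, a quartz-like powder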
The specific gravity of a liquid can be found with a collection of small weighted, hollow spheres that will just float in certain specific gravities. The closest spheres that will just float and just sink put limits on the specific gravity of the liquid. This method was once used in Scotland to determine the amount of alcohol in distilled liquors. Since the density of a liquid decreases as the temperature increases, the spheres that float are an indication of the temperature of the liquid. Galileo's thermometer worked this way.
A better instrument is the hydrometer, which consists of a weighted float and a calibrated stem that protrudes from the liquid when the float is entirely immersed. A higher specific gravity will result in a greater length of the stem above the surface, while a lower specific gravity will cause the hydrometer to float lower. The small cross-sectional area of the stem makes the instrument very sensitive. Of course, it must be calibrated against standards. In most cases, the graduations ("degrees") are arbitrary and reference is made to a table to determine the specific gravities. Hydrometers are used to determine the specific gravity of lead-acid battery electrolyte, and the concentration of antifreeze compounds in engine coolants, as well as the alcohol content of whiskey.
J. T. Bottomley, Hydrostatics (London: William Collins, 1882). Found in a used-bookshop for 10p ($0.20). For "school science," with no calculus but excellent, painstaking explanation and practical applications. 142pp.
S. L. Loney, Elements of Hydrostatics (Cambridge: Cambridge Univ. Press, 1956) 2nd ed. (1904). Also for schools, 253pp. Some calculus in an appendix.
R. L. Daugherty and J. B. Franzini, Fluid Mechanics, 6th ed. (New York: McGraw-Hill, 1965). Chapter 2. A typical engineering treatment in a classic text, of course with calculus.
For more information on the barometer and diffusion pump, see the article on Mercury.
The website of the Deutsches Museum is positively excellent. This is the best science museum in the world. It has not become mostly a medium of entertainment and advertising, as so many others have, but remains a place where you can still see original and unusual artifacts. The website contains actual information for readers other than children, and is well-illustrated. Unfortunately, it does not have illustrations of most of the exhibits, only selected ones, so it does not make it possible to visit the museum from where you are. Such a resource would be very welcome, and would rise above internet shallowness. Knowing German helps a lot, of course, but there is random English here and there.
A. Wolf, A History of Science, Technology and Philosophy in the 16th and 17th Centuries, 2nd ed., Vol. I (Gloucester, MA: Peter Smith, 1968). The index is in Vol II.
J. C. Poggendorff, Geschichte der Physik, (1878). Facsimile reprint by Zentral-Antiquariat der DDR, 1964.
Composed by J. B. Calvert
Created 11 May 2000
Last revised 5 January 2007 | http://mysite.du.edu/~jcalvert/tech/fluids/hydstat.htm | 13 |
194 | A circle is the collection of points equidistant from
a given point, called the center. A circle is named
after its center point. The distance from the center to any point
on the circle is called the radius, (r),
the most important measurement in a circle. If you know a circle’s
radius, you can figure out all its other characteristics. The diameter (d)
of a circle is twice as long as the radius (d =
2r) and stretches between endpoints on the
circle, passing through the center. A chord also extends from endpoint
to endpoint on the circle, but it does not necessarily pass through
the center. In the figure below, point C is
the center of the circle, r is the
radius, and AB is a chord.
Tangents are lines that intersect a circle at only one
point. Tangents are a new addition to the SAT. You can bet that
the new SAT will make sure to cram at least one tangent question
into every test.
Just like everything else in geometry, tangent lines are
defined by certain fixed rules. Know these rules and you’ll be able
to handle anything the SAT throws at you. Here’s the first: A radius
whose endpoint is the intersection point of the tangent line and
the circle is always perpendicular to the tangent line. See?
And the second rule: Every point in space outside the
circle can extend exactly two tangent lines to the circle. The distances
from the origin of the two tangents to the points of tangency are
always equal. In the figure below, XY = XZ.
Tangents and Triangles
Tangent lines are most likely to appear in conjunction with triangles.
For example: What is the area of triangle QRS if RS is
tangent to circle Q?
You can answer this question only if you know
the rules of circles and tangent lines. The question doesn’t tell
you that QR is the radius of the circle;
you just have to know it: Because the circle is named circle Q,
point Q must be the center of the
circle, and any line drawn from the center to the edge of the circle
is the radius. The question also doesn’t tell you that QR is
perpendicular to RS. You have to know
that they’re perpendicular because QR is
a radius and RS is a tangent that
meet at the same point.
If you know how to deduce those key facts about this circle,
then the actual math in the question is simple. Since QR and RS are
perpendicular, and angle RQS is 60°, triangle QRS is a 30-60-90 triangle.
The image tells you that side QR, the side opposite the 30° angle, equals 4;
QR is the height of the triangle.
To calculate the area, you just have to figure out which of the
other two sides is the base. Since the height and base of the triangle
must be perpendicular to each other, side RS must be the base. To find RS,
use the 1:√3:2 ratio. RS is the side opposite 60°, so it’s the √3 side:
RS = 4√3. The area of triangle QRS is
1/2(4)(4√3) = 8√3.
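If you want to convince yourself that the 30-60-90 reasoning holds, a few lines of Python will verify it (this is only a check, not something you would do on test day):

import math

radius = 4.0                              # QR, which is perpendicular to the tangent RS
RS = radius * math.tan(math.radians(60))  # opposite over adjacent in right triangle QRS
area = 0.5 * radius * RS

print(RS, 4 * math.sqrt(3))               # both about 6.93
print(area, 8 * math.sqrt(3))             # both about 13.86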
Central Angles and Inscribed Angles
An angle whose vertex is the center of the circle is called
a central angle.
The degree of the circle (the slice of pie) cut by a central
angle is equal to the measure of the angle. If a central angle is 25º,
then it cuts a 25º arc in the circle.
An inscribed angle is an angle formed by two chords originating
from a single point.
An inscribed angle will always cut out an arc in the circle
that is twice the size of the degree of the inscribed
angle. If an inscribed angle has a degree of 40, it
will cut an arc of 80º in the circle.
If an inscribed angle and a central angle cut out the
same arc in a circle, the central angle will be twice as large as
the inscribed angle.
Circumference of a Circle
The circumference is the perimeter of the circle. The
formula for circumference of a circle is C = 2πr,
where r is the radius. The
formula can also be written C = πd,
where d is the diameter. Try to find
the circumference of the circle below:
Plugging the radius into the formula, C =
2πr = 2π (3) = 6π.
An arc is a part of a circle’s circumference.
An arc contains two endpoints and all the points on the circle between
the endpoints. By picking any two points on a circle, two arcs are
created: a major arc, which is by definition the longer arc, and
a minor arc, the shorter one.
Since the degree of an arc is defined by the central or
inscribed angle that intercepts the arc’s endpoints, you can calculate
the arc length as long as you know the circle’s radius and the measure
of either the central or inscribed angle.
The arc length formula is (n/360) × 2πr,
where n is the measure of
the degree of the arc, and r is the radius of the circle.
Here’s the sort of question the SAT might ask:
Circle D has
radius 9. What is the length of arc AB?
In order to figure out the length of arc AB,
you need to know the radius of the circle and the measure of
angle c, the inscribed angle that intercepts
the endpoints of AB. The question
tells you the radius of the circle, but it throws you a little curveball
by not providing you with the measure of angle c. Instead, the question puts
angle c in a triangle and tells you the
measures of the other two angles in the triangle. Like we said,
only a little curveball: You can easily figure out the measure of
angle c because, as you (better) know, the
three angles of a triangle add up to 180º; here angle c works out to 60º.
Since angle c is an inscribed
angle, arc AB must be twice that, or 120º.
Now you can plug these values into the formula for arc length:
arc AB = (120/360) × 2π(9) = 6π.
Area of a Circle
If you know the radius of a circle, you can figure out
its area. The formula for area is A = πr²,
where r is the radius. So
when you need to find the area of a circle, your real goal is to
figure out the radius.
Area of a Sector
A sector of a circle is the area enclosed by a central
angle and the circle itself. It’s shaped like a slice of pizza.
The shaded region in the figure below is a sector:
There are no analogies on the SAT anymore, but here’s
one anyway: The area of a sector is related to the area of a circle
just as the length of an arc is related to the circumference. To
find the area of a sector, find what fraction of 360° the sector makes up and multiply
this fraction by the area of the circle. The formula is:
area of sector = (n/360) × πr²,
where n is the measure of
the central angle that forms the boundary of the sector, and r is the radius.
Try to find the area of the sector in the figure below:
The sector is bounded by a 70° central angle in a circle
whose radius is 6. Using the formula, the area of the sector is (70/360) × π(6)² = 7π.
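All of the circle formulas in this section can be collected into a few lines and checked against the worked examples above (again, just a verification sketch in Python, not part of the SAT solution):

import math

def circumference(r):  return 2 * math.pi * r
def arc_length(n, r):  return (n / 360) * 2 * math.pi * r
def circle_area(r):    return math.pi * r ** 2
def sector_area(n, r): return (n / 360) * math.pi * r ** 2

assert math.isclose(circumference(3), 6 * math.pi)       # the circumference example
assert math.isclose(arc_length(120, 9), 6 * math.pi)     # arc AB in circle D
assert math.isclose(sector_area(70, 6), 7 * math.pi)     # the 70° sector of radius 6
assert math.isclose(sector_area(360, 6), circle_area(6)) # a 360° sector is the whole circle
print("all formulas agree with the worked examples")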
Polygons and Circles
We’ve talked already about triangles in circle problems.
But all kinds of polygons have also been known to make cameos on
SAT circle questions. Here’s an example:
What is the length of minor arc BE if the
area of rectangle ABCD is 18?
To find the length of minor arc BE,
you have to know two things: the radius of the circle and the measure
of the central angle that intersects the circle at points B
and E. Because ABCD is
a rectangle, and rectangles only have right angles, figuring out
the measure of the central angle is simple:
the central angle is an angle of the rectangle, so the measure
of the central angle is 90°.
Finding the radius of the circle is a little tougher.
From the diagram, you can see that the radius is equal to the height
of the rectangle. To find the height of the rectangle, you can use
the fact that the area of the rectangle is 18, and
the length is 6. Since A = bh,
and you know the values of both A and b, you can solve for the height: h = 18/6 = 3, so the radius is 3.
Now that you’ve got the radius and measure of the angle,
plug them into the arc length formula to find the length of minor arc BE: (90/360) × 2π(3) = 3π/2. | http://www.sparknotes.com/testprep/books/newsat/chapter20section6.rhtml | 13
58 | The reefs surrounding the Gilbert Islands (Republic of Kiribati, Central Pacific), like many other island locations throughout the world, have undergone rapid and intensive environmental changes over the past 100 years.
One such change has been the reduction of the number of shark species present in their waters, even though sharks play an important part in the economy and culture of the Gilbertese.
Detail of FMNH 99071 showing the teeth of Carcharhinus obscurus attached using braided cord. (Credit: Drew J, Philipp C, Westneat MW)
Two species of sharks previously unreported in either the historic records or contemporary studies were discovered in a new analysis of weapons made from shark teeth used by 19th century islanders. The find was reported in a study published in the open access journal PLOS ONE by Joshua Drew from Columbia University and colleagues from the Field Museum of Natural History.
Using the novel data source of shark tooth weapons of the Gilbertese Islanders housed in natural history museums, they were able to show that two species of shark, the Spot-tail (Carcharhinus sorrah) and the Dusky (C. obscurus), were both present in the islands during the last half of the 19th century but not reported in any historical literature or contemporary ichthyological surveys of the region.
Analysed 120 weapons
For the current study, the researchers analysed a collection of 120 of these weapons from the Field Museum of Natural History, including some that resemble clubs, daggers, lances, spears and swords. They identified eight species of sharks based on the teeth used in these weapons, two of which have never been reported from these waters, in either historical surveys or contemporary analysis. Both these species are currently common in other areas, so while it is possible that they may still be living undiscovered in the Gilberts, it is more likely that the local populations have been driven to extinction.
Gilbertese shark tooth weapon (FMNH 99071). (Credit: Drew J, Philipp C, Westneat MW, 2013)
Given the importance of these species to the ecology of the Gilbert Island reefs and to the culture of the Gilbertese people, documenting the shifts in fauna represents an important step toward restoring the ecological and cultural diversity of the area.
The combined data from weapons, literature, and museum collections shows how an increase in the diversity of sampling allows us to better explore the oceans.
Carcharhinus obscurus, via Wikimedia Commons
Source: PLOSone
More Information
- Drew J, Philipp C, Westneat MW (2013) Shark Tooth Weapons from the 19th Century Reflect Shifting Baselines in Central Pacific Predator Assemblies. PLoS ONE 8(4): e59855. doi:10.1371/journal.pone.0059855
PLOSone. Shark tooth weapons reveal lost species. Past Horizons. April 05 2013, from http://www.pasthorizonspr.com/index.php/archives/04/2013/shark-tooth-weapons-reveal-lost-species
PAPA International offers a brief history of aerial photography, from the first time a camera took flight, until it developed into a business, with very practical applications.
The first known aerial photograph was taken in 1858 by French photographer and balloonist, Gaspar Felix Tournachon, known as “Nadar”. In 1855 he had patented the idea of using aerial photographs in mapmaking and surveying, but it took him 3 years of experimenting before he successfully produced the very first aerial photograph. It was a view of the French village of Petit-Becetre taken from a tethered hot-air balloon, 80 meters above the ground. This was no mean feat, given the complexity of the early collodion photographic process, which required a complete darkroom to be carried in the basket of the balloon!
Unfortunately, Nadar’s earliest photographs no longer survive, and the oldest aerial photograph known to be still in existence is James Wallace Black’s image of Boston from a hot-air balloon, taken in 1860. Following the development of the dry-plate process, it was no longer necessary carry so much equipment, and the first free flight balloon photo mission was carried out by Triboulet over Paris in 1879.
PAPA International, The Professional Aerial Photographers’ Association, is a professional trade organization, whose members are aerial photographers throughout the world.
Read on and learn about what happened next – papainternational.org
Friday 5 April 2013 marks the 90th anniversary of the death of the Egyptologist Lord Carnarvon and the start of the mysterious curse of Tutankhamen, but author and University of Manchester Egyptologist Dr Joyce Tyldesley points out the real story is far from sinister.
She argues that an exclusive media deal coupled with the subsequent reliance on non-expert comment helped fuel rumours of a curse. Although she also notes that the curse of Tutankhamen is now far more famous than both the original Egyptian king and the men who first unearthed his treasure laden tomb.
It was in November 1922 when the Egyptologist Howard Carter and his team, including Lord Carnarvon, first entered the tomb of Tutankhamen. Their discovery received worldwide media attention, but an exclusive deal with The Times left scores of journalists sitting in the dust outside with nothing to see and no one to interview.
Consequently newspapers turned to all sorts of "experts" to comment on the tomb, including popular fiction authors like Arthur Conan Doyle. Most prominent of all was the popular novelist Marie Corelli, whose comments regarding the health of Lord Carnarvon helped to ignite rumours of a curse.
The curse begins
George Herbert, 5th Earl of Carnarvon, at Howard Carter’s home on the Theban west bank [Public domain], via Wikimedia Commons
In a report in The Express on 24 March 1923 about Lord Carnarvon’s health Marie Corelli wrote: "I cannot but think that some risks are run by breaking into the last rest of a king of Egypt whose tomb is specially and solemnly guarded, and robbing him of possessions. This is why I ask: was it a mosquito bite that has so seriously infected Lord Carnarvon?"
When, just a few days later Lord Carnarvon succumbed to his illness, Marie Corelli was hailed as a clairvoyant and a legend was born.
Dr Tyldesley remarks: “Finally the world’s press had a story they could publish without deferring to The Times; a human tragedy far more compelling than the disappointingly slow-moving events at the tomb. As with all celebrity deaths, the story rapidly gathered its own momentum and soon there were reports of sinister goings on. At the very moment of Carnarvon’s death all the lights in Cairo had been mysteriously extinguished and at his English home Carnarvon’s dog, Susie, let out a great howl and died.”
However, as Dr Tyldesley makes clear in her book, ‘Tutankhamen’s Curse: The Developing History of an Egyptian king’, a power cut in Cairo is far from unusual and given the time differences rather than dying simultaneously, Susie actually died four hours after her master.
But never letting the facts get in the way of a good story the press continued with the line that Carnarvon had succumbed to an ancient curse. It was Marie Corelli again who brought this to life with her phrase “death comes on wings to he who enters the tomb of a Pharaoh” and it was soon accepted that this or a slight variation was carved either over the entrance to Tutankhamen’s tomb or somewhere inside it.
However no evidence of this inscription has ever been found and Dr Tyldesley says it’s highly unlikely Tutankhamen would have felt the need to have one inscribed on his tomb.
"In a land where only about 5% of the population was literate it seems unlikely that those tempted to rob could actually read any warning. Instead it was widely accepted that the dead had the power to interfere with the living."
Not letting facts get in the way of a good story
But the absence of any concrete proof did nothing to quell the rumours. As the years went on more deaths were attributed to the curse including Prince Ali Kemal Fahmy Bey who had visited the tomb – he was shot by his wife in 1923, Georges Bénédite the Head of the Department of Antiquities at the Louvre Museum who died in 1926 after seeing the tomb and in 1934 Albert Lythgoe the Egyptologist at the Metropolitan Museum of Art in New York who had seen the open sarcophagus of Tutankhamen a decade before.
Right up until the 1970s deaths were being ascribed to the curse including among the flight crew that brought Tutankhamen’s 1972 exhibition to London.
However, Howard Carter himself found it necessary time and time again to report that Tutankhamen’s tomb contained no biological booby traps, poisons or curse. In fact, of those who had first crept into the Burial Chamber, only Lord Carnarvon had died prematurely.
It’s widely believed that Lord Carnarvon died from blood poisoning after accidentally cutting a mosquito bite whilst shaving. He was after all 57 years old at a time when the average male life expectancy at birth in the UK was just that. His health had also been severely weakened by a near-fatal car crash in Germany in 1901.
Other popular theories include the suggestion that Carnarvon might have been infected by a bite from a mosquito which had itself been contaminated by drinking Tutankhamen’s embalming fluids. This was first put forward by the Daily Mail and gained in popularity when the mummy’s autopsy revealed the scar on Tutankhamen’s face which was widely accepted as a mosquito bite linking Tutankhamen to Carnarvon. Unfortunately this theory doesn’t stand up as there were no mosquitoes in the dry Valley of the Kings before the Aswan dam was built in the 1960s.
Sir Arthur Conan Doyle was the first to suggest that poisonous spores may have been included in the tomb. But this seems extremely unlikely given that ancient Egyptian medicine did not understand the causes of illnesses and sicknesses were attributed to malevolent spirits.
A suggestion he could have been poisoned by inhaling ancient and toxic bat guano that was heaped on the tomb floor can be ruled out as no bats had penetrated the sealed tomb.
And finally, the idea that Carnarvon might have been killed by radiation within the tombs has become increasingly popular. However, there is no evidence to support this theory.
So why has the concept of Tutankhamen’s curse persisted? Dr Tyldesley concludes:
“It’s a testament to the popularity of the occult that the modern legend of Tutankhamen’s curse continues to be believed even today. However, it’s not really surprising that this aspect of the story has lasted. Given the choice between focussing on the pretty average life of King Tut, a tomb they weren’t allowed to see and a relatively uneventful death, journalists can’t be blamed for wanting to write about a mysterious ancient curse; no matter how unlikely its existence really is.”
Deaths popularly attributed to Tutankhamun’s ‘curse’:
- Lord Carnarvon, financial backer of the excavation team who was present at the tomb’s opening, died on April 5, 1923 after a mosquito bite became infected; he died 4 months and 7 days after the opening of the tomb.
- George Jay Gould I, a visitor to the tomb, died in the French Riviera on May 16, 1923 after he developed a fever following his visit.
- Egypt’s Prince Ali Kamel Fahmy Bey died July 10, 1923: shot dead by his wife.
- Colonel The Hon. Aubrey Herbert, MP, Carnarvon’s half-brother, became completely blind and died 26 September 1923 from blood poisoning related to a dental procedure intended to restore his eyesight.
- Woolf Joel, a South African millionaire and visitor to the tomb, died November 13, 1923: shot dead in Johannesburg by blackmailer Baron Kurt von Veltheim whose real name was Karl Frederic Moritz Kurtze.
- Sir Archibald Douglas-Reid, a radiologist who x-rayed Tutankhamun’s mummy, died January 15, 1924 from a mysterious illness.
- Sir Lee Stack, Governor-General of Sudan, died November 19, 1924: assassinated while driving through Cairo.
- A. C. Mace, a member of Carter’s excavation team, died in 1928 from arsenic poisoning
- The Hon. Mervyn Herbert, Carnarvon’s half brother and the aforementioned Aubrey Herbert’s full brother, died May 26, 1929, reportedly from “malarial pneumonia”.
- Captain The Hon. Richard Bethell, Carter’s personal secretary, died November 15, 1929: found smothered in his bed.
- Richard Luttrell Pilkington Bethell, 3rd Baron Westbury, father of the above, died February 20, 1930; he supposedly threw himself off his seventh floor apartment.
- Howard Carter opened the tomb on February 16, 1923, and died well over a decade later on March 2, 1939; however, some have still attributed his death to the ‘curse’.
Source: University of Manchester
- “The Mummy’s Curse: Mummymania in the English-speaking World”, Jasmine Day, 2006, Routledge
- Egyptology Online @ Manchester
University of Manchester. Curse of Tutankhamen – 90 years on. Past Horizons. April 05, 2013, from http://www.pasthorizonspr.com/index.php/archives/04/2013/curse-of-tutankhamen-90-years-on
Archaeologists digging near a spa in southern Israel have uncovered Byzantine-era remains that include a large wine-press and a unique clay lantern decorated with crosses looking like a miniature church
The stone remnants of what must have been a significant wine-making apparatus include compartments for storing grapes, a treading floor, and pits for collecting liquid, all spread over an area of more than 100 yards. It would have been in use about 1,500 years ago, the Israel Antiquities Authority said in a statement Thursday.
Read more on www.timesofisrael.com
Indigenous people that lived in southeastern Brazil in the late 1800s shared some genetic sequences with Polynesians, an analysis of their remains shows. The finding offers some support for the possibility that Pacific islanders traded with South America thousands of years ago, but researchers say that the distinctive DNA sequences, or haplogroups, may have entered the genomes of the native Brazilians through the slave trade during the nineteenth century.
Most scientists agree that humans arrived in the Americas between 15,000 and 20,000 years ago, probably via the Bering land bridge linking northeastern Asia with what is now Alaska.
But the precise timing and the number of ‘migration waves’ is unclear, owing largely to variations in early Americans’ physical features, says Sérgio Pena, a molecular geneticist at the Federal University of Minas Gerais in Belo Horizonte, Brazil.
Read the full article on nature.com
By Fernando Contreras Rodrigo and Cristina Bravo Asensio
The archaeological site of Sanisera is located on the northern coast of Menorca, three kilometres south of the Cape of Cavalleria, in the port of Sanitja and within the territory of Santa Teresa.
Aerial view of the port of Sanitja and location of archaeological site. Image: The Sanisera Field School
Ongoing archaeological excavations have shown that Roman Sanisera was occupied in the Late Republican period and throughout the Early Empire period. Its population prospered from the 4th century AD and especially during the Vandal occupation and beginning of Byzantine rule. Later, from the 7th century AD Sanisera gradually decayed until it became de-populated around the beginning of the 9th century AD.
In the early Eighties excavations were carried out in the central part of the city and during this time, several burials and the remains of an Early Christian basilica were found. In the mid nineties several systematic surveys identified six necropolii containing cist tombs, which surround the perimeter of the city and may date from between the 4th to the 6th centuries AD.
The most recent excavations took place between 2008 and 2012 in an area to the west of the Port of Sanitja and very close to the shoreline, successfully locating two previously unknown buildings (10 & 11).
Building 10 is a rectilinear structure with a total floor space of approximately 600 m². It is made up of 18 rooms, including two kitchens (with a preserved hearth and in situ millstone as well as three cisterns), a metal foundry (with a small circular furnace and several pits), bedrooms (with small corner hearths) and even a latrine.
This building seems to have been built and occupied during the Late Empire period as an extension of the urban planning of Sanisera towards its northern limits.
Incense burner of Tanit – Enamelled Belt buckle – Sestertius of Empress Sabina
Some interesting finds from the land levelling layer include a Punic-Ebussitan incense burner which represents the goddess Tanit and, from the latest period of occupation, Islamic green glaze ceramics and two silver coins (dírhams) dating to 812 and 825.
Ivory sheet with a central cross design – Silver dirham (AD 825)
The second structure (Building 11), located at the south eastern portion of the excavated area has a semicircular apse on its western end, giving it a basilica like layout. Both the stratigraphic sequence along with the artefacts found within the structure shows an occupation span very similar to that of Building 10. However, it appears that this structure, unlike Building 10, has been remodelled six times over several centuries and has had various uses.
The phases that have been analysed in depth are its last three; those dating from the middle of the 3rd century AD until the end of the port of Sanitja’s occupation in the 9th century AD. In its fourth phase in the Late Empire period (middle of the 3rd century AD and during most of the 4th century AD) this structure seems to have functioned as a house and had incorporated part of the earlier walls into its design. In its final pre-Islamic phase, as the city was losing its importance (between the early 7th and the 9th centuries AD), it also functioned as a dwelling. However, it is its penultimate phase that is the most interesting.
Early Christian basilica of Sanisera (5th and 6th centuries AD)
Religious function
The building undertook a major transformation during Sanitja’s period of prosperity (during the Vandal occupation and beginning of Byzantine rule in Menorca) and took on a basilical layout.
The stratigraphic sequence revealed pottery and 32 Vandal coins, all of which helped establish occupation from the mid 5th to mid 6th century AD.
Its re-modelling involved the construction of some new walls and blocking up of old entrances. However, it also incorporated parts of the pre-existing structure and was transformed into a basilica composed of five separate rooms which related to a flagstone paved central nave and an apse at the west end (Room 10), two side naves that shared similar dimensions (Rooms11&12) and two other spaces (Rooms 13&14) to the east.
The usual layout for a Christian basilica places the apse at the eastern end, but more unusually this one is on the west. However, there are many examples that do not follow the standard model, including the Spanish basilicas of Begastri, San Vicente Mártir (Córdoba) or Marialba (León), all of them following a north-south orientation.
A baptistry
Room 11 of the basilica contains some interesting features which indicate that it performed a special function. Its entrance way is monumental (the largest in the whole building) and is made up of large squared sandstone blocks of a higher quality than anywhere else in the structure.
The wall on its northern side contains painted wall plaster in red and yellow ochre hues and along with further evidence of paint work in Room 13, this is the only decoration that has been found.
The third important feature of this room is a centrally located rectangular pit that follows a north-south orientation and is crossed by a second shorter pit oriented east-west. This feature is considered to be the baptismal font.
Baptismal font located in the centre of Room 11 – (sketch of layout)
Ecclesiastical complex
The initial hypothesis is that both buildings belonged to the same ecclesiastical complex and consisted of a building for religious worship along with an associated ecclesiastical community house and pilgrim’s refuge. It is also thought that there is a connection with the funerary complex (Necropolis 04), located just 65 metres to the west, but outside the city limits. In 2012 excavations began at this site and so far a total of 15 tombs have been studied, all of them containing an average of 4 – 5 individuals.
The Early Christian basilica that was excavated in the eighties and the six Late Roman necropolii which surround the port city are also interesting elements that highlight the “sacred” nature of Sanitja in times when pilgrimage routes could have existed to the island of Menorca.
Ecclesiastical complexes proliferated in different areas of the Mediterranean between the 5th and 6th centuries AD, and many of them show clear similarities to the one at the port of Sanitja. After the Fall of Rome the church was becoming increasingly powerful and pilgrimage started to play an important function in Early Christian life.
Source: Sanisera Field School
Bibliography
- ALCAIDE, S. 2011: Arquitectura Cristiana Balear en la Antigüedad Tardía (Tesis Doctoral). Institut Català d’Arqueologia Clàssica. Universitat Rovira i Virgili. Tarragona.
- ALONSO, A.1983: “Las Estancias Absidiadas en las Villae Romanas de Extremadura”, Nora: Revista de arte, geografía e historia, 4, 199-206.
- CASANOVAS, M. A. 2005: Historia de Menorca. Palma de Mallorca.
- CABALLERO, L.; ULBERT, T.1975: La Basílica Paleocristiana de Casa Herrera en las cercanías de Mérida (Badajoz), Madrid.
- CHAVARRÍA, A. 2007: El final de las Villae en Hispania. (Siglos IV-VII d.C.), Bibliothèque de l’Antiquité Tardive, Turnhout.
- MAROT, T. 1997: “Aproximación a la circulación monetaria en la península ibérica y las islas baleares durante los siglos V y VI: la incidencia de las emisiones vándalas y bizantinas”, Revue Numismatique, Volumen 6, Número 152, 157 – 190.
- SÁNCHEZ, I. 2009: “Arquitectura sacra de época tardía en Hispalis. Algunas reflexiones”, Archivo Español de Arqueología, Vol. 28, 255-274.
- VIZCAÍNO, J. 2009: La Presencia Bizantina en Hispania (Siglos VI-VII). La Documentación Arqueológica, Murcia.
- Archaeology Digs 2013. The Sanisera Field School offers 16 different courses in Europe focusing on the survey and excavation of the Roman city of Sanisera, bioarchaeology and maritime archaeology. Students gain fieldwork experience in both archaeology and biological anthropology.
Sanisera Field School. Excavating an Early Christian basilica at Sanisera. Past Horizons. April 04, 2013, from http://www.pasthorizonspr.com/index.php/archives/04/2013/Excavating an Early Christian basilica at Sanisera
David Lentz from the University of Cincinnati focuses on Cerén, a farming village that was smothered under several metres of volcanic ash in the late sixth century.
Lentz will present his research, “The Lost World of the Zapotitan Valley: Cerén and its Paleoecological Context,” at the 78th annual meeting of the Society for American Archaeology, held on 3-7 April 2013 in Honolulu. More than 3,000 scientists from around the world attend the event to learn about research covering a broad range of topics and time periods.
Cerén, now a UNESCO World Heritage Site known as Joya de Cerén, was discovered in El Salvador in the late 1970s when a governmental construction project unearthed what turned out to be ancient ceramic pottery and other clay structures. The initial archaeological excavation was directed by Payson Sheets, a faculty member at the University of Colorado and a friend of Lentz.
Ridged and furrowed land, believed to be a maize field. Photo provided by David Lentz, University of Cincinnati.
Remarkably well preserved
Cerén is sometimes called “the Pompeii of Central America,” and much like that doomed ancient Roman city, the wreckage of Cerén was remarkably well preserved by its volcanic burial shroud.
“What this meant for me, is this site had all these plant remains lying on the ground,” Lentz says. “Not only do we find these plant remains well preserved, but we find them where the people left them more than a thousand years ago, and that is really extraordinary.”
Lentz specializes in paleoethnobotany and often in his work – including at other Maya sites – he’s left to interpret complex meaning from splinters of charred wood and hard nut fragments. The Mayas’ tropical environment, which isn’t conducive to preserving plant remains, doesn’t make things any easier.
But the situation was different at Cerén. The village’s sudden and complete ruin sealed it under layers of preservative ash. So Lentz’s research there is still challenging but in an unfamiliar way.
“It was tricky because we kept encountering things we’d never encountered before at a Maya site,” Lentz says. “They were just invisible because of the lack of preservation.”
A few examples of what Lentz and his team have discovered at Cerén:
- Large quantities of a root crop (malanga, a relative of taro) that previously had not been associated with Maya agriculture. They found another “invisible” crop of manioc alongside the more anticipated fields of maize, and they found grasses no longer in existence on the modern-day El Salvador landscape.
- The first discovery of a Maya kitchen, complete with intensively planted household garden. “We could tell what was planted around the houses,” Lentz says. “This is fabulous because people have long debated how the Maya did all this. Now we have a real example.”
- A household with more than 70 ceramic pots, many used to store beans, peppers and other plant matter. Having that many vessels in one home was an unusual discovery for what is thought to be a small, farming village.
- Large plots of neatly rowed land, evidence of ridge and furrow agriculture. Lentz also posits that the people of Cerén surrounded their homes with orchard trees. These discoveries seemingly debunk the common theory that the Maya employed a slash-and-burn agriculture method.
- A raised, paved pathway called a “sacbe,” which was used by the Maya for ceremonial and commercial purposes. Lentz plans additional research on the sacbe to see what other significant discoveries could be made by following the path.
From these new discoveries come many lessons, a lot of them ecological. Lentz has studied how the Mayas effectively implemented systems of agriculture and arboriculture. He is intrigued by what made these methods successful, considering the Maya population was much denser than what exists on the modern landscape.
What is thought to be a Maya shaman’s house at Cerén. Photo provided by David Lentz, University of Cincinnati.
His findings at Cerén give him new pieces to plug into the Maya puzzle. Furthermore, they help us understand how humankind affects the natural world.
“Cerén is regarded internationally as one of the treasures of the world,” Lentz says. “What’s been found there gives you a real idea of what things were like in the past and how humans have modified things. I think what we’re learning there is revolutionising our concept of the ancient past in Mesoamerica.”
Source: The University of Cincinnati
More Information
The University of Cincinnati. Volcanic burial ground allows detailed insight into Maya crops. Past Horizons. April 03, 2013, from http://www.pasthorizonspr.com/index.php/archives/04/2013/volcanic-burial-ground-allows-detailed-insight-into-maya-crops
In the middle of the Bronze Age, around 1000 BCE, the quantity of metal artefacts traded in the Baltic Sea region increased dramatically. Around that same time, a new type of monument appeared along the coasts; stones set on edge and arranged in the form of ships, built by the maritime culture involved in that same metal trade.
A wide maritime network
These Bronze Age maritime groups were part of a network that extended across large parts of northern Europe and with links further to the south: a network maintained due to the increasing dependence on bronze and other important raw materials as a means of social status and cultural dependency.
Archaeologists have long assumed that bronze was imported to Scandinavia from the south, and recent analyses has now confirmed this hypothesis. However, the people who conducted the trade and formed the networks are rarely addressed, not to mention the locations of where they met.
‘One reason why the meeting places of the Bronze Age are not discussed very often is that we have been unable to find them. Which is in contrast to the trading centres of the [later] Viking Age, which have been easy to locate due to the wealth of archaeological material that was left behind,’ says the author of the thesis Joakim Wehlin from the University of Gothenburg and Gotland University.
In his thesis, Wehlin analysed the entirety of archaeological material from the stone ships and also the placement of these monuments within the landscape of Gotland. The thesis offers a new and extensive account of the stone ships and suggests that the importance of the Baltic Sea during the Scandinavian Bronze Age, not least as a waterway, has been underestimated in previous research.
The stone ships can be found across the whole Baltic Sea region; especially on the larger islands with a significant cluster on Gotland. The ships have long been thought to have served as graves and for this reason they have been viewed as vessels intended to take the deceased into the afterlife.
Skeppssättning (Stone ship), Gnisvärd, Gotland. Image: Roine Johansson (Flickr, used under a CC BY-NC-SA 3.0)
The site as a meeting place
‘My study shows a different picture,’ says Wehlin. ‘It seems the whole body was typically not buried within the ship, and a significant percentage of stone ships have no graves within them at all. Instead, they sometimes show remains of other types of activities. So with the absence of the dead, the traces of the living begin to appear.’
Wehlin suggests that the stone ships and the activities that may have taken place around them point to a people who were focused on maritime trade and connections. Details in the ship monuments indicate that they were built not so much as spectral ships, but as representations of real vessels.
Wehlin feels that the stone ships can even give clues about the ship-building techniques and structural dimensions and this provides further insight into the ships that sailed the Baltic Sea during the Bronze Age.
This period in prehistory shows the ship as a dominant element of the visual culture; carved in stone, decorated on bronze artefacts or built as stone constructions. The ships visualised in different media seem to refer to factual ships and the variety could indicate different functional ships.
Early trading ports
Using terrain analysis, Wehlin has located what he feels are a number of potential meeting places – which could even be described as early trading ports.
In one part of the study area in the north-east of Gotland the water system consists of the Hörsne River which later becomes the Gothem River (the largest river in Gotland). The river runs north-east through the wetlands of Lina bog, and continues to its mouth at Åminne and the Bay of Vitviken into the Baltic Sea.
The Lina bog was – prior to the draining campaign in 1947 – the largest on Gotland. This area appears as a maritime landscape with a large inland wetland, bogs, river-systems, river-mouth, coast and sea situated within a rich Bronze Age landscape. The area might well have been important as a communication hub between the east and west coast one that continued into historic times.
Wehlin believes that it is no coincidence that one of the largest clusters of ship settings, almost 15 % of the total number of such monuments, appears in this region.
He suggests that people who were part of a maritime institution; boat builders, seafarers, people with knowledge and skills required for overseas journeys, such as navigation, trade etc., might have had a special place in the society. If so, they may be connected to the ship setting tradition. These features can be seen as a primary instrument for collective identification, akin to the rock-carving sites in Bohuslän. The burials that are present near these sites become secondary activities related to the power of place.
Source: University of Gothenburg
More Information
- Approaching the Gotlandic Bronze Age from Sea. Future possibilities from a maritime perspective by Joakim Wehlin,
- Published in: Martinsson-Wallin, H. (2010). Baltic Prehistoric Interactions and Transformations: The Neolithic to the Bronze Age. Gotland University Press 5. pp. 89-109.
Bronze Age Gotland
University of Gothenburg. Investigating Bronze Age stone ships on Gotland. Past Horizons. April 01, 2013, from http://www.pasthorizonspr.com/index.php/archives/04/2013/investigating-bronze-age-stone-ships-on-gotland
The monastery attracted so many students and monks from around greater India that its administration built an annex to house the seekers of enlightenment coming to meditate there, archaeologists at Quaid-e-Azam University (QAU) have discovered.
Another notable finding was that the main compound of the monastery, located in present-day Badal Pur, is at least 300 years older than archaeologists previously estimated. The main compound, which consists of 55 “monk cells”, was excavated between 2005 and 2012.
See on tribune.com.pk
Initial funding has been secured for an ambitious archaeological project to uncover a lost 17th century town in Northern Ireland.
The site beside Dunluce Castle (above) on the scenic Causeway Coast has been hailed as potentially the region’s own “little Pompeii”.
The Heritage Lottery Fund (HLF) has now provided more than £300,000 for an excavation project and signalled the potential for a total package of £4 million.
The ruins of the castle have stood on the rocky coastal outcrop near Bushmills in north Antrim for centuries but it was only four years ago that archaeologists re-discovered a lost settlement beside the landmark.
Established in 1608 by the first Earl of Antrim, Randal MacDonnell, the town was destroyed in the uprising of 1641 and was eventually abandoned in 1680.
Read more about this fascinating project on belfasttelegraph.co.uk
Preservation of a body is an interesting phenomenon, whether it be the evanescent embalming at a funeral home to prevent the body from decaying at the wake, or preservation for hundreds of years as is the case with Rosalia Lombardo in the Palermo catacombs.
Embalming is a three-fold process of sanitation, presentation and preservation. While the process has ancient roots and is found throughout the world, the modern technique was not possible until the Civil War, when the high number of bodies needing to be shipped over distances necessitated research and led to Dr. Thomas Holmes discovering a method of arterial preservation.
This was later improved in 1867, when August Wilhelm von Hofmann discovered formaldehyde. Primarily the process involves the replacement of fluids and blood with chemicals to prevent putrefaction.
Displaying the Famous Political Dead – Katy Meyers
Read the full article on bonesdontlie.wordpress.com
The Sasanian Persian siege that destroyed Roman-held Dura-Europos, Syria, ca. 256 C.E. left some of the best evidence ever recovered for the nature and practices of ancient warfare.
Perhaps the most dramatic of the archaeological deposits, excavated in the early 1930s, were those resulting from the mining duel around Tower 19 on the city’s western wall, during which at least 19 Roman soldiers and one Sasanian became entombed.
Recent reanalysis of the excavation archive suggested that the mine evidence still held one unrecognized deadly secret: the Roman soldiers who perished there had not, as Robert du Mesnil du Buisson (the original excavator) believed, died by the sword or by fire but had been deliberately gassed by the Sasanian attackers.
This article discusses the implications of this conclusion for our understanding of early Sasanian military capabilities and reviews the question of possible re-excavation in search of the casualties of Tower 19, whose remains were neither studied nor retained.
American Journal of Archaeology
Read the full free article from 2011 here on www.ajaonline.org
Learn more about this site:
Cambridge Archaeological Unit is leading an excavation at the site being developed for a new Cambridge University campus. The archaeologists have uncovered a remarkable landscape including five separate cemeteries, two funerary monuments, two Roman roads, Bronze Age ring-ditch ‘circles’ and thousands of finds including some 30 cremation urns, 25 skeletons, a spearhead and an array of brooches.
Excavation of a 1st/2nd century Roman cremation. © C.A.U. Photos by Dave Webb
The layers of time
They believe the site was first occupied 3,500 years ago and while pockets of Iron Age settlement have been identified (c. 600BC–AD50), most evident is the Middle Bronze Age.
Remains from this period date to between 1500–1200 BC and aside from sub-rectangular ditched enclosures, the excavations have uncovered a series of ring-ditches. Associated with cremation burials, these relate to marking the land with monuments to the ancestors. It is probable that the area saw activity in earlier periods, but the Middle Bronze Age would have seen the first substantive settlement when presumably tree-cover was cleared (which will be established through the study of pollen evidence).
The vast archaeological exposure of the gravel ridge, flanked by heavy claylands, runs up through the north west Cambridge lands.
Aerial view of the excavation site. © C.A.U. Photo by Paul Bailey, SkyHigh
The full extent of the excavated area will stretch across a swathe of both geographic location and human/archaeological time. Its southeastern end coincides with the Traveller’s Rest Pit Quarry, where in the early years of the last century Burkitt and Marr, both of the University of Cambridge, studied the gravel beds and collected quantities of Palaeolithic flint implements. Nineteenth century quarrying in adjacent fields also yielded a number of rich Roman funerary remains.
A large number of burial grounds
The excavation’s northwestern end will lie opposite Girton College, where in the late 19th and early 20th centuries a major Anglo-Saxon cemetery had been uncovered and excavated. Relevant to the current site-work, the cemetery included Roman burials and sculptural fragments, including a lion’s head recovered from a pit.
It is believed that the dig will uncover a landscape larger than Roman Cambridge itself and will reveal a complex network of communities and road alignments that will provide insights into the nature of the landscape.
Archaeologists believe the countryside was lattice-like, criss-crossed with roads and trackways that linked up its many farms and settlements.
Christopher Evans, head of the archaeological unit leading the dig said:
“This scale and scope of excavation work has not been attempted before [in this area]. For more than a millennium, the landscape of the site has been uninterrupted farmland.
“We have discovered that vibrant prehistoric settlements inhabited the land and settlements grew with complexity in the Roman age.”
Pushing the time-frame nearer the present, at the northern end of the site-area the team found, entirely unexpectedly, the zigzagging lines of WWII military practice trenches. Examination of Luftwaffe aerial photographs made it clear that the Germans were well aware of the existence of the trenches, as they had been clearly marked on the 1940s images.
120,000 cubic metres of topsoil have been moved so far, and the excavation area covers 14 hectares of the 150-hectare development site for north west Cambridge. It is clear there is still much to find and more to learn in this landmark project.
Gallery of the excavation so far. Cambridge Archaeological Unit.
Cambridge Archaeological Unit. Layers of time uncovered: Bronze Age to World War II. Past Horizons. March 26, 2013, from http://www.pasthorizonspr.com/index.php/archives/03/2013/layers-of-time-uncovered-bronze-age-to-world-war-ii
The advent of social networking sites like Facebook and Twitter have made us all more connected, but long-distance social networks existed long before the Internet.
An article published this week in the Proceedings of the National Academy of Sciences sheds light on the transformation of social networks in the late pre-Hispanic American Southwest and shows that people of that period were able to maintain surprisingly long-distance relationships with nothing more than their feet to connect them.
Led by University of Arizona anthropologist Barbara Mills, the study is based on analysis of more than 800,000 painted ceramic artefacts and more than 4,800 obsidian artefacts dating from A.D. 1200-1450, uncovered from more than 700 sites in the western Southwest, in what is now Arizona and western New Mexico.
Barbara Mills, director, UA School of Anthropology
Social network analysis
With funding from the National Science Foundation, Mills, director of the UA School of Anthropology, worked with collaborators at Archeology Southwest in Tucson to compile a database of more than 4.3 million ceramic artefacts and more than 4,800 obsidian artefacts, from which they drew for the study.
They then applied formal social network analysis to see what material culture could teach them about how social networks shifted and evolved during a period that saw large-scale demographic changes, including long-distance migration and coalescence of populations into large villages.
Dramatic changes
Their findings illustrate dramatic changes in social networks in the Southwest over the 250-year period between A.D. 1200 and 1450. They found, for example, that while a large social network in the southern part of the Southwest grew very large and then collapsed, networks in the northern part of the Southwest became more fragmented but persisted over time.
“Network scientists often talk about how increasingly connected networks become, or the ‘small world’ effect, but our study shows that this isn’t always the case,” said Mills, who led the study with co-principal investigator and UA alumnus Jeffery Clark, of Archaeology Southwest.
“Our long-term study shows that there are cycles of growth and collapse in social networks when we look at them over centuries,” Mills said. “Highly connected worlds can become highly fragmented.”
Maintaining relationships
Another important finding was that early social networks do not appear to have been as restricted as expected by settlements’ physical distance from one another. Researchers found that similar types of painted pottery were being created and used in villages as far as 250 kilometres apart, suggesting people were maintaining relationships across relatively large geographic expanses, despite the only mode of transportation being walking.
“They were making, using and discarding very similar kinds of assemblages over these very large spaces, which means that a lot of their daily practices were the same,” Mills said. “That doesn’t come about by chance; it has to come about by interaction – the kind of interaction where it’s not just a simple exchange but where people are learning how to make and how to use and ultimately discard different kinds of pottery.”
“That really shocked us, this idea that you can have such long distance connections. In the pre-Hispanic Southwest they had no real vehicles, they had no beasts of burden, so they had to share information by walking,” she said.
The application of formal social network analysis – which focuses on the relationships among nodes, such as individuals, household or settlements – is relatively new in the field of archaeology, which has traditionally focused more on specific attributes of those nodes, such as their size or function.
The UA study shows how social network analysis can be applied to a database of material culture to illustrate changes in network structures over time.
“We already knew about demographic changes – where people were living and where migration was happening – but what we didn’t know was how that changed social networks,” Mills said. “We’re so used to looking traditionally at distributions of pottery and other objects based on their occurrence in space, but to see how social relationships are created out of these distributions is what network analysis can help with.”
Important implications
One of Mills’s collaborators on the project was Ronald Breiger, renowned network analysis expert and a UA professor of sociology, with affiliations in statistics and government and public policy, who says being able to apply network analysis to archaeology has important implications for his field.
“Barbara (Mills) and her group are pioneers in bringing the social network perspective to archaeology and into ancient societies,” said Breiger, who worked with Mills along with collaborators from the UA School of Anthropology; Archaeology Southwest; the University of Wisconsin, Milwaukee; Hendrix College; the University of Colorado, Boulder; the Santa Fe Institute; and Archaeological XRF Laboratory in Albuquerque, N.M.
“What archaeology has to offer for a study of networks is a focus on very long-term dynamics and applications to societies that aren’t necessarily Western, so that’s broadening to the community of social network researchers,” Breiger said. “The coming together of social network and spatial analysis and the use of material objects to talk about culture is very much at the forefront of where I see the field of social network analysis moving.”
Going forward, Mills hopes to use the same types of analyses to study even older social networks.
“We have a basis for building on, and we’re hoping to get even greater time depth. We’d like to extend it back in time 400 years earlier,” she said. “The implications are we can see things at a spatial scale that we’ve never been able to look at before in a systematic way. It changes our picture of the Southwest.”
Source: University of Arizona
- Barbara J. Mills et al, Transformation of social networks in the late pre-Hispanic US Southwest, PNAS March 25, 2013 201219966
University of Arizona. Study traces cycles of growth and collapse in social networks. Past Horizons. March 27, 2013, from http://www.pasthorizonspr.com/index.php/archives/03/2013/study-traces-cycles-of-growth-and-collapse-in-social-networks
Restoration workers discover nine secret crypts hidden under the ruins of Coventry’s bombed cathedral.
Work has been taking place after a crack appeared in part of the 14th Century ruins, in September 2011.
It was already known there were two crypts, but Dr Jonathan Foyle, the chief executive of the World Monuments Fund, which is overseeing the work, said it was like finding a “subterranean wonderland”.
It is thought the crypts were originally used as burial places for the nobility. Some contain human bones, which are thought to have been cleared from the cemetery which was built on for the new cathedral.
Read more on www.bbc.co.uk
Three ball game courts, two terraced buildings and even a 1,000 year old residential area have all been revealed in the El Tajín archaeological zone in Veracruz, Mexico.
Archaeologists from the National Institute of Anthropology and History (INAH) have used the latest remote sensing technology to investigate pre-Hispanic sites for the first time in the country.
In addition to locating these remains that were hidden by vegetation, the use of this new technology will help determine the condition of the site as a whole.
Lidar image showing approach to ball court. Image: INAH
Years of exploration
Dr. Guadalupe Zetina Gutiérrez, the principal researcher at El Tajín and a remote sensing and geographic information systems (GIS) specialist, reported that two years of exploration using this technology has resulted in these exciting new finds, which now require archaeological excavation.
The addition of the three newly discovered ball game courts increases the number of this type of structure in El Tajin from 17 to 20. “This number could increase even more” he remarked, “as we are working on the digital model of each sector of the site in turn, this discovery represents only those detected in the southern and northern sectors.”
All the ball courts so far located on the site vary both in dimension and characteristics; and in the case of the three new examples this is also true.
With an accuracy of up to 5 cm, LiDAR technology can create an accurate digital model of the site that can then be analysed using GIS software.
Zetina Gutierrez explained that they were also able to locate two terraces consisting of platforms of around 10 to 12 metres in height, in the upper part of the old city, from where there would have been a panoramic view of El Tajín.
Comparison between the pyramid of the niches and one of the terraces. Image: INAH
New era for technology
The archaeologist is extremely excited at the discovery of a new residential area in the western part of the nucleus of El Tajín and commented that previously searching for new elements of architecture such as this was a huge investment of time, labour and materials.
The INAH specialists also used a total of 60,000 thermographic images to identify cracks and structural problems on the monuments, but no major damage was found.
LiDAR image of Terrace. Image: INAH
New technology has not only served to make a three-dimensional survey of El Tajín and an inventory of the structures that exist, but has also supplied new data to inform the direction of conservation.
Zetina Gutiérrez concluded that this was a new era for archaeology in Mexico.
Source: National Institute of Anthropology and History (INAH)
El Tajín is open to the public and became a World Heritage Site in 1992.
More Information
- El Tajín, Abode of the Dead ( Archaeology Magazine)
- El Tajin, Pre-Hispanic City (UNESCO)
- Instituto Nacional de Antropología e Historia
National Institute of Anthropology and History (INAH). New technology reveals El Tajin’s hidden ball courts. Past Horizons. March 26, 2013, from http://www.pasthorizonspr.com/index.php/archives/03/2013/new-technology-reveals-el-tajins-many-hidden-buildings
New research into Thonis-Heracleion, a sunken port-city that served as the gateway to Egypt in the first millennium BC, was examined at a recent international conference at the University of Oxford. The port city, situated 6.5 kilometres off today’s coastline, was one of the biggest commercial hubs in the Mediterranean before the founding of Alexandria.
The Oxford Centre for Maritime Archaeology at the University of Oxford is collaborating on the project with the European Institute for Underwater Archaeology (IEASM) in cooperation with Egypt’s Ministry of State for Antiquities.
An archaeologist measures the feet of a colossal red granite statue at the site of Thonis-Heracleion in Aboukir Bay. © Franck Goddio/Hilti Foundation, photo: Christoph Gerigk
Port of entry
This obligatory port of entry, known as ‘Thonis’ by the Egyptians and ‘Heracleion’ by the Greeks, was where seagoing ships are thought to have unloaded their cargoes to have them assessed by temple officials and taxes extracted before transferring them to Egyptian ships that went upriver. In the ports of the city, divers and researchers are currently examining 64 Egyptian ships, dating between the eighth and second centuries BC, many of which appear to have been deliberately sunk. Researchers say the ships were found beautifully preserved, in the mud of the sea-bed. With 700 examples of different types of ancient anchor, the researchers believe this represents the largest nautical collection from the ancient world.
“The survey has revealed an enormous submerged landscape with the remains of at least two major ancient settlements within a part of the Nile delta that was criss-crossed with natural and artificial waterways,” said Dr Damian Robinson, Director of the Oxford Centre for Maritime Archaeology at the University of Oxford. Dr Robinson, who is overseeing the excavation of one of the submerged ships known as Ship 43, has discovered that the Egyptians had a unique shipbuilding style. He is also examining why the boats appear to have been deliberately sunk close to the port.Several ship graveyards
“One of the key questions is why several ship graveyards were created about one mile from the mouth of the River Nile. Ship 43 appears to be part of a large cluster of at least ten other vessels in a large ship graveyard,” explained Dr Robinson. “This might not have been simple abandonment, but a means of blocking enemy ships from gaining entrance to the port-city. Seductive as this interpretation is, however, we must also consider whether these boats were sunk simply to use them for land reclamation purposes.”
The stele of Thonis-Heracleion (1.90m) had been ordered by Pharaoh Nectanebo I (378-362 BC) and is almost identical to the stele of Naukratis in the Egyptian Museum of Cairo. The place where it was supposed to be erected is explicitly mentioned: Thonis-Heracleion. © Franck Goddio/Hilti Foundation, photo: Christoph Gerigk
Maritime trade in the ancient world
The port and its harbour basins also contain a collection of customs decrees, trading weights, and evidence of coin production. The material culture, for example, coin weights, was also discussed at the conference, placing this into the wider narrative of how maritime trade worked in the ancient world.
Elsbeth van der Wilt, from the University of Oxford, said: “Thonis-Heracleion played an important role in the network of long-distance trade in the Eastern Mediterranean, since the city would have been the first stop for foreign merchants at the Egyptian border. Excavations in the harbour basins yielded an interesting group of lead weights, likely to have been used by both temple officials and merchants in the payment of taxes and the purchasing of goods. Amongst these are an important group of Athenian weights. They are a significant archaeological find because it is the first time that weights like these have been identified during excavations in Egypt.”
300 statuettes and amulets
Another Oxford researcher, Sanda Heinz, is analysing more than 300 statuettes and amulets from the Late and Ptolemaic Periods, including Egyptian and Greek subjects. The majority depict Egyptian deities such as Osiris, Isis, and their son Horus. “The statuettes and amulets are generally in excellent condition,” she said. “The statuettes allow us to examine their belief system and at the same time have wider economic implications. These figures were mass-produced at a scale hitherto unmatched in previous periods. Our findings suggest they were made primarily for Egyptians; however, there is evidence to show that some foreigners also bought them and dedicated them in temples abroad.”
Franck Goddio, Director of the European Institute of Underwater Archaeology and Visiting Senior Lecturer in Maritime Archaeology at the Oxford Centre for Maritime Archaeology, commented: “The discoveries we have made in Thonis-Heracleion since 2000 thanks to the work of a multidisciplinary team and the support of the Hilti Foundation are encouraging. Charts of the city’s monuments, ports and channels are taking shape more clearly and further crucial information is gathered each year.”
Source: University of Oxford
- The Oxford Centre for Maritime Archaeology
- European Institute of Underwater Archaeology
- Hilti Foundation
University of Oxford. Research sheds light on ancient Egyptian port and ship graveyard. Past Horizons. March 26, 2013, from http://www.pasthorizonspr.com/index.php/archives/03/2013/research-sheds-light-on-ancient-egyptian-port-and-ship-graveyard
Recent years have seen the emergence of scholarship on the history of archaeology and receptions of the classical past.
Neither of these trends has fully engaged with the visual evidence, particularly that of photography, or with the material form of the archive itself.
Using archival photographs taken at the site of Dura-Europos from 1928 to 1937, this article explores how the study of archaeological photographs and archaeological archives can contribute to our understanding of the history and epistemology of archaeology.
Read and download the whole free article here on ajaonline.org
Descriptions of the preparation of ancient Egyptian mummies that appear in both scientific and popular literature are derived largely from accounts by the Greek historians Herodotus and Diodorus Siculus.
A different story to the ancient texts
Egyptian mummy in the Vatican Museum. Image: Joshua Sherurcij via Wikimedia Commons
According to the 5th century BCE Greek historian Herodotus, the ancient Egyptians used cedar oil enemas to remove the stomach and intestines of poorer individuals, with only the elite affording manual evisceration. However, an investigation by researchers from the University of Western Ontario suggests a very different reality to the process.
This new research, based on a detailed examination of 150 ancient mummies from the 5th Dynasty (2494 BCE) to the Roman and Coptic periods of the first centuries CE, has been published in the February issue of HOMO – Journal of Comparative Human Biology.
Three modes
Herodotus described three modes of embalming:
- the rich were eviscerated via a slit through the side of the abdomen with an obsidian blade, through which organs were removed.
- the less well off had their insides removed with cedar oil that was pumped into the body cavity.
- the poorest clients had their intestines flushed out with an enema.
In addition, Herodotus claimed the brain was always removed during the embalming process, and other accounts suggest the heart was always left in place within the body.
Comparing empirical data to historical record
Using published descriptions in the literature for 150 mummies and 3D reconstructions from computed tomography data for 7 mummies, this study compares empirical data with classical descriptions of evisceration, organ treatment and body cavity treatment.
The researchers realised that although there is a rich quantity of data available from these sources, there is a tendency to focus on modern and classical stereotypes of the mummification process.
If the classical and contemporary accounts by Herodotus and Diodorus Siculus were correct, then the above three modes for different classes of people should be adhered to. In addition, the heart should be present in the overwhelming majority of eviscerated mummies, at least in the Late and Ptolemaic periods within which these authors wrote.
A different story
The team found that rich and poor alike most commonly had the transabdominal slit performed, although for the elites evisceration was sometimes performed through a slit in the anus. Given the proportions of elite to common mummification, they also found a very low indication that cedar oil enemas were used.
The removal of the heart seems also to coincide only with a transitional period when the middle classes gained access to mummification, so getting to keep the heart may have become a status symbol after that point as only a quarter of the mummies examined had the heart left in place.
And, whereas Herodotus had suggested mummies underwent brain removal, the researchers found that a fifth of the brains were actually left inside the skull. Almost all the others had been pulled out through the nose.
CT scan and reconstruction of Lady Hudson, showing (A) the pelvic packing and damaged thorax (note the coffin boards visible beneath the thoracic cavity); and (B) the linen packing in the pelvis with the small pool of resin on it hinting at a transabdominal evisceration.
Greater variety of practice over time and geographic location
In spite of the lack of detail present in descriptions of mummies throughout much of the literature, there is substantial evidence for a largely unappreciated variability in the mummification tradition and indeed for much of the study, there is a direct contradiction from the classical descriptions.
To be fair to Herodotus and Diodorus, it is possible that they reported on what they saw at a particular mummification workshop and at a particular point in time and therefore could not have appreciated the full range of variation in the practice throughout Egypt over the course of three millennia.
Source: HOMO – Journal of Comparative Human Biology
More Information
- A.D. Wade, A.J. Nelson Radiological evaluation of the evisceration tradition in ancient Egyptian mummies HOMO – Journal of Comparative Human Biology, Volume 64, Issue 1, February 2013, Pages 1–28
- Egyptian Way of Mummification According to Herodotus
HOMO – Journal of Comparative Human Biology. How to eviscerate an Egyptian mummy: the truth revealed. Past Horizons. March 25, 2013, from http://www.pasthorizonspr.com/index.php/archives/03/2013/how-to-eviscerate-an-egyptian-mummy-the-truth-revealed
Maize was key in early Andean civilisation
Police return smuggled Neolithic artefacts to Kosovo
Stone Circle begins to reveal secrets in Scotland
Please note that your favourite podcast – along with a great deal of additional features – is now also available as an app for iPhone, iPad and iPod Touch on the iTunes Store.
Stone Pages with BAJR and Past Horizons presents the long running archaeology based podcast with the latest archaeology news, mainly related to prehistory, megalithic monuments and discoveries.
Pre-Algebra Geometry Terms: Plane Figures
5-1 and 5-2 Terms: Points, Lines, and Planes
1. A _______________ is an exact location in space. It is usually represented as a dot, but it has no size at all.
2. A _______________ is a straight path that extends without end in opposite directions.
3. A _______________ is a part of a line. It has one endpoint and extends without end in one direction.
4. A ________________ is a part of a line or a ray that extends from one endpoint to another.
5. A ________________ is a perfectly flat surface that extends infinitely in all directions.
6. ________________________ are points that lie on the same line.
7. Figures are ___________________ if they have the same shape and same size.
8. A ________________ is a line that intersects any two or more other lines.
5-1 and 5-2 Terms: Angles
1. An _____________ is formed by two rays with a common endpoint.
2. Two rays are the sides of an angle. This common endpoint is called the ______________.
3. A _________________ angle is an angle that measures exactly 90 degrees.
4. An _______________ angle is an angle that measures less than 90 degrees.
5. An ________________ angle is an angle that measures more than 90 degrees but less than 180 degrees.
6. A _________________ angle is an angle that measures exactly 180 degrees.
7. If the sum of the measures of two angles is 90 degrees, then the angles are __________________ angles.
8. If the sum of the measures of two angles is 180 degrees, then the angles are _______________ angles.
9. Intersecting lines form two pairs of __________________ angles. The opposite angles are always congruent.
5-2 Terms: Parallel and Perpendicular Lines
1. When lines, segments, or rays intersect, they form angles. If the angles formed by two intersecting lines are equal to 90 degrees, the lines are _____________________ lines.
2. Some lines in the same plane do not intersect at all. These lines are _______________ lines.
3. _____________ lines do not intersect, and yet they are also not parallel.
4. ________________ angles are the opposite angles formed by two intersecting lines. These angles have the same measure, so they are congruent.
5. A _______________ is a line that intersects any two or more lines. __________ angles are formed when a transversal intersects two lines. When those two lines are ____________, all of the acute angles formed are congruent, and all of the obtuse angles formed are congruent. These obtuse and acute angles are ______________________.
5-3 Terms: Triangles
1. A ________________ triangle has no congruent sides and no congruent angles.
2. An _________________ triangle has at least 2 congruent sides and 2 congruent angles.
3. In an ________________________ triangle all of the sides and all of the angles are congruent.
4. In an _______________ triangle, all of the angles are acute.
5. An _________________ triangle has one obtuse angle.
6. A _______________ triangle has one right angle.
7. The Triangle Sum Theorem: the sum of the angles in a triangle is _____ degrees.
1. A ______________ is a closed plane figure formed by three or more line segments.
2. Each line segment forms a _____________ of the polygon, and meets, but does not cross.
3. A polygon with 3 sides and 3 angles is called a ________________.
4. A polygon with 4 sides and 4 angles is called a ____________________.
5. A polygon with 5 sides and 5 angles is called a ____________________.
6. A polygon with 6 sides and 6 angles is called a ____________________.
7. A polygon with 7 sides and 7 angles is called a ______________________.
8. A polygon with 8 sides and 8 angles is called an ____________________.
9. A polygon with 9 sides and 9 angles is called a ____________________.
10. A polygon with 10 sides and 10 angles is called a ____________________.
11. A __________________ polygon is a polygon in which all sides are congruent and all angles are congruent.
12. Explain why a circle is not a polygon.
1. A ___________________ has two pairs of parallel sides.
2. A ___________________ has four congruent sides.
3. A ____________________ has four right angles.
4. A ____________________ has four congruent sides and four right angles.
5. A _____________________ has exactly one pair of parallel sides.
6. A _____________________ has exactly two pairs of congruent, adjacent sides.
7. The sum of the angles of a quadrilateral is ______ degrees.
1. A _________________ changes the position or orientation of a figure.
2. A _________________ is when a figure slides along a straight line without turning.
3. A _________________ is when a figure turns around a fixed point.
4. A _________________ is when the figure flips across a line of reflection, creating a mirror image.
5. A _________________ enlarges or reduces a figure.
7.RP Ratios and Proportional Relationships
· ratio- a comparison of two quantities by division ( 12 to 25, 12/25, 12:25)
· equivalent ratios- ratios that name the same comparison
· proportion- an equation that states that two ratios are equivalent
· rate- a ratio that compares two quantities measured in different units ( 55mi/h )
· unit rate- a rate in which the second quantity in the comparison is one unit (55 mi/h: the second quantity is 1 hour)
· unit price- a unit rate used to compare prices
· unit conversion factor- a fraction used in unit conversion in which the numerator and denominator represent the same amount but are in different units ( 60 min/1 h, 100 cm/ 1 m)
· cross product- the product of numbers on the diagonal when comparing two ratios (heart method)
Seventh Grade Accelerated Math (Pre-Algebra) Vocabulary #1
Expressions and Equations (7.EE.1, 7.EE.4, 7.EE.4a)
· expression - a variable or combination of variables, numbers, and symbols that represents a mathematical relationship: 5 + 3, 3y - 2, (18 + n)/4
· algebraic expression - an expression that contains at least one variable: 3y - 2
· verbal expression - a word or phrase: "the product of 7 and m"
· equation - a statement that shows two mathematical expressions are equal
9x + 3 = 4x - 7
· variable - a quantity that changes or can have different values; a symbol, a letter or picture, that stands for a variable quantity: in 2n + 3, n is the variable
· substitute - to replace a variable with a number or another expression in an algebraic expression
· coefficient - a numerical factor in a term of an algebraic expression
in 5x, 5 is the coefficient; in 4.7y², 4.7 is the coefficient
· constant - a value that does not change: in 1,954 + a, 1,954 is the constant
· solve - to find the value that makes the equation true
· inverse - "opposite" operations; operations that "undo" each other
addition and subtraction are inverse operations...so are multiplication and division
· isolate the variable - to get the variable alone on one side of an equation or inequality in order to solve the equation or inequality
· properties of equality -
- addition property of equality - you can add the same amount to both sides of an equation and the statement will still be true
- subtraction property of equality - you can subtract the same amount from both sides of an equation and the statement will still be true
- multiplication property of equality - you can multiply both sides of an equation by the same amount and the statement will still be true
Name__________________________ Date_____________ Block__________
Seventh Grade Accelerated Math (Pre-Algebra) Vocabulary #2
Expressions and Equations (7.EE.1, 7.EE.4, 7.EE.4a, 7.EE.4b)
· multi-step problem - problems that require more than one computation or operation, or the application of more than one mathematical principle or property
· inequality - a mathematical sentence that compares two unequal expressions using one of the symbols < , > , ≤ , ≥ , or ≠
· algebraic inequality - an inequality that contains at least one variable
d + 3 > 10, 5a > b + 3
· solution set - the set of values that make a statement true
· term - the parts of an expression that are added or subtracted
in 7x + 5 - 3y² + 2x, the terms are 7x, 5, 3y² and 2x
· like terms - two or more terms that have the same variable raised to the same power
in 7x + 5 - 3y² + 2x, 7x and 2x are like terms
· simplify - carry out all possible operations, including combining like terms
It is important for your child to review all of the math vocabulary discussed and explored throughout the course of the year. They need to be familiar with the words and meanings in order to solve mathematical problems. They may use their math notebook to brush up on their vocabulary.
In mathematics, a limit is the value that a function or sequence "approaches" as the input or index approaches some value. Limits are essential to calculus (and mathematical analysis in general) and are used to define continuity, derivatives, and integrals.
In formulas, limit is usually abbreviated as lim as in lim(an) = a, and the fact of approaching a limit is represented by the right arrow (→) as in an → a.
Limit of a function
The statement lim (x→c) f(x) = L means that f(x) can be made to be as close to L as desired by making x sufficiently close to c. In that case, the equation can be read as "the limit of f of x, as x approaches c, is L".
Augustin-Louis Cauchy in 1821, followed by Karl Weierstrass, formalized the definition of the limit of a function as the above definition, which became known as the (ε, δ)-definition of limit in the 19th century. The definition uses ε (the lowercase Greek letter epsilon) to represent a small positive number, so that "f(x) becomes arbitrarily close to L" means that f(x) eventually lies in the interval (L - ε, L + ε), which can also be written using the absolute value sign as |f(x) - L| < ε. The phrase "as x approaches c" then indicates that we refer to values of x whose distance from c is less than some positive number δ (the lower case Greek letter delta)—that is, values of x within either (c - δ, c) or (c, c + δ), which can be expressed with 0 < |x - c| < δ. The first inequality means that the distance between x and c is greater than 0 and that x ≠ c, while the second indicates that x is within distance δ of c.
Note that the above definition of a limit is true even if f(c) ≠ L. Indeed, the function f need not even be defined at c.
For example, if
f(x) = (x² − 1) / (x − 1),
then f(1) is not defined (see division by zero), yet as x moves arbitrarily close to 1, f(x) correspondingly approaches 2:
| x | 0.900 | 0.990 | 0.999 | 1.000 | 1.001 | 1.010 | 1.100 |
| f(x) | 1.900 | 1.990 | 1.999 | undefined | 2.001 | 2.010 | 2.100 |
Thus, f(x) can be made arbitrarily close to the limit of 2 just by making x sufficiently close to 1.
In other words, lim (x→1) (x² − 1) / (x − 1) = 2.
This can also be calculated algebraically, as (x² − 1) / (x − 1) = x + 1 for all real numbers x ≠ 1.
Now, since x + 1 is continuous in x at 1, we can plug in 1 for x, thus lim (x→1) f(x) = 1 + 1 = 2.
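To make this numerical behaviour concrete, here is a minimal sketch (not part of the original article; the class name LimitDemo and the sample points are my own choices) that evaluates f(x) = (x² − 1)/(x − 1) at inputs approaching 1 and prints values that get arbitrarily close to 2:

  public class LimitDemo {
      // f is undefined exactly at x = 1 (0/0), but defined everywhere else
      static double f(double x) {
          return (x * x - 1) / (x - 1);
      }
      public static void main(String[] args) {
          double[] xs = {0.9, 0.99, 0.999, 1.001, 1.01, 1.1};
          for (double x : xs) {
              System.out.println("f(" + x + ") = " + f(x));  // values approach 2
          }
      }
  }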
In addition to limits at finite values, functions can also have limits at infinity. For example, consider the function f(x) = (2x − 1) / x:
- f(100) = 1.9900
- f(1000) = 1.9990
- f(10000) = 1.99990
As x becomes extremely large, the value of f(x) approaches 2, and the value of f(x) can be made as close to 2 as one could wish just by picking x sufficiently large. In this case, the limit of f(x) as x approaches infinity is 2. In mathematical notation, lim (x→∞) f(x) = 2.
Limit of a sequence
Consider the following sequence: 1.79, 1.799, 1.7999,... It can be observed that the numbers are "approaching" 1.8, the limit of the sequence.
- For every real number ε > 0, there exists a natural number n0 such that for all n > n0, |an − L| < ε.
Intuitively, this means that eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent, and if it does not, it is divergent. One can show that a convergent sequence has only one limit.
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n goes to infinity of a sequence a(n) is simply the limit at infinity of a function defined on the natural numbers n. On the other hand, a limit L of a function f(x) as x goes to infinity, if it exists, is the same as the limit of any arbitrary sequence an that approaches L, and where an is never equal to L. Note that one such sequence would be L + 1/n.
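As a small illustration of the definition (my own sketch, not from the article; the names SequenceLimit, L and eps are arbitrary), the sequence a_n = L + 1/n eventually stays within any tolerance eps of L:

  public class SequenceLimit {
      public static void main(String[] args) {
          double L = 1.8;      // the limit
          double eps = 1e-3;   // an arbitrary tolerance
          for (int n = 1; n <= 100000; n *= 10) {
              double a = L + 1.0 / n;   // the n-th term of the sequence
              System.out.println("n = " + n + ", a_n = " + a
                  + ", |a_n - L| < eps? " + (Math.abs(a - L) < eps));
          }
      }
  }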
Limit as "standard part"
In non-standard analysis (which involves a hyperreal enlargement of the number system), the limit of a sequence can be expressed as the standard part of the value of the natural extension of the sequence at an infinite hypernatural index n = H. Thus,
lim (n→∞) a_n = st(a_H).
Here the standard part function "st" associates to each finite hyperreal the unique finite real infinitely close to it (i.e., the difference between them is infinitesimal). This formalizes the natural intuition that for "very large" values of the index, the terms in the sequence are "very close" to the limit value of the sequence. Conversely, the standard part of a hyperreal represented in the ultrapower construction by a Cauchy sequence (a_n) is simply the limit of that sequence:
st([a_n]) = lim (n→∞) a_n.
In this sense, taking the limit and taking the standard part are equivalent procedures.
Convergence and fixed point
A formal definition of convergence can be stated as follows. Suppose (p_n), as n goes from 0 to ∞, is a sequence that converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with
lim (n→∞) |p_{n+1} − p| / |p_n − p|^α = λ,
then (p_n), as n goes from 0 to ∞, converges to p of order α, with asymptotic error constant λ.
Given a function f with a fixed point p, there is a nice checklist for checking the convergence of the sequence p_n = f(p_{n−1}).
- 1) First check that p is indeed a fixed point: f(p) = p.
- 2) Check for linear convergence. Start by finding |f′(p)|.
  - If |f′(p)| is in (0, 1], then there is linear convergence.
  - If |f′(p)| = 0, then there is at least linear convergence and maybe something better; the expression should be checked for quadratic convergence.
- 3) If it is found that there is something better than linear convergence, the expression should be checked for quadratic convergence. Start by finding |f″(p)|.
  - If |f″(p)| ≠ 0, then there is quadratic convergence, provided that f‴(p) is continuous.
  - If |f″(p)| = 0, then there is something even better than quadratic convergence.
  - If |f″(p)| does not exist, then there is convergence that is better than linear but still not quadratic.
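As a rough numerical companion to this checklist, the sketch below (my own example, not from the article) runs the fixed-point iteration p_{n+1} = f(p_n) for f(x) = cos(x), whose fixed point is p ≈ 0.739085. Since |f′(p)| = |−sin(p)| ≈ 0.674 lies in (0, 1], the convergence is linear, and the printed error ratios settle near that value:

  public class FixedPointOrder {
      public static void main(String[] args) {
          double p = 0.7390851332151607;   // fixed point of cos(x), computed beforehand
          double x = 1.0;                  // starting guess
          double prevErr = Math.abs(x - p);
          for (int n = 1; n <= 20; n++) {
              x = Math.cos(x);             // one fixed-point step
              double err = Math.abs(x - p);
              System.out.println("n = " + n + ", error ratio = " + (err / prevErr));
              prevErr = err;
          }
      }
  }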
Topological net
An alternative is the concept of limit for filters on topological spaces.
See also
- Limit of a sequence
- Rate of convergence: the rate at which a convergent sequence approaches its limit
- Cauchy sequence
- Limit of a function
- Banach limit defined on the Banach space that extends the usual limits.
- Limit in category theory
- Asymptotic analysis: a method of describing limiting behavior
- Big O notation: used to describe the limiting behavior of a function when the argument tends towards a particular value or infinity
- Convergent matrix
- Stewart, James (2008). Calculus: Early Transcendentals (6th ed.). Brooks/Cole. ISBN 0-495-01166-5.
- Larson, Ron; Edwards, Bruce H. (2010). Calculus of a single variable (Ninth ed.). Brooks/Cole, Cengage Learning. ISBN 978-0-547-20998-2.
- Numerical Analysis, 8th Edition, Burden and Faires, Section 2.4 Error Analysis for Iterative Methods
Pressure = Force / Unit Area
Pressure is the ratio of force applied per unit area: Force / Unit Area = Pressure. Force is created by the weight an object exerts onto another object. The measurement of pressure is clear-cut for solid on solid; however, liquids and gases are a little different, because these types of matter need to be confined.
In this lesson you will learn:
• What the pressure is when a solid applies physical force to another solid?
• What results when solid applies physical force on confined fluid?
• What are the results when the applied force is gravity?
Pressure of Solid on Solid
When applying force to a solid object, pressure represents the amount of force applied divided by the surface area of which the pressure is applied. The equation we use for pressure is:
P = F / A
• P is for Pressure
• F is for Applied Force
• A is for the surface area for which the force is applied
• F / A represents F divided by A
Push on an object with a force of 20 pounds and, let's say, your hand has an area of 10 square inches; the pressure exerted would be 20 / 10 = 2 pounds per square inch.
Pressure = Force / Area
Mathematically speaking, the smaller the surface area, the greater the pressure. Conversely, the greater the surface area, spreading the force out, the less pressure per unit area.
Solid Object Applying Pressure to a Restricted Fluid
When confined in a cylinder or container, pressure can be calculated on a liquid or gas. By using a piston to exert force, pressure would be created. The pressure would be calculated by using the area of the piston. Therefore the force exerted divided by the area of the piston would equal the pressure created: P = F / A.
Thus the pressure is equal throughout the container or cylinder, pressing equally on all inside surfaces. Using a bicycle pump for example, pressure is created inside the pump and transported through a hose and into the tube of the tire. Therefore the air continues to be confined.
Pressure is exerted in all directions in a fluid.
The greater the Force the greater the Pressure in the container or cylinder.
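As a quick sketch of the piston idea (my own example; the force and area numbers are made-up values for illustration), the pressure in the pump is just the applied force divided by the piston area, in the same P = F / A form used above:

  public class PumpPressure {
      public static void main(String[] args) {
          double force = 30.0;        // pounds applied to the piston (assumed value)
          double pistonArea = 1.5;    // square inches (assumed value)
          double pressure = force / pistonArea;   // P = F / A
          System.out.println("Pressure in the pump: " + pressure + " psi");
      }
  }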
Caused By Gravity
Weight of an object is the force caused by gravity. Therefore we can substitute weight for force in the pressure equation. Thus the weight (W) of an object divided by the surface area (A) where the weight is exerted equals the pressure (P) created by that object.
P = W / A
When placing an object on the floor, the weight of the object divided by the area of the floor is the pressure over the area of contact.
Pressure = Weight / Area
Example With Shoes
For example, women's high heel shoes have a very small surface area at the heel, therefore creating greater pressure per area of the floor. Thus shoes of this type often cause damage to some types of flooring due to the increased pressure at the heel.
The average shoe distributes a person's weight over 20 square inches. Using a 100 pound person for example, the equation would be 100 / 20 = 5 pounds per square inch on the floor.
A spiked heel, however, is only 0.25 square inches. Again using a 100 pound person, the equation would be 100 / 0.25 = 400 pounds per square inch at the heel on the floor. Remember, the smaller the surface area, the greater the pressure; in this case it is enough to damage some floors.
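The shoe comparison above can be checked with a short sketch (my own code; the weights and areas are the ones used in the text, and the class name is arbitrary):

  public class ShoePressure {
      public static void main(String[] args) {
          double weight = 100.0;         // pounds
          double flatShoeArea = 20.0;    // square inches
          double spikedHeelArea = 0.25;  // square inches
          // P = W / A for each case
          System.out.println("Flat shoe: " + (weight / flatShoeArea) + " psi");      // 5.0
          System.out.println("Spiked heel: " + (weight / spikedHeelArea) + " psi");  // 400.0
      }
  }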
The weight of a container or cylinder full of liquid would press on the bottom, similar to that of a solid object. The pressure would be calculated the same as if the weight was from a solid object:
P = W / A
One difference in a fluid is that the pressure is exerted equally in all directions for both the sides and the bottom.
Contained gases and liquids present pressure due to weight at all points in the fluid.
Pressure is the amount of force on an object over an area. The equation for calculating pressure is P = F / A. Pressure can be calculated when a solid is being pressed onto another solid. Pressure can also be determined for a fluid when liquids or gases are confined in a container. The weight of an object can also create force.
The Way of the Java/Variables and types
As I mentioned in the last chapter, you can put as many statements as you want in main. For example, to print more than one line:
class Hello
{
  // main: generate some simple output
  public static void main (String[] args)
  {
    System.out.println ("Hello, world.");   // print one line
    System.out.println ("How are you?");    // print another
  }
}
Also, as you can see, it is legal to put comments at the end of a line, as well as on a line by themselves.
The phrases that appear in quotation marks are called strings, because they are made up of a sequence (string) of letters. Actually, strings can contain any combination of letters, numbers, punctuation marks, and other special characters.
println is short for print line, because after each line it adds a special character, called a newline, that causes the cursor to move to the next line of the display. The next time println is invoked, the new text appears on the next line.
Often it is useful to display the output from multiple print statements all on one line. You can do this with the print command:
class Hello
{
  // main: generate some simple output
  public static void main (String[] args)
  {
    System.out.print ("Goodbye, ");
    System.out.println ("cruel world!");
  }
}
In this case the output appears on a single line as Goodbye, cruel world!. Notice that there is a space between the word Goodbye and the second quotation mark. This space appears in the output, so it affects the behavior of the program.
Spaces that appear outside of quotation marks generally do not affect the behavior of the program. For example, I could have written:
class Hello
{
  public static void main (String[] args)
  {
    System.out.print ("Goodbye, ");
    System.out.println ("cruel world!");
  }
}
This program would compile and run just as well as the original. The breaks at the ends of lines (newlines) do not affect the program's behavior either, so I could have written:
class Hello { public static void main (String[] args) {
System.out.print ("Goodbye, "); System.out.println ("cruel world!"); } }
That would work, too, although you have probably noticed that the program is getting harder and harder to read. Newlines and spaces are useful for organizing your program visually, making it easier to read the program and locate syntax errors.
One of the most powerful features of a programming language is the ability to manipulate variables. A variable is a named location that stores a value. Values are things that can be printed and stored and (as we'll see later) operated on. The strings we have been printing ("Hello, World.", "Goodbye, ", etc.) are sequences of values.
In order to store a value, you have to create a variable. Since the values we want to store are strings, we will declare that the new variable is a string:
This statement is a declaration, because it declares that the variable named fred has the type String. Each variable has a type that determines what kind of values it can store. For example, the int type can store integers, and it will probably come as no surprise that the String type can store strings.
You may notice that some types begin with a capital letter and some with lower-case. Java is a case-sensitive language, so you should take care to get it right.
To create an integer variable, the syntax is int bob;, where bob is the arbitrary name you made up for the variable. In general, you will want to make up variable names that indicate what you plan to do with the variable. For example, if you saw these variable declarations:
String firstName; String lastName; int hour, minute;
you could probably make a good guess at what values would be stored in them. This example also demonstrates the syntax for declaring multiple variables with the same type: hour and minute are both integers (int type).
Now that we have created some variables, we would like to store values in them. We do that with an assignment statement.
fred = "Hello.";   // give fred the value "Hello."
hour = 11;         // assign the value 11 to hour
minute = 59;       // set minute to 59
This example shows three assignments, and the comments show three different ways people sometimes talk about assignment statements. The vocabulary can be confusing here, but the idea is straightforward:
- When you declare a variable, you create a named storage location.
- When you make an assignment to a variable, you give it a value.
As a general rule, a variable has to have the same type as the value you assign it. You cannot store a String in minute or an integer in fred.
On the other hand, that rule can be confusing, because there are many ways that you can convert values from one type to another, and Java sometimes converts things automatically. So for now you should remember the general rule, and we'll talk about special cases later.
Another source of confusion is that some strings look like integers, but they are not. For example, fred can contain the string "123", which is made up of the characters 1, 2 and 3, but that is not the same thing as the number 123.
fred = "123"; // legal fred = 123; // not legal
You can print the value of a variable using the same commands we used to print Strings.
class Hello
{
  public static void main (String[] args)
  {
    String firstLine;
    firstLine = "Hello, again!";
    System.out.println (firstLine);
  }
}
This program creates a variable named firstLine, assigns it the value "Hello, again!" and then prints that value. When we talk about printing a variable, we mean printing the value of the variable. To print the name of a variable, you have to put it in quotes. For example: System.out.println ("firstLine");
If you want to get a little tricky, you could write
String firstLine;
firstLine = "Hello, again!";
System.out.print ("The value of firstLine is ");
System.out.println (firstLine);
The output of this program is:
The value of firstLine is Hello, again!
I am pleased to report that the syntax for printing a variable is the same regardless of the variable's type.
int hour, minute;
hour = 11;
minute = 59;
System.out.print ("The current time is ");
System.out.print (hour);
System.out.print (":");
System.out.print (minute);
System.out.println (".");
The output of this program is
The current time is 11:59.
WARNING: It is common practice to use several print commands followed by a println, in order to put multiple values on the same line. But you have to be careful to remember the println at the end. In many environments, the output from print is stored without being displayed until the println command is invoked, at which point the entire line is displayed at once. If you omit println, the program may terminate without ever displaying the stored output!
A few sections ago, I said that you can make up any name you want for your variables, but that's not quite true. There are certain words that are reserved in Java because they are used by the compiler to parse the structure of your program, and if you use them as variable names, it will get confused. These words, called keywords, include public, class, void, int, and many more.
The complete list is available at this Java doc.
Rather than memorize the list, I would suggest that you take advantage of a feature provided in many Java development environments: code highlighting. As you type, different parts of your program should appear in different colors. For example, keywords might be blue, strings red, and other code black. If you type a variable name and it turns blue, watch out! You might get some strange behavior from the compiler.
Operators are special symbols that are used to represent simple computations like addition and multiplication. Most of the operators in Java do exactly what you would expect them to do, because they are common mathematical symbols. For example, the operator for adding two integers is +.
The following are all legal Java expressions whose meaning is more or less obvious:
1 + 1
hour - 1
hour*60 + minute
minute/60
Expressions can contain both variable names and numbers. In each case the name of the variable is replaced with its value before the computation is performed.
Addition, subtraction and multiplication all do what you expect, but you might be surprised by division. For example, the following program:
int hour, minute;
hour = 11;
minute = 59;
System.out.print ("Number of minutes since midnight: ");
System.out.println (hour*60 + minute);
System.out.print ("Fraction of the hour that has passed: ");
System.out.println (minute/60);
would generate the following output:
Number of minutes since midnight: 719
Fraction of the hour that has passed: 0
The first line is what we expected, but the second line is odd. The value of the variable minute is 59, and 59 divided by 60 is 0.98333, not 0. The reason for the discrepancy is that Java is performing integer division.
When both of the operands are integers (operands are the values an operator operates on), the result must also be an integer, and integer division in Java rounds toward zero, which for positive values like these means rounding down, even when the next integer is so close.
A possible alternative in this case is to calculate a percentage rather than a fraction:
System.out.print ("Percentage of the hour that has passed: "); System.out.println (minute*100/60);
The result is:
Percentage of the hour that has passed: 98
Again the result is rounded down, but at least now the answer is approximately correct. In order to get an even more accurate answer, we could use a different type of variable, called floating-point, that is capable of storing fractional values. We'll get to that in the next chapter.
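As a rough sketch of the difference (this fragment is not part of the original program; the declaration of minute is repeated so the lines stand on their own), compare integer division with floating-point division, which is previewed here even though it is covered in the next chapter:

int minute = 59;
System.out.println (minute / 60);         // integer division: prints 0
System.out.println (minute * 100 / 60);   // scaled to a percentage first: prints 98
System.out.println (minute / 60.0);       // floating-point division: prints approximately 0.9833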
Order of operations
When more than one operator appears in an expression the order of evaluation depends on the rules of precedence. A complete explanation of precedence can get complicated, but just to get you started:
Multiplication and division take precedence (happen before) addition and subtraction. So 2*3-1 yields 5, not 4, and 2/3-1 yields -1, not 1 (remember that in integer division 2/3 is 0).
If the operators have the same precedence they are evaluated from left to right. So in the expression minute*100/60, the multiplication happens first, yielding 5900/60, which in turn yields 98. If the operations had gone from right to left, the result would be 59*1 which is 59, which is wrong.
Any time you want to override the rules of precedence (or you are not sure what they are) you can use parentheses. Expressions in parentheses are evaluated first, so 2 * (3-1) is 4. You can also use parentheses to make an expression easier to read, as in (minute * 100) / 60, even though it doesn't change the result.
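Here is a small sketch, added for illustration rather than taken from the text, that prints the expressions discussed above so you can verify the precedence rules yourself (minute is assumed to be 59, as in the earlier examples):

int minute = 59;
System.out.println (2*3 - 1);              // prints 5: multiplication happens before subtraction
System.out.println (2/3 - 1);              // prints -1: integer division makes 2/3 equal 0
System.out.println (2 * (3-1));            // prints 4: the parentheses are evaluated first
System.out.println ((minute * 100) / 60);  // prints 98, the same as minute*100/60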
Operators for Strings
In general you cannot perform mathematical operations on Strings, even if the strings look like numbers. The following are illegal (if we know that fred has type String)
fred - 1 "Hello"/123 fred * "Hello"
By the way, can you tell by looking at those expressions whether fred is an integer or a string? Nope. The only way to tell the type of a variable is to look at the place where it is declared.
Interestingly, the + operator does work with Strings, although it does not do exactly what you might expect. For Strings, the + operator represents concatenation, which means joining up the two operands by linking them end-to-end. So "Hello, " + "world." yields the string "Hello, world." and fred + "ism" adds the suffix ism to the end of whatever fred is, which is often handy for naming new forms of bigotry.
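For instance (a sketch added here for illustration; the value stored in fred is just an example):

String fred = "nepot";
System.out.println ("Hello, " + "world.");   // prints Hello, world.
System.out.println (fred + "ism");           // prints nepotism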
So far we have looked at the elements of a programming language---variables, expressions, and statements---in isolation, without talking about how to combine them.
One of the most useful features of programming languages is their ability to take small building blocks and compose them. For example, we know how to multiply numbers and we know how to print; it turns out we can do both at the same time:
System.out.println (17 * 3);
Actually, I shouldn't say at the same time, since in reality the multiplication has to happen before the printing, but the point is that any expression, involving numbers, strings, and variables, can be used inside a print statement. We've already seen one example:
System.out.println (hour*60 + minute);
But you can also put arbitrary expressions on the right-hand side of an assignment statement:
int percentage;
percentage = (minute * 100) / 60;
This ability may not seem so impressive now, but we will see other examples where composition makes it possible to express complex computations neatly and concisely.
WARNING: There are limits on where you can use certain expressions; most notably, the left-hand side of an assignment statement has to be a variable name, not an expression. That's because the left side indicates the storage location where the result will go. Expressions do not represent storage locations, only values. So the following is illegal: minute+1 = hour;.
- variable A named storage location for values. All variables have a type, which is declared when the variable is created.
- value A number or string (or other thing to be named later) that can be stored in a variable. Every value belongs to one type.
- type A set of values. The type of a variable determines which values can be stored there. So far, the types we have seen are integers (int in Java) and strings (String in Java).
- keyword A reserved word that is used by the compiler to parse programs. You cannot use keywords, like public, class and void as variable names.
- statement A line of code that represents a command or action. So far, the statements we have seen are declarations, assignments, and print statements.
- declaration A statement that creates a new variable and determines its type.
- assignment A statement that assigns a value to a variable.
- expression A combination of variables, operators and values that represents a single result value. Expressions also have types, as determined by their operators and operands.
- operator A special symbol that represents a simple computation like addition, multiplication or string concatenation.
- operand One of the values on which an operator operates.
- precedence The order in which operations are evaluated.
- concatenate To join two operands end-to-end.
- composition The ability to combine simple expressions and statements into compound statements and expressions in order to represent complex computations concisely. | http://en.m.wikibooks.org/wiki/The_Way_of_the_Java/Variables_and_types | 13
72 | The California coastal region has been subjected to intense tectonic forces for millions of years. Folding, faulting of marine sediments, and associated volcanism resulted in the formation of the Klamath and the Salmon Mountains in northern California and the Coast Ranges that extend along most of the California coast. Terrestrial, marine, and volcanic rocks deposited in intermontane valleys compose the aquifers herein called the Coastal Basins aquifers (fig. 102). The California Department of Water Resources considers more than 100 coastal basins to be "significant" because of the amount of ground water potentially obtainable or the scarcity of surface-water sources in a basin. Nearly all of the large population centers in California are located in the coastal basins.
The climate along the coast of California is moderated by the Pacific Ocean and is essentially Mediterranean, characterized by cool winters and warm summers. Precipitation is seasonal and usually in the form of rain. The greatest amounts of precipitation fall during late autumn, winter, and early spring. Precipitation amounts are greatest in northern California and progressively decrease southward. Altitude also influences precipitation patterns; the greatest amounts of precipitation fall in the mountains. Potential annual evaporation in the valleys exceeds annual precipitation from San Francisco Bay southward. As a result, most unregulated rivers in southern California are dry in their lower reaches during the summer months.
The intermontane basins in the coastal mountains of California are structural troughs or depressions that parallel the coastline and formed as a result of folding and faulting (fig. 103). Most of the folds and faults trend northwestward and result from the deformation of older rocks by the intense pressures of colliding continental plates. The rocks that underlie the basins and form the surrounding mountains are primarily marine sediments and metamorphic and igneous rocks, all of which are of Mesozoic age but locally include rocks of Cenozoic age.
The basins are partly filled with unconsolidated and semiconsolidated marine sedimentary rocks that were deposited during periodic encroachment of the sea and with unconsolidated continental deposits that consist of weathered igneous and sedimentary rock material which was transported into the basins primarily by mountain streams. These marine sediments and continental deposits are tens of thousands of feet thick in some basins. In the basins just north of San Francisco Bay, permeable basalt and tuff compose a portion of the materials overlying the older consolidated rocks. In most basins, however, almost all of the permeable material consists of unconsolidated continental deposits, primarily sand and gravel (fig. 103).
In all the basins, most of the freshwater is contained in aquifers that consist of continental deposits of sand and gravel that might be interbedded with confining units of fine-grained material, such as silt and clay. The aquifers and confining units compose an aquifer system. Water enters a typical coastal-basin aquifer in several ways. Runoff from precipitation in the surrounding mountains infiltrates the permeable sediments of the valley floor either at the basin margins or through streambeds where the water table is lower than the water level in the stream. Precipitation that falls on the valley floor provides some direct recharge, but in the coastal basins, most of the precipitation evaporates or is transpired by plants. In a few basins that are hydraulically connected to other basins, water can enter an aquifer system as lateral subsurface flow from an adjacent basin. Of these methods of recharge, runoff from the mountains and percolation through streambeds provide the largest amounts of water to the ground-water system.
Natural movement of water in the aquifers is generally parallel to the long axis of the basin (fig. 103) because of impermeable rocks that commonly form a barrier between the basin and the sea. However, in a few coastal basins, most notably in the Los Angeles–Orange County coastal plain, the coastal barrier is absent, and the natural direction of flow is perpendicular to the long axis of the basin or from the inland mountains to the sea. Before major development, ground water in all the basins discharged directly into the ocean or into bays connected to the ocean. After development, however, most or all the ground water is withdrawn by wells in the basins.
Although all the coastal basins have similar hydrogeologic settings, each is different in its geologic history and land- and water-use characteristics. Because it is beyond the scope of this Atlas to describe all of the coastal basin aquifers, only the basins with the largest ground-water withdrawals are described in this section.
FRESH GROUND-WATER USE AND MANAGEMENT
During the early years of ground-water development in the coastal basins, from the 1850's to early 1900's, the principal use of water was for irrigated agriculture. Although agricultural ground-water use remains substantial, urbanization has gradually replaced most agricultural land in the larger basins and the greatest collective ground-water demand is now for public supply. The largest water users are cities and suburbs from San Francisco southward, but because of the unequal distribution of rainfall, most of the freshwater is in northern California. Accordingly, it has become necessary to regulate streamflow and import water into many coastal basins from the Sierra Nevada, the Colorado River, the Owens Valley, and northern California through an extensive system of aqueducts.
For many years, rapidly growing populations in several basins resulted in ground-water withdrawals that exceeded natural recharge on a long-term basis; this led to marked water-level declines. The consequences of these excessive withdrawals ranged from mild, such as increased pumping costs, to severe, such as land subsidence and seawater intrusion. Today (1995), ground water in the coastal basins of California is carefully managed. The current supply of water from all sources, including imported water, approximately balances demand. However, because the natural recharge to many basins, especially southward from the San Francisco Bay area, is far less than the volume of ground water currently (1995) withdrawn, increases in population will require either additional imports, more conservation, an increase in the amount of water now reclaimed, or a combination of all three. More than any other environmental factor, water availability will likely determine the size of the population these basins can support.
EUREKA AREA BASINS
The Eureka area basins, which consist of the Mad River Valley, the Eureka Plain, and the Eel River Valley, are located southwestward of the Klamath Mountains at the north end of the Coast Ranges (fig. 104). The basins are not densely populated; agriculture and timber are the major industries in the area, and pastureland accounts for most of the agricultural acreage.
The predominant feature of the Eureka area is Humboldt Bay, which is separated from the ocean by spits. Humboldt Bay, the northern end of which is known as Arcata Bay, extends 12 miles parallel to the coastline and is 0.5 to 4 miles wide. The land on the inland side of the bay is flat to gently rolling. The shoreline of the bay has a well-developed beach from which dunes extend inland a short distance over the alluvial plain.
The major streams that drain the area are the Eel River, which flows into the Pacific Ocean south of Humboldt Bay, and the Mad River, which flows into the Pacific Ocean north of the bay. Several small streams also flow into Humboldt Bay. All the streams are tidally influenced and have brackish-water marshes and mud flats along their banks for as much as 1 to 2 miles inland.
Coastal northern California has a Temperate Oceanic climate, which is characterized by moderate temperature and precipitation. Dense fog is frequent and tends to attenuate temperature fluctuations. The average annual precipitation at Eureka is approximately 40 inches per year, most of which falls during the autumn and winter months. Precipitation increases with altitude, and amounts are greater inland in the foothills and mountains.
Aquifers and Confining Units
Unconsolidated deposits of sand, gravel, silt, and clay, which are Pliocene and younger and primarily of alluvial origin, compose the Eureka area aquifers (fig. 104). Near the coast, the alluvial deposits interfinger with estuarine sediments and locally are underlain by marine sediments. The thickness of the unconsolidated deposits ranges from only a few feet to as much as 1,000 feet (fig. 105). The unconsolidated deposits range from coarse to fine grained. The most permeable deposits are surficial alluvium and dune sands. Virtually all fresh ground water is withdrawn from these deposits, but deeper beds yield water in some places. The permeability of the unconsolidated sediments varies with location, however, and well yields vary accordingly. Consolidated and semiconsolidated rocks of minimal permeability form the boundaries of the aquifer system.
Distinct confining units are scarce in the unconsolidated deposits, but large total thicknesses of fine-grained sediments can impede vertical flow sufficiently to create an increase in hydraulic head with depth. Consequently, depending upon the permeability and depth of the water-yielding deposits at a particular location, ground water can be under either confined or unconfined conditions.
The primary fresh ground-water body in the Eureka area is in the Eel River Valley, where ground water under unconfined, or water-table, conditions is available nearly everywhere at depths of 30 feet or less. An exception is in the vicinity of Ferndale, where sediments are fine grained, have minimal permeability, and yield little water to wells except near the mouths of streams, where the sediments are coarse grained and fluvial. A perched water table is above clay beds that form a local confining unit in terrace deposits near the Eel River. Water in the deeper parts of the aquifer in the Eel River Valley, near Humboldt Bay in the Eureka Plain, and in the Mad River Valley, between Eureka and Arcata, is under confined or partially confined conditions.
The aquifer is recharged primarily by runoff from the hills that surround the stream valleys and by seepage from the upper reaches of streams. Minor recharge is by lateral movement of water from adjacent rocks and by direct precipitation. Deeply-buried sediments are recharged by precipitation where they crop out and by leakage from shallower water-yielding beds to which they are hydraulically connected, especially where withdrawals from the deep sediments are sufficient to cause a downward hydraulic gradient. Ground-water movement in the surficial deposits is generally toward the coast (fig. 106), where the water mostly discharges into estuarine reaches of the rivers; some water discharges directly into Humboldt Bay or the Pacific Ocean, or is withdrawn by wells. Water in the deeper sediments is discharged by vertical flow to shallower deposits where the hydraulic gradient is upward, or is withdrawn by deep wells.
Fresh Ground-Water Withdrawals
Irrigation of pastureland accounts for most ground-water use in the Eureka area, followed by withdrawals for industry and public supply. Most of the withdrawals for irrigation are in the coastal plain of the Eel River Valley. The cities of Eureka and Arcata use surface water for their public supplies, whereas many of the smaller communities use ground water. Total estimated ground-water withdrawals during 1972 were 9,000 acre-feet in the Mad River Valley, 15,000 acre-feet in the Eureka Plain, and 10,000 acre-feet in the Eel River Valley. This is more than double the estimated total withdrawal of 15,000 acre-feet during 1952, but current (1995) rates of ground-water withdrawal do not appear to be in excess of natural recharge. Therefore, no shortage of water is likely as long as surface-water supplies remain adequate to supply municipal demands.
The quality of ground water in the Eureka area is generally acceptable for most uses, although concentrations of dissolved iron in water from many wells may exceed the U.S. Environmental Protection Agency's secondary drinking-water recommendation of 300 micrograms per liter. Chloride concentrations in excess of the 250 milligrams per liter drinking-water recommendation are reported in water from wells near the Eel River as much as 4 miles inland from the Pacific Ocean, suggesting that the source of the chloride is brackish water from the tidal reaches of the river. Shallow wells in the dune sands also are prone to seawater intrusion because they must obtain freshwater from a thin lens that floats on saltwater. Excessive withdrawals or minimal recharge lower the freshwater head in the dunes and allow salty water to be drawn into wells.
NORTH SAN FRANCISCO BAY AREA VALLEYS
Among the Coast Ranges north of San Francisco Bay are several valleys underlain by aquifers from which moderate to large volumes of water are withdrawn (fig. 107). The Petaluma, the Sonoma, the Napa, and the Suisun–Fairfield Valleys drain into San Pablo Bay, and the Santa Rosa Basin drains into the Russian River, which empties into the Pacific Ocean.
The north San Francisco Bay area valleys contain a mixture of urban and agricultural lands. Population growth since the 1970's has been rapid and urban areas have replaced much of what was formerly agricultural land. Nonetheless, agriculture remains important to the local economy. Orchard crops are a significant part of the total agricultural output but are being replaced in many areas by vineyards. Many formerly unplanted hillsides now grow wine grapes.
Ground-water supplies in the area are limited by local availability and, to a degree, by the quality of the water. Most of the water used in the area is surface water, much of which is either derived from the Russian River or imported from the Central Valley. Ground water, however, remains the primary source of supply for agriculture, stock watering, and domestic uses and is an important source for municipal supply.
The five drainage basins of the north San Francisco Bay area valleys are structural troughs filled to great depths with marine and continental sediments and volcanic deposits. The basins each have a flat to gently rolling valley floor formed primarily on alluvial fan deposits. The slope of the fan steepens near the foothills at the base of the surrounding mountains. Some streams that drain mountain valleys are perennial only in their upper reaches because the water table falls below the level of the streambed during the dry season. The lower stream reaches are seasonally intermittent.
The North San Francisco Bay area has a Mediterranean-type climate, characterized by moderate temperatures and markedly seasonal precipitation that falls primarily during late autumn to early spring. Precipitation amounts are dependent on altitude, with average annual amounts that range from less than 20 inches in some valley locations to more than 60 inches in the higher elevations of the Coast Ranges.
Aquifers and Confining Units
The principal water-yielding materials in the north San Francisco Bay area valleys are unconsolidated and semiconsolidated marine and continental sediments and unwelded tuffaceous beds in volcanic rocks (fig. 108). Consolidated rocks of Cretaceous and Jurassic age that underlie the entire area have little permeability and form the boundaries of the ground-water flow system. The permeability and extent of water-yielding deposits varies considerably. In all the valleys, alluvial-fan deposits and stream-valley alluvium compose the major part of the aquifer. Locally, marine and estuarine deposits of sand beneath the Santa Rosa and the Petaluma Basins are an important source of ground water. Volcanic tuff of Pliocene age in the areas of volcanic rocks shown in figure 108 yields water to wells in the Sonoma and the Petaluma Basins.
Ground water is under unconfined, or water-table, conditions in shallow alluvial deposits and locally where it is near the land surface in other types of rocks. The ground water is confined or semiconfined in deeper parts of the alluvial deposits and nonalluvial formations. Because of their lenticular nature, water-yielding deposits in the north San Francisco Bay area valleys are generally discontinuous and isolated. Further, many of the deep deposits are displaced by faults. As a result, the valleys are a collection of variously connected and isolated aquifers. Generally, the alluvial-fan and stream-valley alluvial deposits in each basin are sufficiently continuous to be considered single aquifers; however, because of the geologic complexity of the area and the limited availability of data, the exact extent and degree of continuity of many deep aquifers is unknown.
Ground-Water Flow System
Recharge to the ground-water flow system enters permeable sediments at the valley margins primarily as runoff from precipitation in the mountains and hills that surround the valleys. Other sources of recharge are precipitation that falls directly on permeable deposits in low-lying areas of the valleys and seepage through streambeds in areas where the water table is lower than the stream level and the streambed sediments are sufficiently permeable to permit infiltration into the aquifers. Discharge is by seepage to gaining reaches of streams, spring discharge, evapotranspiration, and withdrawals from wells.
All the basins are drained by streams that are perennial only in their upper reaches. The lower reaches become dry in summer because of infiltration where they are underlain by permeable deposits. The ground-water flow system in most basins is essentially self-contained, and interbasin transfer of water is minor.
Ground-water movement generally followed surface-water drainage under natural, or predevelopment, conditions (fig. 109). The discontinuous nature of deep water-yielding materials makes it impossible to construct accurate basinwide water-level maps. However, the ground-water flow pattern in deep aquifers is likely to be similar to that of shallow aquifers. Withdrawals alter the direction of ground-water movement locally and can cause significant changes in regional flow patterns if withdrawal rates are relatively large. Present-day (1995) flow patterns, in general, do not differ significantly from those of predevelopment conditions except locally near withdrawal centers. Withdrawal in the past, however, has reversed the freshwater gradient and induced the intrusion of saltwater in the lower parts of the Napa, the Sonoma, and the Petaluma Valleys.
Fresh Ground-Water Withdrawals
Surface and ground water are used conjunctively in the north San Francisco Bay area valleys. Several municipalities obtain a significant amount of their supplies from imported surface water, either from the Russian River or the aqueduct systems that serve San Francisco Bay area cities to the south. Nonetheless, ground water is used for some municipal supplies, as well as for irrigation, stock watering, and domestic uses.
Currently (1995), ground-water recharge and discharge are approximately in balance on an average annual basis in most areas, and withdrawals in excess of recharge are not common. However, because of the relatively limited storage capacity of the aquifers, as well as water-quality concerns, the amount of additional ground water that can be withdrawn without adverse effects is restricted. Although lowering of the water table can allow infiltration of additional recharge that might normally be rejected, withdrawal in excess of recharge can deplete ground-water reserves and possibly cause the migration of poor-quality water into wells. Careful monitoring of local and regional water levels will always be necessary to ensure proper use of the resource.
The quality of ground water in the north San Francisco Bay area valleys is generally suitable for most purposes. However, some problems, such as locally large concentrations of chloride, sodium, boron, nitrate, iron, and manganese, might restrict use of ground water for some applications.
Large concentrations of chloride can make water unusable for drinking and can also be toxic to plants. Drinking-water recommendations of the U.S. Environmental Protection Agency suggest a chloride concentration of less than 250 milligrams per liter. However, chloride in concentrations as low as 106 milligrams per liter may be toxic to some plants; such concentrations have been detected in ground water in the Santa Rosa Basin. Sources of chloride in the north San Francisco Bay area aquifers include seawater intrusion, thermal water, and dissolved minerals from marine and volcanic rocks. The valleys most affected by large chloride concentrations are the Petaluma, the Sonoma, and the Napa, in which seawater intrusion caused by excessive ground-water withdrawals has been the primary source (fig. 110). Reduced withdrawals and increased surface-water imports have helped alleviate the salinity problem.
Excessive sodium in irrigation water can be toxic to plants and can decrease soil permeability. Possible sources of sodium in the north San Francisco Bay area valleys include cation exchange between ground water and clay minerals, upward migration of salty water along faults, dissolved minerals in water from marine sediments, thermal water, and seawater intrusion.
Sodium is often the dominant cation in ground water in the north San Francisco Bay area valleys and has been reported locally in concentrations in excess of 250 milligrams per liter, which is sufficiently large to be of concern. The problem is widespread in the Santa Rosa Basin, where large sodium concentrations are thought to be related primarily to cation exchange. The source of excessive sodium in the other four valleys could be one or all of the sources listed above, but seawater intrusion is the primary source in the alluvial-fan deposits in the southern ends of the Petaluma, the Sonoma, and the Napa Valleys.
Although essential to plant growth in small amounts, boron in excess of 0.5 milligram per liter can be stressful or toxic to many plants, and water with a boron concentration of greater than 2.0 milligrams per liter is toxic to most plants. Boron is usually associated with water that has a large sodium concentration and the sources for boron in the area are generally the same as for sodium. Ground water that has a boron concentration of 0.5 milligram per liter or larger is found in scattered wells throughout the north San Francisco Bay area.
The presence of nitrate in ground water is usually an indication of contamination by septic tanks, fertilizers, or waste from farm animals. Large nitrate concentrations can cause methemoglobinemia (a blood disease) in infants, and State drinking-water standards in California have been set at 45 milligrams per liter of nitrate, or 10 milligrams per liter nitrogen. Nitrate is not a widespread problem in most of the north San Francisco Bay area, except locally, northwest of Petaluma (fig. 110) where nitrate concentrations are as large as three times the maximum allowed for drinking water. The probable sources appear to be septic-tank leachate plus livestock and poultry manure that was placed in unlined pits.
SANTA CLARA VALLEY
The Santa Clara Valley is located at the southern end of San Francisco Bay (fig. 111). Once devoted largely to agriculture, most of the land in the valley is now dedicated to industrial and urban uses. Population growth has resulted in a large water demand, which has exceeded the valley's natural supply since the early 1940's. Withdrawals of ground water in excess of recharge caused large water-level declines, which were followed by seawater intrusion and land subsidence surpassed in California only by that in the San Joaquin Valley. Since the 1940's, importation of surface water has been essential to the control of these problems.
The Santa Clara Valley is in a structural trough that parallels the northwest-trending Coast Ranges. The drainage basin, which includes San Francisco Bay, is bounded by the Santa Cruz Mountains on the southwest and the Diablo Range on the northeast. The basin is about 75 miles long and has a maximum width of 45 miles. The San Andreas Fault is in the Santa Cruz Mountains to the southwest, and the Hayward Fault is on the northeast side of the valley and parallels the Diablo Range (fig. 112). The Santa Clara Valley, which occupies the southern end of the basin, is about 60 miles long, about 30 miles of which extends southeastward beyond San Francisco Bay. The valley has a maximum width of about 15 miles and a total area of about 590 square miles. The altitude of the valley floor ranges from about 350 feet at the southern end to sea level at San Francisco Bay.
The Mediterranean climate of the valley is moderate and has distinct wet and dry seasons. The wet season extends from November to April. Average annual rainfall is about 14 inches on the valley floor.
Aquifers and Confining Units
The aquifer system of the Santa Clara Valley is bounded on three sides by the relatively impermeable consolidated rocks that form the mountains surrounding the valley (fig. 112) and underlie the valley at depth. Ground water in the valley is contained primarily in coarse-grained lenticular deposits of sand and gravel that alternate with discontinuous beds of fine-grained clay and silt that have minimal permeability. The combined thickness of the coarse- and fine-grained deposits is as much as 1,000 feet in some parts of the valley. The alluvial-fan and river-channel deposits near the valley margins contain a higher percentage of coarse-grained materials than deposits near the valley axis and are thus more permeable.
Although interspersed with coarse-grained channel deposits, the cumulative thickness of clay and silt is sufficient in the central two-thirds of the valley to produce confined conditions in the subsurface from southeast of San Jose to beneath San Francisco Bay. Water below a depth of 150 to 200 feet in that area is confined or semiconfined, whereas shallower water is generally unconfined. The confined part of the aquifer system is as much as 800 feet thick, but locally it contains beds of fine-grained deposits that separate it into zones of permeable material sufficiently distinct to be recognized as individual aquifers.
Ground-Water Flow System
Water enters the aquifer system at the valley margins by infiltration from the small streams that emanate from the mountains and by rainfall that falls directly on the valley floor. The natural, or predevelopment, flow pattern was generally parallel to the direction of stream drainage, and water that did not leave the aquifer system by way of evapotranspiration discharged into San Francisco Bay (fig. 112). The Hayward Fault acts as a major impediment to flow on the northeastern side of the valley.
In 1915, the hydraulic head was above land surface throughout much of the valley, and flowing wells were common. However, by 1967, an increase in ground-water withdrawals, as well as below-normal rainfall, resulted in water-level declines of more than 200 feet below 1915 levels in some parts of the valley (fig. 113). Large withdrawals lowered water levels to below sea level over much of the valley and reversed the freshwater gradient in the confined zone from seaward to landward. This reversal resulted in seawater intrusion that was detected in wells as far as 10 miles inland. The large withdrawals also caused widespread land subsidence.
Beginning in the mid-1960's, the decline in artesian head was halted and reversed by a combination of surface-water imports and decreased ground-water withdrawals (fig. 114). Water importation into the Santa Clara Valley began in about 1940 by way of the Hetch Hetchy Aqueduct and was increased in the mid-1960's through the South Bay Aqueduct. By 1980, surface-water imports approximately equaled ground-water withdrawals. Projections of future water demand made in 1983, however, indicated that by 2000 small amounts of additional surface water will have to be imported and the distribution system improved.
Fresh Ground-Water Withdrawals
Significant ground-water use in the Santa Clara Valley began about 1900 with the development of irrigated agriculture. Average annual agricultural withdrawals increased from about 40,000 acre-feet per year from 1915 to 1920 to a maximum 5-year average of about 103,000 acre-feet per year from 1945 to 1950. After 1945, urban and industrial development increased rapidly, and irrigated acreage began to decline; agricultural withdrawals decreased to an average 20,000 acre-feet per year from 1970 to 1975. Meanwhile, municipal and industrial withdrawals increased from about an average of 22,000 acre-feet per year from 1940 to 1945 to an average of 131,000 acre-feet per year from 1970 to 1975. Total withdrawals in the valley increased from 50,000 acre-feet per year from 1915 to 1920 to 185,000 acre-feet per year from 1960 to 1965, and then declined to about 150,000 acre-feet per year from 1970 to 1975 when surface-water imports increased sufficiently to offset the excessive ground-water withdrawals.
Municipal and industrial ground-water use began to exceed agricultural use early in the 1960's. Currently (1995), municipalities and industries account for about 90 percent of the water used in the valley. Ground water used for irrigated agriculture averaged about 14,000 acre-feet per year from 1975 to 1980, while the amount withdrawn for municipal and industrial use was about 150,000 acre-feet per year. The combined annual agricultural, municipal, and industrial ground-water withdrawals of about 164,000 acre-feet were about one-half of the total water used in the valley.
The Santa Clara Valley is underlain by large amounts of clay that readily compacts as a result of excessive ground-water withdrawal, thus causing land subsidence. Land subsidence has been evident over much of the valley and is greater than 8 feet in some places (fig. 115). Subsidence has resulted in flooding in coastal areas and damage to roads, bridges, railroads, and sewer systems. The cost of remedial measures has been estimated to be between $30 million and $50 million annually. The rate of subsidence slowed in 1967 as increased surface-water imports and reduced ground-water withdrawals allowed the hydraulic head to stabilize and start to recover. Under current (1995) conditions of ground-water use and availability, further subsidence is not likely. However, because the compression of the clay is irreversible, land subsidence that has already occurred is permanent.
Ground-water quality is not a serious concern in the Santa Clara Valley, except near San Francisco Bay, where seawater has intruded locally as a result of large ground-water withdrawals. The encroachment has been arrested, for the most part, by a decrease in withdrawals and an increase in recharge from surface-water sources.
SALINAS VALLEY
The Salinas Valley, the largest southern California coastal basin, lies within the southern Coast Ranges between the San Joaquin Valley and the Pacific Ocean (fig. 116). The valley is drained by the Salinas River and extends approximately 150 miles from the headwaters to the mouth of the river at Monterey Bay. The total drainage area of the basin is about 5,000 square miles.
The major land uses in the Salinas Valley are agriculture, rangeland, forest, and urban development. In general, forest lands are on steep slopes, rangelands are in rolling to steep hills, and agricultural and urban development are in areas where slopes are gentle, especially near the Salinas River and its tributaries. Agriculture is the primary water use in the basin and is most intensive near the coast between the city of Salinas and Monterey Bay, where land is devoted primarily to vegetable production (fig. 117). Less land is under cultivation south of King City, where the major crops are grain and wine grapes. Most of the water used in the basin is ground water withdrawn near where it is used. No water is imported, and all recharge originates as precipitation in the drainage basin.
The Salinas Valley lies almost entirely in a northwest-trending structural trough filled principally by unconsolidated continental deposits. The valley is bounded by the San Andreas Fault on the northeast and by a series of aligned and interconnected faults on the southwest (fig. 118). The mountains that bound the valley were formed by uplift and deformation caused by crustal shortening and are underlain by consolidated marine sediments, intrusive igneous rocks, and metamorphic rocks.
The Salinas drainage basin (fig. 118) is bounded on the south by the La Panza Range, on the southwest by the Santa Lucia Range, and on the northwest by the Sierra de Salinas; the 200-mile-long Diablo Range and the shorter Gabilan Range bound the basin on the northeast. The mountains that form the northeastern, northwestern, and southwestern margins of the basin slope steeply and are dissected by streams that have carved steep canyons into the valley walls. The southeastern margin is characterized by gently rolling hills and broad valleys.
The Salinas Valley is about 30 miles wide in the south, about 20 miles wide in the middle of the valley, and about 10 miles wide in the flat lowland areas north of Greenfield. The valley floor has an altitude of about 1,200 feet at Santa Margarita in the south and about 400 feet at San Ardo, and is near sea level at the shoreline of Monterey Bay. Stream gradients are relatively steep in the southern headwater region, and the valley floor is deeply dissected by the streams. As the valley becomes less steep from near San Ardo to Monterey Bay, stream gradients lessen also, and the tributary drainage area becomes smaller.
Climate in the valley is Mediterranean and is moderated by the Pacific Ocean; summers are mild and winters are cool. Precipitation is almost entirely rain, which falls mostly in late autumn, winter, and early spring. Little rain falls from May through October; 87 percent of the yearly total falls from November through April. Average annual rainfall ranges from about 12 to 40 inches within the basin and depends mainly upon altitude. Rainfall on the valley floor ranges from about 12 inches near the center of the valley to about 16 inches near the base of the surrounding mountains.
Aquifers and Confining Units
The Salinas Valley aquifer is contained within a structural trough that is underlain and bounded on the margins by a complex of igneous and metamorphic rocks of pre-Tertiary age. The crystalline rocks in the trough are overlain by consolidated sedimentary rocks of marine origin that yield only small volumes of water (fig. 119). The consolidated marine sedimentary rocks are in turn overlain by semiconsolidated deposits of marine origin. Unconsolidated continental deposits, which include the Paso Robles Formation and constitute the principal aquifer, are the uppermost deposits and fill the valley to depths of 1,000 feet or more.
Freshwater is contained mainly in the unconsolidated basin-fill deposits. In a few areas, sufficient water for domestic and stock use can be obtained from the semiconsolidated and consolidated rocks where they are fractured or weathered. The deeper unconsolidated deposits contain the largest amount of water in storage, but the shallower alluvial-fan and stream-valley deposits currently (1995) yield more water. Very permeable deposits of windblown sand are generally above the water table and, therefore, largely unsaturated; they do, however, form important recharge areas.
The Salinas Valley aquifer system is divisible into upper and lower ground-water basins. The upper basin extends from near the headwaters of the Salinas River and its tributaries to San Ardo, where the unconsolidated deposits narrow. The lower basin extends from San Ardo to Monterey Bay.
In the upper ground-water basin, the degree of confinement varies locally and depends on the presence and total thickness of deposits of fine-grained materials (fig. 120). Most ground water in deep deposits is confined, but the shallow ground water is, for the most part, unconfined. The thickest area of the aquifer in the upper basin is in the Estrella Valley where the unconsolidated deposits are as much as 1,750 feet thick. Recharge in the upper basin is from precipitation, infiltration from streams, and irrigation return flow. Ground-water discharge in the upper basin is by loss to streams, withdrawals by wells, and evapotranspiration. Nearly all the tributary flow to the Salinas River is in the upper basin.
Ground water in the lower basin is mostly under water-table conditions except on the northwest side of the valley from near Gonzales to Monterey Bay. In this area, a clay layer near the land surface provides varying degrees of confinement to water in the aquifers below. Infiltration from the Salinas River provides most of the recharge for aquifers in the lower basin. Nearly all the discharge in the lower basin is by withdrawals from wells.
Ground-Water Flow System
Ground-water movement in most of the valley is in the direction of surface-water flow and follows the gradient of the land surface seaward (fig. 121). In dry years, withdrawals in the upper basin can disturb the natural ground-water flow patterns, but normally the flow direction approximates predevelopment conditions. In the lower basin, however, large withdrawals near Salinas have diverted the natural seaward ground-water flow, and much of the water now moves toward wells.
Much of the water that enters the upper basin from the surrounding mountains runs directly into the Salinas River and its tributary streams. The streams gain water by seepage from the aquifer system because the water table in the headwater area of the Salinas River is above the river level. From approximately San Miguel in the upper valley to Soledad, which is about 40 miles upriver from Monterey Bay, the water table in the vicinity of the river is approximately at the same altitude as the river level, so no water moves between the stream and the aquifer. At Soledad, the water-table altitude is lower than the river level, and the river loses water to the aquifer system.
Fresh Ground-Water Withdrawals
Water use in the valley has increased with a growth in agriculture and population. During 1985, total ground-water withdrawals approximately equaled the basinwide annual recharge of about 700,000 acre-feet. Water levels in the upper basin have shown little decline because of minimal ground-water development. Throughout the lower basin, however, agricultural and municipal withdrawals caused a general decline until the mid-1950's. In 1956, the flow of the Salinas River became perennial with the regulation of the Nacimiento River. Ground-water levels ceased to decline from San Ardo to Gonzales because the increased streamflow maintained recharge to the ground-water system by seepage from the river. In 1967, a second dam was completed on the San Antonio River and helped to maintain the year-round flow of the river. Nonetheless, large and increasing withdrawals near Monterey Bay downstream from the city of Salinas have resulted in continued water-level declines, although availability of surface-water recharge has increased. Water levels in wells in this area have remained below sea level since the late 1940's and have resulted in saltwater encroachment from Monterey Bay.
Ground-water quality in the upper basin is generally acceptable for most uses, except in local areas. Dissolved-solids concentrations in the water range from about 200 to 700 milligrams per liter. The only major area of concern is the so-called Bitterwater area in the upper basin (fig. 122), where boron and arsenic that have been leached from aquifer materials and consolidated rocks can be at excessive levels. Despite the near absence of regional water-quality problems, agricultural and industrial activities have resulted in localized aquifer contamination.
Dissolution of gypsum beds in the deep unconsolidated sediments and in consolidated marine deposits on the east side of the valley causes large concentrations of sulfate in ground and surface waters along the Salinas River and San Lorenzo Creek (fig. 122). Dissolved-solids concentrations in ground water in this area are as much as 3,000 milligrams per liter. Because the ground-water system receives recharge from surface water in the lower basin, ground-water quality in areas without gypsum beds can be affected by infiltration from streams that drain areas with such beds. Water in streams on the southwest side of the valley is less mineralized and partly dilutes the highly mineralized water during the wet season.
Aquifers near the coast are subject to seawater contamination when ground-water withdrawals in the area exceed natural recharge. Large withdrawals for agricultural and municipal supplies have lowered the potentiometric surface east of the city of Salinas until it is considerably below sea level (fig. 121). As a result, the natural freshwater gradient has been reversed from seaward to landward, which allows saltwater to enter the aquifer system where it crops out on the sea floor. Saltwater intrusion was already a concern when monitoring began in 1943, and, as of 1995, the area affected has increased greatly in size. The contamination has resulted in the abandonment of some wells.
Another area of concern in the lower basin is on the east side of the Salinas River between Soledad and Salinas (fig. 122). Organic pollutants and excessive nitrate concentrations that result from industrial and agricultural activity are possible threats to ground-water quality in this area.
LOS ANGELES–ORANGE COUNTY COASTAL PLAIN AQUIFER SYSTEM
The Los Angeles–Orange County coastal plain aquifer system is located in southern California and is contained in a coastal plain basin that extends over an area of approximately 860 square miles (fig. 123). Ground-water development began in the basin in the 1870's, when the demands of irrigated agriculture began to exceed surface-water supplies; however, urbanization subsequently displaced most of the agriculture in the basin, and today the predominant use of water is for public supply. Because metropolitan Los Angeles and the surrounding area is one of the largest population centers in the world (fig. 124), the demand for water is great. In addition to local ground-water sources, water is imported from the Colorado River, the Owens Valley, and northern California by an aqueduct system. Also, reclaimed wastewater is spread in recharge areas for ground-water replenishment and is pumped into the aquifer system near the coast to prevent seawater intrusion.
The Los Angeles–Orange County coastal plain basin is bounded on the north and east by the Santa Monica Mountains and the Puente Hills, on the south by the San Joaquin Hills, and on the west by the Pacific Ocean (fig. 125). The mountains are underlain by consolidated rocks of igneous, metamorphic, and marine-sedimentary origin. These consolidated rocks surround and underlie thick unconsolidated alluvial deposits. The major drainages in the basin are the Los Angeles, the San Gabriel, and the Santa Ana Rivers, all of which have headwaters outside the basin.
Marine sediments deposited during periodic encroachment of the sea and alluvium derived from weathering and erosion of the rocks in the surrounding mountains have filled the basin with a thick sequence of deposits. The surface of the basin is relatively flat, but upwarping along the Newport–Inglewood Uplift (fig. 125) has formed hills that rise in places as much as 400 feet above the surrounding coastal plain. Also, along the coast from just north of Long Beach southward to the San Joaquin Hills, resistant sediments of late Pleistocene age underlie several mesas. The mesas are separated by erosional gaps through which the major drainages either now flow or have flowed historically.
Climate in the basin is Mediterranean, characterized by warm summers, cool winters, and markedly seasonal rainfall. Nearly all rain falls from late autumn to early spring; virtually no precipitation falls during the summer. The average annual rainfall in Los Angeles is about 15 inches. Potential evapotranspiration in the coastal plain exceeds precipitation on an annual basis, and, under natural conditions, the lower reaches of rivers that drain the basin are dry in summer.
Aquifers and Confining Units
The Los Angeles–Orange County coastal plain basin is a structural basin formed by folding of the consolidated sedimentary, igneous, and metamorphic rocks that underlie the basin at great depths. Although the subsurface structure of the basin is complex, two major northwest-trending troughs (fig. 126), which are separated for most of their length by an uplifted and faulted structural zone, contain the sediments that compose the aquifer system. These sediments are as thick as 30,000 feet in some areas.
The coastal plain aquifer system is made up of as many as 11 locally named aquifers. Each consists of a distinct layer of water-yielding sand and gravel usually separated from other sand and gravel beds by clay and silt confining units (fig. 126). In many places, however, either the water-yielding sediments are in direct hydraulic contact or the intervening confining units contain sufficient sand and gravel to allow water to pass between adjacent aquifers.
A layer of clay and silty clay of marine and continental origin, which is at or near the land surface over most of the basin, is a competent confining unit where it does not contain large amounts of sand and gravel. This confining unit ranges from less than 1 foot to about 180 feet in thickness. Near Santa Monica and San Pedro Bays, the confining unit is not present, and ground water is under unconfined, or water-table, conditions.
Freshwater is contained within deposits that range in age from Holocene to late Pliocene. The main freshwater body extends from depths of less than 100 to about 4,000 feet. At greater depths, the water is saline and unpotable. The freshwater body is thickest near the axis of the troughs where water-yielding sediments reach their greatest thickness and thinnest where these sediments overlie anticlines or become thin at the margins of the aquifer system.
Ground-Water Flow System
Before development, ground water in the basin flowed generally toward the Pacific Ocean (fig. 127). Natural recharge, which is virtually all from precipitation, entered the aquifer system at the basin margins as runoff from the mountains, losses along stream channels, subsurface flow from adjacent basins to the north, and precipitation that fell directly on the basin floor. Where aquifers are hydraulically connected and sufficient differences in hydraulic head existed, some water doubtless flowed from one aquifer to another.
Structural features, such as faults and anticlines, alter or restrict ground-water flow at several places in the Los Angeles–Orange County coastal plain basin. The most prominent structural zone is the Newport–Inglewood Uplift (figs. 125 and 126), which trends northwestward, extends virtually the entire length of the basin, and is approximately perpendicular to the direction of natural ground-water flow. The aquifers might be interrupted by faults or thinned near the upwarped structural zone, but, for the most part, such restrictions are not complete barriers to ground-water flow. The sediments that form the mesas along much of the coast have minimal permeability and also impede ground-water flow; however, erosion formed gaps in the mesas, which subsequently filled with alluvial deposits; these gaps allow water to move between the inland aquifers and the sea.
Ground-water flow directions have been altered by withdrawals in the basin. Rapid urban development and the accompanying increase in withdrawals resulted in severe declines in water levels that began in the early 1900's. As a consequence, ground-water gradients near the coast reversed from seaward to landward in some areas in the 1920's, and saltwater intrusion was detected in 1932. Virtually the entire coastline was affected by the 1950's (fig. 128). The hydraulic gradient is now (1995) primarily from recharge areas toward withdrawal centers, rather than toward the ocean (fig. 129). Withdrawals in the deeper aquifers also have created a downward hydraulic gradient over much of the basin. Large expanses of the basin surface have been urbanized, thus decreasing the potential for direct recharge to the aquifer system and increasing the potential for saltwater intrusion.
The critical need for a solution to the seawater encroachment problem brought about coordinated management of water use in the basin beginning in the 1950's. Several methods have been employed to stop the steady progression of seawater into the basin (fig. 130). Ground water, which accounted for 40 to 50 percent of the water used in the basin, was augmented by large amounts of surface water imported from the Colorado River, the Owens Valley, and northern California. In some areas, particularly near the coast, withdrawals have been reduced or wells abandoned (fig. 130A). This has, to some extent, lessened the landward gradient. Artificial recharge (fig. 130B) through ponds or by water spreading, using imported water or reclaimed wastewater, replaces some of the ground water lost from storage and partly compensates for the loss of recharge potential that results from urbanization. Three barriers have been constructed near the coast in areas where seawater was encroaching into the freshwater aquifer system. The barriers consist of a series of either pumping wells that will remove saltwater from the aquifer and form a trough barrier (fig. 130C) or injection wells that pump reclaimed waste water into the permeable sediments, and thus establish a narrow zone in which the freshwater gradient is seaward (fig. 130D). In some places, wells that withdraw saline water from the aquifer system on the seaward side of the injection wells are part of the barrier. To date (1995), the combination of methods has been successful in halting seawater intrusion and also reducing the area of the basin with ground-water levels below sea level (fig. 131).
The quality of water in the confined aquifers in the basin is generally suitable for most uses. Dissolved-solids concentrations in the water are generally less than 500 milligrams per liter and concentrations of chloride do not exceed drinking-water standards recommended by the U.S. Environmental Protection Agency. Water imported from the Owens Valley and the Colorado River for recharge has larger concentrations of dissolved solids, chloride, and sulfate than the ground water, but the quality of the mixed native ground water and imported water is within State and Federal standards for drinking water. | http://pubs.usgs.gov/ha/ha730/ch_b/B-text4.html | 13 |
030 -- Elementary Algebra -- Exercise Solutions
All percent problems are solved by referring to the basic formula that relates percentage, percent, and the base of the percentage.
That formula is Percentage = (Percent)(Base)
Notice that the formula contains just three quantities (percentage, percent, and base). If any two of these three quantities are known, their value may be substituted into the basic formula and it may then be solved to determine the third.
There are only three types of percent problems:
(1) Given Percent and Base, calculate Percentage
(2) Given Percent and Percentage, calculate Base
(3) Given Percentage and Base, calculate Percent
If we let A represent Percentage, P represent Percent, and B represent the Base, then the basic formula may be written as
A = PB
which is read as follows: The percentage A is P percent of the base B.
Use this verbal statement and the basic formula to help translate percent problems into an equation.
Finally, remember that the meaning of percent is "per 100" and convert all percents to decimals.
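To make the recipe concrete, here is a minimal Python sketch (not part of the original lesson) that solves A = PB for whichever quantity is missing; the function name solve_percent and its calling convention are illustrative choices, not anything prescribed by the text.

```python
def solve_percent(A=None, P=None, B=None):
    """Solve A = P*B when exactly one of percentage A, percent P (as a decimal), base B is unknown."""
    if A is None:
        return P * B      # type (1): given percent and base, find the percentage
    if B is None:
        return A / P      # type (2): given percent and percentage, find the base
    if P is None:
        return A / B      # type (3): given percentage and base, find the percent (as a decimal)
    raise ValueError("leave exactly one of A, P, B unspecified")

# Example: what is 40% of 75?  Convert 40% to 0.40 first.
print(solve_percent(P=0.40, B=75))   # 30.0
```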
Translate the following statement into an equation: 12 is 40% of what number?
Solution: Clearly 40% is the percent P. Compare "12 is 40% of what number" with "percentage A is P percent of the base B" and it is pretty clear that in this problem the percentage is 12 and the unknown quantity is the base.
This yields the equation 12 = (40%)(B) and converting percent to a decimal gives the equation 12 = 0.40B
Translate the following statement into an equation: 99 is what percent of 200?
Solution: Clearly the missing part is the percent. Compare "99 is what percent of 200" with "percentage A is P percent of the base B" and it is pretty clear that in this problem 99 is the percentage and 200 is the base.
This yields the equation 99 = P(200)
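As a quick numerical check of the two translations above (a small sketch, not part of the original solutions):

```python
# 12 = 0.40 * B  ->  divide both sides by 0.40
B = 12 / 0.40
print(B)          # 30.0, so 12 is 40% of 30

# 99 = P * 200   ->  divide both sides by 200
P = 99 / 200
print(P)          # 0.495, which written as a percent is 49.5%
```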
17: Change the following to decimals:
a. 35% means 35 per 100 which may be written as the ratio 35/100 from which we get the decimal 0.35.
Therefore 35% = 0.35 (or you can just move the decimal point two places to the left -- that's what division by 100 does)
b. 3.5% means 3.5 per hundred which may be written as the ratio 3.5/100 from which we get the decimal 0.035
Therefore 3.5% = 0.035 (or you can just move the decimal point two places to the left -- that's what division by 100 does)
c. 350% means 350 per hundred which may be written as the ratio 350/100 from which we get the decimal 3.50
Therefore 350% = 3.5 (or you can just move the decimal point two places to the left -- that's what division by 100 does)
d. 1/2 % is best understood by changing 1/2% to 0.5% which means 0.5 per hundred which may be written as the ratio .5/100
from which we get the decimal 0.005
Therefore 1/2 % = 0.005 (or you can just move the decimal point two places to the left -- that's what division by 100 does)
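The conversions in parts a through d can also be checked with a one-line helper; this is only an illustrative sketch, and the function name is not from the text.

```python
def percent_to_decimal(p):
    """Percent means 'per 100', so converting to a decimal is just division by 100."""
    return p / 100

print(percent_to_decimal(35))    # 0.35
print(percent_to_decimal(3.5))   # 0.035
print(percent_to_decimal(350))   # 3.5
print(percent_to_decimal(0.5))   # 0.005  (1/2 % rewritten as 0.5%)
```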
18. Change each decimal to a percent
An existing 1000-foot water line had to be extended to a length of 1525 feet to reach a new restroom facility. Describe a quantity in two ways. Write the equation.
The new length is 1525 feet.
The new length is also the old length plus the extension.
Let x be the length of the extension, then the new length is 1000 + x.
We now have the new length expressed in two ways and since these two expressions represent the same thing they must be equal.
Therefore 1000 + x = 1525.
Because of overgrazing, state agriculture officials determined that the 4500
head of cattle currently on the ranch had to be reduced to 2750.
The current population is 4500.
The future population is 2750.
The future population is also the current population minus the reduction.
We don't know the reduction. Let x represent the reduction. Then we can express the future population as 4500 - x.
We now have future population expressed in two ways and since these two expressions represent the same quantity, they must be equal.
Therefore 4500 - x = 2750.
A length of gold chain, cut into 12-inch-long pieces, makes five bracelets.
The length of the chain is unknown. Let it be represented by the variable x.
A bracelet is 12 inches long. The total length of chain produces five bracelets.
The total length of chain is therefore (5)(12) = 60 inches.
We now have the total length of chain represented as 60 and as x.
Therefore x = 60.
The 24 ounces of walnuts used to make a fruitcake were twice what
was called for in the recipe.
Solution: The amount called for in the recipe is unknown. Let it be represented by the variable x.
The amount used was 24 ounces, but it was also twice the amount called for or 2x.
Therefore 24 = 2x.
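Each of the word problems above reduces to a one-variable linear equation, so the answers can be verified by ordinary arithmetic; the following sketch is illustrative only.

```python
# 1000 + x = 1525  ->  subtract 1000 from both sides
print(1525 - 1000)   # 525 feet of extension

# 4500 - x = 2750  ->  subtract 2750 from 4500
print(4500 - 2750)   # a reduction of 1750 head of cattle

# x = (5)(12)      ->  total length of chain
print(5 * 12)        # 60 inches

# 24 = 2x          ->  divide both sides by 2
print(24 / 2)        # 12.0 ounces called for in the recipe
```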
The following percent problems are presented in an unconventional manner. Diagrams are used to help relate the statement of the problem with the fundamental formula A = PB by which ALL percent problems are solved.
Some of the same percent problems are shown below in a more conventional manner.
A neutron star is a type of stellar remnant that can result from the gravitational collapse of a massive star during a Type II, Type Ib or Type Ic supernova event. Such stars are composed almost entirely of neutrons, which are subatomic particles without net electrical charge and with slightly larger mass than protons. Neutron stars are very hot and are supported against further collapse by quantum degeneracy pressure due to the Pauli exclusion principle. This principle states that no two neutrons (or any other fermionic particles) can occupy the same place and quantum state simultaneously.
A typical neutron star has a mass between about 1.4 and 3.2 solar masses (see Chandrasekhar Limit), with a corresponding radius of about 12 km if the Akmal–Pandharipande–Ravenhall equation of state (APR EOS) is used. In contrast, the Sun's radius is about 60,000 times that. Neutron stars have overall densities predicted by the APR EOS of 3.7×10¹⁷ to 5.9×10¹⁷ kg/m³ (2.6×10¹⁴ to 4.1×10¹⁴ times the density of the Sun), which compares with the approximate density of an atomic nucleus of 3×10¹⁷ kg/m³. The neutron star's density varies from below 1×10⁹ kg/m³ in the crust, increasing with depth to above 6×10¹⁷ or 8×10¹⁷ kg/m³ deeper inside (denser than an atomic nucleus). This density is approximately equivalent to the mass of a Boeing 747 compressed to the size of a small grain of sand.
In general, compact stars of less than 1.44 solar masses – the Chandrasekhar limit – are white dwarfs, and above 2 to 3 solar masses (the Tolman–Oppenheimer–Volkoff limit), a quark star might be created; however, this is uncertain. Gravitational collapse will usually occur on any compact star between 10 and 25 solar masses and produce a black hole. Some neutron stars rotate very rapidly and emit beams of electromagnetic radiation as pulsars.
As the core of a massive star is compressed during a supernova, and collapses into a neutron star, it retains most of its angular momentum. Since it has only a tiny fraction of its parent's radius (and therefore its moment of inertia is sharply reduced), a neutron star is formed with very high rotation speed, and then gradually slows down. Neutron stars are known to have rotation periods from about 1.4 ms to 30 seconds. The neutron star's density also gives it very high surface gravity, up to 7×10¹² m/s² with typical values of a few ×10¹² m/s² (more than 10¹¹ times that of Earth). One measure of such immense gravity is the fact that neutron stars have an escape velocity of around 100,000 km/s, about a third of the speed of light. Matter falling onto the surface of a neutron star would be accelerated to tremendous speed by the star's gravity. The force of impact would likely destroy the object's component atoms, rendering all its matter identical, in most respects, to the rest of the star.
The gravitational field at the star's surface is about 2×10¹¹ times stronger than on Earth. Such a strong gravitational field acts as a gravitational lens and bends the radiation emitted by the star such that parts of the normally invisible rear surface become visible.
A fraction of the mass of a star that collapses to form a neutron star is released in the supernova explosion from which it forms (from the law of mass-energy equivalence, E = mc2). The energy comes from the gravitational binding energy of a neutron star.
Neutron star relativistic equations of state provided by Jim Lattimer include a graph of radius vs. mass for various models. The most likely radii for a given neutron star mass are bracketed by models AP4 (smallest radius) and MS2 (largest radius). BE is the ratio of gravitational binding energy mass equivalent to observed neutron star gravitational mass of "M" kilograms with radius "R" meters,

BE = 0.60β / (1 − β/2),  where β = GM / (Rc²)

Given current values

G = 6.6742×10⁻¹¹ m³/(kg·s²), c² = 8.98755×10¹⁶ m²/s², and M☉ = 1.9891×10³⁰ kg,

and star masses "M" commonly reported as multiples of one solar mass, Mₓ = M/M☉,

then the relativistic fractional binding energy of a neutron star is

BE = 885.975 Mₓ / (R − 738.313 Mₓ)
A two-solar-mass neutron star would not be more compact than 10,970 meters radius (AP4 model). Its mass fraction gravitational binding energy would then be 0.187, −18.7% (exothermic). This is not near 0.6/2 = 0.3, −30%.
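As a numerical cross-check of the 0.187 figure quoted above, the short Python sketch below evaluates the fractional binding energy; the physical constants are standard rounded values chosen for illustration, not numbers taken from the article.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

M = 2.0 * M_sun    # a two-solar-mass neutron star
R = 10970.0        # AP4-model radius, meters

beta = G * M / (R * c**2)            # compactness parameter GM/(Rc^2)
BE = 0.60 * beta / (1 - beta / 2)    # fractional gravitational binding energy
print(round(BE, 3))                  # 0.187, i.e. about 18.7%
```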
A neutron star is so dense that one teaspoon (5 milliliters) of its material would have a mass over 5.5×10¹² kg, about 900 times the mass of the Great Pyramid of Giza. Hence, the gravitational force of a typical neutron star is such that if an object were to fall from a height of one meter, it would only take one microsecond to hit the surface of the neutron star, and would do so at around 2000 kilometers per second, or 7.2 million kilometers per hour.
The temperature inside a newly formed neutron star is from around 10¹¹ to 10¹² kelvin. However, the huge number of neutrinos it emits carry away so much energy that the temperature falls within a few years to around 10⁶ kelvin. Even at 1 million kelvin, most of the light generated by a neutron star is in X-rays. In visible light, neutron stars probably radiate approximately the same energy in all parts of visible spectrum, and therefore appear white.
The pressure increases from 3×10³³ to 1.6×10³⁵ Pa from the inner crust to the center.
The equation of state for a neutron star is still not known. It is assumed that it differs significantly from that of a white dwarf, whose EOS is that of a degenerate gas which can be described in close agreement with special relativity. However, with a neutron star the increased effects of general relativity can no longer be ignored. Several EOS have been proposed (FPS, UU, APR, L, SLy, and others) and current research is still attempting to constrain the theories to make predictions of neutron star matter. This means that the relation between density and mass is not fully known, and this causes uncertainties in radius estimates. For example, a 1.5 solar mass neutron star could have a radius of 10.7, 11.1, 12.1 or 15.1 kilometres (for EOS FPS, UU, APR or L respectively).
Current understanding of the structure of neutron stars is defined by existing mathematical models, but it might be possible to infer through studies of neutron-star oscillations. Similar to asteroseismology for ordinary stars, the inner structure might be derived by analyzing observed frequency spectra of stellar oscillations.
On the basis of current models, the matter at the surface of a neutron star is composed of ordinary atomic nuclei crushed into a solid lattice with a sea of electrons flowing through the gaps between them. It is possible that the nuclei at the surface are iron, due to iron's high binding energy per nucleon. It is also possible that heavy element cores, such as iron, simply sink beneath the surface, leaving only light nuclei like helium and hydrogen cores. If the surface temperature exceeds 10⁶ kelvin (as in the case of a young pulsar), the surface should be fluid instead of the solid phase observed in cooler neutron stars (temperature <10⁶ kelvins).
The "atmosphere" of the star is hypothesized to be at most several micrometers thick, and its dynamic is fully controlled by the star's magnetic field. Below the atmosphere one encounters a solid "crust". This crust is extremely hard and very smooth (with maximum surface irregularities of ~5 mm), because of the extreme gravitational field.
Proceeding inward, one encounters nuclei with ever increasing numbers of neutrons; such nuclei would decay quickly on Earth, but are kept stable by tremendous pressures. As this process continues at increasing depths, neutron drip becomes overwhelming, and the concentration of free neutrons increases rapidly. In this region, there are nuclei, free electrons, and free neutrons. The nuclei become increasingly small (gravity and pressure overwhelming the strong force) until the core is reached, by definition the point where they disappear altogether.
The composition of the superdense matter in the core remains uncertain. One model describes the core as superfluid neutron-degenerate matter (mostly neutrons, with some protons and electrons). More exotic forms of matter are possible, including degenerate strange matter (containing strange quarks in addition to up and down quarks), matter containing high-energy pions and kaons in addition to neutrons, or ultra-dense quark-degenerate matter.
History of discoveries
In 1934, Walter Baade and Fritz Zwicky proposed the existence of the neutron star, only a year after the discovery of the neutron by Sir James Chadwick. In seeking an explanation for the origin of a supernova, they proposed that the neutron star is formed in a supernova. Supernovae are dying stars that suddenly appear in the sky, whose luminosity in visible light can outshine an entire galaxy for days to weeks. Baade and Zwicky correctly proposed at that time that the release of the gravitational binding energy of the neutron stars powers the supernova: "In the supernova process, mass in bulk is annihilated".
In 1965, Antony Hewish and Samuel Okoye discovered "an unusual source of high radio brightness temperature in the Crab Nebula". This source turned out to be the Crab Nebula neutron star that resulted from the great supernova of 1054.
In 1967, Jocelyn Bell and Antony Hewish discovered regular radio pulses from CP 1919. This pulsar was later interpreted as an isolated, rotating neutron star. The energy source of the pulsar is the rotational energy of the neutron star. The majority of known neutron stars (about 2000, as of 2010) have been discovered as pulsars, emitting regular radio pulses.
In 1971, Riccardo Giacconi, Herbert Gursky, Ed Kellogg, R. Levinson, E. Schreier, and H. Tananbaum discovered 4.8 second pulsations in an X-ray source in the constellation Centaurus, Cen X-3. They interpreted this as resulting from a rotating hot neutron star. The energy source is gravitational and results from a rain of gas falling onto the surface of the neutron star from a companion star or the interstellar medium.
In 1974, Joseph Taylor and Russell Hulse discovered the first binary pulsar, PSR B1913+16, which consists of two neutron stars (one seen as a pulsar) orbiting around their center of mass. Einstein's general theory of relativity predicts that massive objects in short binary orbits should emit gravitational waves, and thus that their orbit should decay with time. This was indeed observed, precisely as general relativity predicts, and in 1993, Taylor and Hulse were awarded the Nobel Prize in Physics for this discovery.
In 1982, Don Backer and colleagues discovered the first millisecond pulsar, PSR B1937+21. This object spins 642 times per second, a value that placed fundamental constraints on the mass and radius of neutron stars. Many millisecond pulsars were later discovered, but PSR B1937+21 remained the fastest-spinning known pulsar for 24 years, until PSR J1748-2446ad was discovered.
In 2003, Marta Burgay and colleagues discovered the first double neutron star system where both components are detectable as pulsars, PSR J0737-3039. The discovery of this system allows a total of 5 different tests of general relativity, some of these with unprecedented precision.
In 2010, Paul Demorest and colleagues measured the mass of the millisecond pulsar PSR J1614–2230 to be 1.97±0.04 solar masses, using Shapiro delay. This was substantially higher than any previously measured neutron star mass (1.67 solar masses, see PSR J1903+0327), and places strong constraints on the interior composition of neutron stars.
In 2013, John Antoniadis and colleagues measured the mass of PSR J0348+0432 to be 2.01±0.04 solar masses, using white dwarf spectroscopy. This confirmed the existence of such massive stars using a different method. Furthermore, this allowed, for the first time, a test of general relativity using such a massive neutron star.
Neutron stars rotate extremely rapidly after their creation due to the conservation of angular momentum; like spinning ice skaters pulling in their arms, the slow rotation of the original star's core speeds up as it shrinks. A newborn neutron star can rotate several times a second; sometimes, the neutron star absorbs orbiting matter from a companion star, increasing the rotation to several hundred times per second, reshaping the neutron star into an oblate spheroid.
Over time, neutron stars slow down because their rotating magnetic fields radiate energy; older neutron stars may take several seconds for each revolution.
The rate at which a neutron star slows its rotation is usually constant and very small: the observed rates of decline are between 10⁻¹⁰ and 10⁻²¹ seconds for each rotation. Therefore, for a typical slow down rate of 10⁻¹⁵ seconds per rotation, a neutron star now rotating in 1 second will rotate in 1.000003 seconds after a century, or 1.03 seconds after 1 million years.
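The slow-down arithmetic above can be reproduced with a rough calculation; the sketch below assumes the period change is small enough over the interval that the rotation count can be computed from the initial period.

```python
SECONDS_PER_YEAR = 3.156e7   # approximate length of a year in seconds

def period_after(initial_period_s, slowdown_per_rotation_s, years):
    """Approximate spin period after a given time, for a small cumulative change."""
    rotations = years * SECONDS_PER_YEAR / initial_period_s
    return initial_period_s + rotations * slowdown_per_rotation_s

print(period_after(1.0, 1e-15, 100))        # ~1.000003 s after a century
print(period_after(1.0, 1e-15, 1_000_000))  # ~1.03 s after a million years
```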
Sometimes a neutron star will spin up or undergo a glitch, a sudden small increase of its rotation speed. Glitches are thought to be the effect of a starquake — as the rotation of the star slows down, the shape becomes more spherical. Due to the stiffness of the "neutron" crust, this happens as discrete events as the crust ruptures, similar to tectonic earthquakes. After the starquake, the star will have a smaller equatorial radius, and since angular momentum is conserved, rotational speed increases. Recent work, however, suggests that a starquake would not release sufficient energy for a neutron star glitch; it has been suggested that glitches may instead be caused by transitions of vortices in the superfluid core of the star from one metastable energy state to a lower one.
Neutron stars have been observed to "pulse" radio and X-ray emissions, which are believed to be caused by particle acceleration near the magnetic poles, which need not be aligned with the rotation axis of the star. Though the mechanisms are not yet entirely understood, these particles produce coherent beams of radio emission. External viewers see these beams as pulses of radiation whenever the magnetic pole sweeps past the line of sight. The pulses come at the same rate as the rotation of the neutron star, and thus, appear periodic. Neutron stars which emit such pulses are called pulsars.
The most rapidly rotating neutron star currently known, PSR J1748-2446ad, rotates at 716 rotations per second. A recent paper reported the detection of an X-ray burst oscillation (an indirect measure of spin) at 1122 Hz from the neutron star XTE J1739-285. However, at present, this signal has only been seen once, and should be regarded as tentative until confirmed in another burst from this star.
Population and distances
At present, there are about 2000 known neutron stars in the Milky Way and the Magellanic Clouds, the majority of which have been detected as radio pulsars. The population of neutron stars is concentrated along the disk of the Milky Way, although the spread perpendicular to the disk is fairly large. This spread is due to the asymmetry of the supernova explosion process, which can impart high speeds (400 km/s) to the newly created neutron star.
Some of the closest neutron stars are RX J1856.5-3754 about 400 light years away and PSR J0108-1431 at about 424 light years. Another nearby neutron star that was detected transiting the backdrop of the constellation Ursa Minor has been catalogued as 1RXS J141256.0+792204. This rapidly moving object, nicknamed "Calvera" by its Canadian and American discoverers, was discovered using the ROSAT/Bright Source Catalog. Initial measurements placed its distance from Earth at 200 to 1,000 light years away, with later claims at about 450 light-years.
Binary neutron stars
About 5% of all known neutron stars are members of a binary system. The formation and evolution scenario of binary neutron stars is a rather exotic and complicated process. The companion stars may be either ordinary stars, white dwarfs or other neutron stars. According to modern theories of binary evolution it is expected that neutron stars also exist in binary systems with black hole companions. Such binaries are expected to be prime sources for emitting gravitational waves. Neutron stars in binary systems often emit X-rays, which are caused by the heating of material (gas) accreted from the companion star. Material from the outer layers of a (bloated) companion star is sucked towards the neutron star as a result of its very strong gravitational field. As a result of this process, binary neutron stars may also coalesce into black holes if the accretion of mass takes place under extreme conditions.
- Neutron star
- Protoneutron star (PNS), theorized.
- Radio-quiet neutron stars
- Radio-loud neutron star
- Single pulsars–general term for neutron stars that emit directed pulses of radiation towards us at regular intervals (due to their strong magnetic fields).
- Binary pulsars
- Low-mass X-ray binaries (LMXB)
- Intermediate-mass X-ray binaries (IMXB)
- High-mass X-ray binaries (HMXB)
- Accretion-powered pulsar ("X-ray pulsar")
- Exotic star
- Quark star–currently a hypothetical type of neutron star composed of quark matter, or strange matter. As of 2008, there are three candidates.
- Electroweak star–currently a hypothetical type of extremely heavy neutron star, in which the quarks are converted to leptons through the electroweak force, but the gravitational collapse of the star is prevented by radiation pressure. As of 2010, there is no evidence for their existence.
- Preon star–currently a hypothetical type of neutron star composed of preon matter. As of 2008, there is no evidence for the existence of preons.
Giant nucleus
A neutron star has some of the properties of an atomic nucleus, including density and being composed of nucleons. In popular scientific writing, neutron stars are therefore sometimes described as giant nuclei. However, in other respects, neutron stars and atomic nuclei are quite different. In particular, a nucleus is held together by the strong interaction, whereas a neutron star is held together by gravity. It is generally more useful to consider such objects as stars.
Examples of neutron stars
- PSR J0108-1431 – closest neutron star
- LGM-1 – the first recognized radio-pulsar
- PSR B1257+12 – the first neutron star discovered with planets (a millisecond pulsar)
- SWIFT J1756.9-2508 – a millisecond pulsar with a stellar-type companion with planetary range mass (below brown dwarf)
- PSR B1509-58 – source of the "Hand of God" photo shot by the Chandra X-ray Observatory.
- Bulent Kiziltan (2011). Reassessing the Fundamentals: On the Evolution, Ages and Masses of Neutron Stars. Universal-Publishers. ISBN 1-61233-765-1.
- Bulent Kiziltan; Athanasios Kottas; Stephen E. Thorsett (2010). "The Neutron Star Mass Distribution". arXiv:1011.4291 [astro-ph.GA].
- "Nasa Ask an Astrophysist: Maximum Mass of a Neutron Star".
- Paweł Haensel, A Y Potekhin, D G Yakovlev (2007). Neutron Stars. Springer. ISBN 0-387-33543-9.
- A neutron star's density increases as its mass increases, and, for most equations of state (EOS), its radius decreases non-linearly. For example, EOS radius predictions for a 1.35 M☉ star are: FPS 10.8 km, UU 11.1 km, APR 12.1 km, and L 14.9 km. For a more massive 2.1 M☉ star, radius predictions are: FPS undefined, UU 10.5 km, APR 11.8 km, and L 15.1 km. (NASA mass radius graph)
- 3.7×10¹⁷ kg/m³ derives from mass 2.68×10³⁰ kg / volume of star of radius 12 km; 5.9×10¹⁷ kg/m³ derives from mass 4.2×10³⁰ kg per volume of star radius 11.9 km
- "Calculating a Neutron Star's Density". Retrieved 2006-03-11. NB 3 × 1017 kg/m3 is 3×1014 g/cm3
- "Introduction to neutron stars". Retrieved 2007-11-11.
- , a ten stellar mass star will collapse into a black hole.
- Zahn, Corvin (1990-10-09). "Tempolimit Lichtgeschwindigkeit" (in German). Retrieved 2009-10-09. "Due to gravitational light deflection, more than half of the surface is visible. Mass of the neutron star: 1, radius of the neutron star: 4, ... in dimensionless units (c, G = 1)" (translated from the German)
- Neutron Star Masses and Radii, p. 9/20, bottom
- J. M. Lattimer and M. Prakash, "Neutron Star Structure and the Equation of State" Astrophysical J. 550(1) 426 (2001); http://arxiv.org/abs/astro-ph/0002232
- Measurement of Newton's Constant Using a Torsion Balance with Angular Acceleration Feedback , Phys. Rev. Lett. 85(14) 2869 (2000)
- The average density of material in a neutron star of radius 10 km is 1.1×10¹² kg/cm³. Therefore, 5 ml of such material is 5.5×10¹² kg, or 5 500 000 000 metric tons. This is about 15 times the total mass of the human world population. Alternatively, 5 ml from a neutron star of radius 20 km (average density 8.35×10¹⁰ kg/cm³) has a mass of about 400 million metric tons, or about the mass of all humans.
- Miscellaneous Facts
- Neutron degeneracy pressure (Archive). Physics Forums. Retrieved on 2011-10-09.
- NASA. Neutron Star Equation of State Science Retrieved 2011-09-26
- V. S. Beskin (1999). "Radiopulsars". УФН. T.169, №11, p.1173-1174
- neutron star
- Baade, Walter and Zwicky, Fritz (1934). "Remarks on Super-Novae and Cosmic Rays". Phys. Rev. 46 (1): 76–77. Bibcode:1934PhRv...46...76B. doi:10.1103/PhysRev.46.76.2.
- Even before the discovery of neutron, in 1931, neutron stars were anticipated by Lev Landau, who wrote about stars where "atomic nuclei come in close contact, forming one gigantic nucleus" (published in 1932: Landau L.D. "On the theory of stars". Phys. Z. Sowjetunion 1: 285–288.). However, the widespread opinion that Landau predicted neutron stars proves to be wrong: for details, see P. Haensel, A. Y. Potekhin, & D. G. Yakovlev (2007). Neutron Stars 1: Equation of State and Structure (New York: Springer), page 2 http://adsabs.harvard.edu/abs/2007ASSL..326.....H
- Chadwick, James (1932). "On the possible existence of a neutron". Nature 129 (3252): 312. Bibcode:1932Natur.129Q.312C. doi:10.1038/129312a0.
- Hewish and Okoye; Okoye, S. E. (1965). "Evidence of an unusual source of high radio brightness temperature in the Crab Nebula". Nature 207 (4992): 59. Bibcode:1965Natur.207...59H. doi:10.1038/207059a0.
- Shklovsky, I.S. (April 1967). "On the Nature of the Source of X-Ray Emission of SCO XR-1". Astrophys. J. 148 (1): L1–L4. Bibcode:1967ApJ...148L...1S. doi:10.1086/180001
- Demorest, PB; Pennucci, T; Ransom, SM; Roberts, MS; Hessels, JW (2010). "A two-solar-mass neutron star measured using Shapiro delay". Nature 467 (7319): 1081–1083. arXiv:1010.5788. Bibcode:2010Natur.467.1081D. doi:10.1038/nature09466. PMID 20981094.
- Antoniadis, J (2012). "A Massive Pulsar in a Compact Relativistic Binary". Science 340 (6131). arXiv:1304.6875. Bibcode:2013Sci...340..448A. doi:10.1126/science.1233232.
- Alpar, M Ali (January 1, 1998). "Pulsars, glitches and superfluids". Physicsworld.com.
- [astro-ph/0601337] A Radio Pulsar Spinning at 716 Hz
- University of Chicago Press – Millisecond Variability from XTE J1739285 – 10.1086/513270
- Posselt, B.; Neuhäuser, R.; Haberl, F. (March 2009). "Searching for substellar companions of young isolated neutron stars". Astronomy and Astrophysics 496 (2): 533–545. arXiv:0811.0398. Bibcode:2009A&A...496..533P. doi:10.1051/0004-6361/200810156.
- Tauris & van den Heuvel (2006), in Compact Stellar X-ray Sources. Eds. Lewin and van der Klis, Cambridge University Press http://adsabs.harvard.edu/abs/2006csxs.book..623T
- Compact Stellar X-ray Sources (2006). Eds. Lewin and van der Klis, Cambridge University
- Neutrino-Driven Protoneutron Star Winds, Todd A. Thompson.
- Nakamura, T. (1989). "Binary Sub-Millisecond Pulsar and Rotating Core Collapse Model for SN1987A". Progress of Theoretical Physics 81 (5): 1006. Bibcode:1989PThPh..81.1006N. doi:10.1143/PTP.81.1006.
- "ASTROPHYSICS: ON OBSERVED PULSARS". scienceweek.com. Retrieved 6 August 2004.
- Norman K. Glendenning, R. Kippenhahn, I. Appenzeller, G. Borner, M. Harwit (2000). Compact Stars (2nd ed.).
- Kaaret; Prieskorn; in 't Zand; Brandt; Lund; Mereghetti; Gotz; Kuulkers et al. (2006). "Evidence for 1122 Hz X-Ray Burst Oscillations from the Neutron-Star X-Ray Transient XTE J1739-285". The Astrophysical Journal 657 (2): L97. arXiv:astro-ph/0611716. Bibcode:2007ApJ...657L..97K. doi:10.1086/513270.
- Introduction to neutron stars
- Neutron Stars for Undergraduates and its Errata
- NASA on pulsars
- "NASA Sees Hidden Structure Of Neutron Star In Starquake". SpaceDaily.com. April 26, 2006
- "Mysterious X-ray sources may be lone neutron stars". New Scientist.
- "Massive neutron star rules out exotic matter". New Scientist. According to a new analysis, exotic states of matter such as free quarks or BECs do not arise inside neutron stars.
- "Neutron star clocked at mind-boggling velocity". New Scientist. A neutron star has been clocked traveling at more than 1500 kilometers per second. | http://en.wikipedia.org/wiki/Neutron_star | 13 |
A pendulum is a weight suspended from a pivot so that it can swing freely. When a pendulum is displaced sideways from its resting equilibrium position, it is subject to a restoring force due to gravity that will accelerate it back toward the equilibrium position. When released, the restoring force combined with the pendulum's mass causes it to oscillate about the equilibrium position, swinging back and forth. The time for one complete cycle, a left swing and a right swing, is called the period. A pendulum swings with a specific period which depends (mainly) on its length.
From its discovery around 1602 by Galileo Galilei the regular motion of pendulums was used for timekeeping, and was the world's most accurate timekeeping technology until the 1930s. Pendulums are used to regulate pendulum clocks, and are used in scientific instruments such as accelerometers and seismometers. Historically they were used as gravimeters to measure the acceleration of gravity in geophysical surveys, and even as a standard of length. The word 'pendulum' is new Latin, from the Latin pendulus, meaning 'hanging'.
The simple gravity pendulum is an idealized mathematical model of a pendulum. This is a weight (or bob) on the end of a massless cord suspended from a pivot, without friction. When given an initial push, it will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines.
The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, θ0, called the amplitude. It is independent of the mass of the bob. If the amplitude is limited to small swings, the period T of a simple pendulum, the time taken for a complete cycle, is:
T = 2π√(L/g)    (1)

where L is the length of the pendulum and g is the local acceleration of gravity.
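As a quick illustration of equation (1), the following Python sketch evaluates the small-angle period of a one-metre pendulum under a typical value of g (the numbers are chosen for illustration):

```python
import math

def pendulum_period(L, g=9.81):
    """Small-angle period of a simple gravity pendulum, T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

print(round(pendulum_period(1.0), 3))   # ~2.006 s for a 1 m pendulum
```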
For small swings the period of swing is approximately the same for different size swings: that is, the period is independent of amplitude. This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time.
For larger amplitudes, the period increases gradually with amplitude so it is longer than given by equation (1). For example, at an amplitude of θ0 = 23° it is 1% larger than given by (1). The period increases asymptotically (to infinity) as θ0 approaches 180°, because the value θ0 = 180° is an unstable equilibrium point for the pendulum. The true period of an ideal simple gravity pendulum can be written in several different forms (see Pendulum (mathematics)), one example being the infinite series:

T = 2π√(L/g) [1 + (1/16)θ0² + (11/3072)θ0⁴ + …]
The difference between this true period and the period for small swings (1) above is called the circular error. In the case of a longcase clock whose pendulum is about one metre in length and whose amplitude is ±0.1 radians, the θ0² term adds a correction to equation (1) that is equivalent to 54 seconds per day and the θ0⁴ term a correction equivalent to a further 0.03 seconds per day.
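A minimal sketch of the circular-error estimate, using just the first two correction terms of the series above (helper name and values are illustrative):

```python
import math

def period_with_amplitude(L, theta0, g=9.81):
    """Period including the theta0^2 and theta0^4 correction terms."""
    T0 = 2 * math.pi * math.sqrt(L / g)
    return T0 * (1 + theta0**2 / 16 + 11 * theta0**4 / 3072)

theta = math.radians(23)
T0 = 2 * math.pi * math.sqrt(1.0 / 9.81)
excess = period_with_amplitude(1.0, theta) / T0 - 1
print(round(100 * excess, 2))   # ~1.02, i.e. about 1% longer at 23 degrees
```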
For real pendulums, corrections to the period may be needed to take into account the presence of air, the mass of the string, the size and shape of the bob and how it is attached to the string, flexibility and stretching of the string, motion of the support, and local gravitational gradients.
The length L of the ideal simple pendulum above, used for calculating the period, is the distance from the pivot point to the center of mass of the bob. A pendulum consisting of any swinging rigid body, which is free to rotate about a fixed horizontal axis is called a compound pendulum or physical pendulum. For these pendulums the appropriate equivalent length is the distance from the pivot point to a point in the pendulum called the center of oscillation. This is located under the center of mass, at a distance called the radius of gyration, that depends on the mass distribution along the pendulum. However, for any pendulum in which most of the mass is concentrated in the bob, the center of oscillation is close to the center of mass.
Using the parallel axis theorem, the radius of gyration L of a rigid pendulum can be shown to be

L = I / (mR)

Substituting this into (1) above, the period T of a rigid-body compound pendulum for small angles is given by

T = 2π√(I / (mgR))

where I is the moment of inertia of the pendulum about the pivot point, m is the mass of the pendulum, and R is the distance between the pivot point and the center of mass.
For example, for a pendulum made of a rigid uniform rod of length L pivoted at its end, I = (1/3)mL². The center of mass is located in the center of the rod, so R = L/2. Substituting these values into the above equation gives T = 2π√(2L/3g). This shows that a rigid rod pendulum has the same period as a simple pendulum of 2/3 its length.
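A quick numeric check of the rod-pendulum result, with illustrative values only:

```python
import math

g = 9.81
L = 1.0                  # rod length, meters
m = 1.0                  # mass is arbitrary; it cancels out

I = m * L**2 / 3         # moment of inertia of a uniform rod about one end
R = L / 2                # distance from pivot to center of mass

T_rod = 2 * math.pi * math.sqrt(I / (m * g * R))       # compound-pendulum formula
T_simple = 2 * math.pi * math.sqrt((2 * L / 3) / g)    # simple pendulum of 2/3 the length
print(round(T_rod, 3), round(T_simple, 3))             # both ~1.638 s
```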
Christiaan Huygens proved in 1673 that the pivot point and the center of oscillation are interchangeable. This means if any pendulum is turned upside down and swung from a pivot located at its previous center of oscillation, it will have the same period as before, and the new center of oscillation will be at the old pivot point. In 1817 Henry Kater used this idea to produce a type of reversible pendulum, now known as a Kater pendulum, for improved measurements of the acceleration due to gravity.
One of the earliest known uses of a pendulum was in the 1st. century seismometer device of Han Dynasty Chinese scientist Zhang Heng. Its function was to sway and activate one of a series of levers after being disturbed by the tremor of an earthquake far away. Released by a lever, a small ball would fall out of the urn-shaped device into one of eight metal toad's mouths below, at the eight points of the compass, signifying the direction the earthquake was located.
Many sources claim that the 10th century Egyptian astronomer Ibn Yunus used a pendulum for time measurement, but this was an error that originated in 1684 with the British historian Edward Bernard.
During the Renaissance, large pendulums were used as sources of power for manual reciprocating machines such as saws, bellows, and pumps. Leonardo da Vinci made many drawings of the motion of pendulums, though without realizing its value for timekeeping.
Italian scientist Galileo Galilei was the first to study the properties of pendulums, beginning around 1602. His earliest extant report of his research is contained in a letter to Guido Ubaldo dal Monte, from Padua, dated November 29, 1602. His biographer and student, Vincenzo Viviani, claimed his interest had been sparked around 1582 by the swinging motion of a chandelier in the Pisa cathedral. Galileo discovered the crucial property that makes pendulums useful as timekeepers, called isochronism; the period of the pendulum is approximately independent of the amplitude or width of the swing. He also found that the period is independent of the mass of the bob, and proportional to the square root of the length of the pendulum. He first employed freeswinging pendulums in simple timing applications. A physician friend invented a device which measured a patient's pulse by the length of a pendulum; the pulsilogium. In 1641 Galileo conceived and dictated to his son Vincenzo a design for a pendulum clock; Vincenzo began construction, but had not completed it when he died in 1649. The pendulum was the first harmonic oscillator used by man.
In 1656 the Dutch scientist Christiaan Huygens built the first pendulum clock. This was a great improvement over existing mechanical clocks; their best accuracy was increased from around 15 minutes deviation a day to around 15 seconds a day. Pendulums spread over Europe as existing clocks were retrofitted with them.
The English scientist Robert Hooke studied the conical pendulum around 1666, consisting of a pendulum that is free to swing in two dimensions, with the bob rotating in a circle or ellipse. He used the motions of this device as a model to analyze the orbital motions of the planets. Hooke suggested to Isaac Newton in 1679 that the components of orbital motion consisted of inertial motion along a tangent direction plus an attractive motion in the radial direction. This played a part in Newton's formulation of the law of universal gravitation. Robert Hooke was also responsible for suggesting as early as 1666 that the pendulum could be used to measure the force of gravity.
During his expedition to Cayenne, French Guiana in 1671, Jean Richer found that a pendulum clock was 2 1⁄2 minutes per day slower at Cayenne than at Paris. From this he deduced that the force of gravity was lower at Cayenne. In 1687, Isaac Newton in Principia Mathematica showed that this was because the Earth was not a true sphere but slightly oblate (flattened at the poles) from the effect of centrifugal force due to its rotation, causing gravity to increase with latitude. Portable pendulums began to be taken on voyages to distant lands, as precision gravimeters to measure the acceleration of gravity at different points on Earth, eventually resulting in accurate models of the shape of the Earth.
In 1673, Christiaan Huygens published his theory of the pendulum, Horologium Oscillatorium sive de motu pendulorum. He demonstrated that for an object to descend down a curve under gravity in the same time interval, regardless of the starting point, it must follow a cycloid curve rather than the circular arc of a pendulum. This confirmed the earlier observation by Marin Mersenne that the period of a pendulum does vary with its amplitude, and that Galileo's observation of isochronism was accurate only for small swings. Huygens also solved the issue of how to calculate the period of an arbitrarily shaped pendulum (called a compound pendulum), discovering the center of oscillation, and its interchangeability with the pivot point.
The existing clock movement, the verge escapement, made pendulums swing in very wide arcs of about 100°. Huygens showed this was a source of inaccuracy, causing the period to vary with amplitude changes caused by small unavoidable variations in the clock's drive force. To make its period isochronous, Huygens mounted cycloidal-shaped metal 'cheeks' next to the pivot in his 1673 clock, that constrained the suspension cord and forced the pendulum to follow a cycloid arc. This solution didn't prove as practical as simply limiting the pendulum's swing to small angles of a few degrees. The realization that only small swings were isochronous motivated the development of the anchor escapement around 1670, which reduced the pendulum swing in clocks to 4°–6°.
During the 18th and 19th century, the pendulum clock's role as the most accurate timekeeper motivated much practical research into improving pendulums. It was found that a major source of error was that the pendulum rod expanded and contracted with changes in ambient temperature, changing the period of swing. This was solved with the invention of temperature compensated pendulums, the mercury pendulum in 1721 and the gridiron pendulum in 1726, reducing errors in precision pendulum clocks to a few seconds per week.
The accuracy of gravity measurements made with pendulums was limited by the difficulty of finding the location of their center of oscillation. Huygens had discovered in 1673 that a pendulum has the same period when hung from its center of oscillation as when hung from its pivot, and the distance between the two points was equal to the length of a simple gravity pendulum of the same period. In 1818 British Captain Henry Kater invented the reversible Kater's pendulum which used this principle, making possible very accurate measurements of gravity. For the next century the reversible pendulum was the standard method of measuring absolute gravitational acceleration.
In 1851, Jean Bernard Léon Foucault showed that the plane of oscillation of a pendulum, like a gyroscope, tends to stay constant regardless of the motion of the pivot, and that this could be used to demonstrate the rotation of the Earth. He suspended a pendulum free to swing in two dimensions (later named the Foucault pendulum) from the dome of the Panthéon in Paris. The length of the cord was 67 m (220 ft). Once the pendulum was set in motion, the plane of swing was observed to precess or rotate 360° clockwise in about 32 hours. This was the first demonstration of the Earth's rotation that didn't depend on celestial observations, and a "pendulum mania" broke out, as Foucault pendulums were displayed in many cities and attracted large crowds.
Around 1900 low-thermal-expansion materials began to be used for pendulum rods in the highest precision clocks and other instruments, first invar, a nickel steel alloy, and later fused quartz, which made temperature compensation trivial. Precision pendulums were housed in low pressure tanks, which kept the air pressure constant to prevent changes in the period due to changes in buoyancy of the pendulum due to changing atmospheric pressure. The accuracy of the best pendulum clocks topped out at around a second per year.
The timekeeping accuracy of the pendulum was exceeded by the quartz crystal oscillator, invented in 1921, and quartz clocks, invented in 1927, replaced pendulum clocks as the world's best timekeepers. Pendulum clocks were used as time standards until World War 2, although the French Time Service continued using them in their official time standard ensemble until 1954. Pendulum gravimeters were superseded by "free fall" gravimeters in the 1950s, but pendulum instruments continued to be used into the 1970s.
For 300 years, from its discovery around 1602 until development of the quartz clock in the 1930s, the pendulum was the world's standard for accurate timekeeping. In addition to clock pendulums, freeswinging seconds pendulums were widely used as precision timers in scientific experiments in the 17th and 18th centuries. Pendulums require great mechanical stability: a length change of only 0.02%, 0.2 mm in a grandfather clock pendulum, will cause an error of a minute per week.
Pendulums in clocks (see example at right) are usually made of a weight or bob (b) suspended by a rod of wood or metal (a). To reduce air resistance (which accounts for most of the energy loss in clocks) the bob is traditionally a smooth disk with a lens-shaped cross section, although in antique clocks it often had carvings or decorations specific to the type of clock. In quality clocks the bob is made as heavy as the suspension can support and the movement can drive, since this improves the regulation of the clock (see Accuracy below). A common weight for seconds pendulum bobs is 15 pounds. (6.8 kg). Instead of hanging from a pivot, clock pendulums are usually supported by a short straight spring (d) of flexible metal ribbon. This avoids the friction and 'play' caused by a pivot, and the slight bending force of the spring merely adds to the pendulum's restoring force. A few precision clocks have pivots of 'knife' blades resting on agate plates. The impulses to keep the pendulum swinging are provided by an arm hanging behind the pendulum called the crutch, (e), which ends in a fork, (f) whose prongs embrace the pendulum rod. The crutch is pushed back and forth by the clock's escapement, (g,h).
Each time the pendulum swings through its centre position, it releases one tooth of the escape wheel (g). The force of the clock's mainspring or a driving weight hanging from a pulley, transmitted through the clock's gear train, causes the wheel to turn, and a tooth presses against one of the pallets (h), giving the pendulum a short push. The clock's wheels, geared to the escape wheel, move forward a fixed amount with each pendulum swing, advancing the clock's hands at a steady rate.
The pendulum always has a means of adjusting the period, usually by an adjustment nut (c) under the bob which moves it up or down on the rod. Moving the bob up decreases the pendulum's length, causing the pendulum to swing faster and the clock to gain time. Some precision clocks have a small auxiliary adjustment weight on a threaded shaft on the bob, to allow finer adjustment. Some tower clocks and precision clocks use a tray attached near to the midpoint of the pendulum rod, to which small weights can be added or removed. This effectively shifts the centre of oscillation and allows the rate to be adjusted without stopping the clock.
The pendulum must be suspended from a rigid support. During operation, any elasticity will allow tiny imperceptible swaying motions of the support, which disturbs the clock's period, resulting in error. Pendulum clocks should be attached firmly to a sturdy wall.
The most common pendulum length in quality clocks, which is always used in grandfather clocks, is the seconds pendulum, about 1 metre (39 inches) long. In mantel clocks, half-second pendulums, 25 cm (10 in) long, or shorter, are used. Only a few large tower clocks use longer pendulums, the 1.5 second pendulum, 2.25 m (7 ft) long, or occasionally the two-second pendulum, 4 m (13 ft) as is the case of Big Ben.
The largest source of error in early pendulums was slight changes in length due to thermal expansion and contraction of the pendulum rod with changes in ambient temperature. This was discovered when people noticed that pendulum clocks ran slower in summer, by as much as a minute per week (one of the first was Godefroy Wendelin, as reported by Huygens in 1658). Thermal expansion of pendulum rods was first studied by Jean Picard in 1669. A pendulum with a steel rod will expand by about 11.3 parts per million (ppm) with each degree Celsius increase, causing it to lose about 0.27 seconds per day for every degree Celsius increase in temperature, or 9 seconds per day for a 33 °C (60 °F) change. Wood rods expand less, losing only about 6 seconds per day for a 33 °C (60 °F) change, which is why quality clocks often had wooden pendulum rods. However, care had to be taken to reduce the possibility of errors due to changes in humidity.
The first device to compensate for this error was the mercury pendulum, invented by George Graham in 1721. The liquid metal mercury expands in volume with temperature. In a mercury pendulum, the pendulum's weight (bob) is a container of mercury. With a temperature rise, the pendulum rod gets longer, but the mercury also expands and its surface level rises slightly in the container, moving its centre of mass closer to the pendulum pivot. By using the correct height of mercury in the container these two effects will cancel, leaving the pendulum's centre of mass, and its period, unchanged with temperature. Its main disadvantage was that when the temperature changed, the rod would come to the new temperature quickly but the mass of mercury might take a day or two to reach the new temperature, causing the rate to deviate during that time. To improve thermal accommodation several thin containers were often used, made of metal. Mercury pendulums were the standard used in precision regulator clocks into the 20th century.
The most widely used compensated pendulum was the gridiron pendulum, invented in 1726 by John Harrison. This consists of alternating rods of two different metals, one with lower thermal expansion (CTE), steel, and one with higher thermal expansion, zinc or brass. The rods are connected by a frame, as shown in the drawing above, so that an increase in length of the zinc rods pushes the bob up, shortening the pendulum. With a temperature increase, the low expansion steel rods make the pendulum longer, while the high expansion zinc rods make it shorter. By making the rods of the correct lengths, the greater expansion of the zinc cancels out the expansion of the steel rods which have a greater combined length, and the pendulum stays the same length with temperature.
Zinc-steel gridiron pendulums are made with 5 rods, but the thermal expansion of brass is closer to steel, so brass-steel gridirons usually require 9 rods. Gridiron pendulums adjust to temperature changes faster than mercury pendulums, but scientists found that friction of the rods sliding in their holes in the frame caused gridiron pendulums to adjust in a series of tiny jumps. In high precision clocks this caused the clock's rate to change suddenly with each jump. Later it was found that zinc is subject to creep. For these reasons mercury pendulums were used in the highest precision clocks, but gridirons were used in quality regulator clocks. They became so associated with quality that, to this day, many ordinary clock pendulums have decorative 'fake' gridirons that don't actually have any temperature compensation function.
Around 1900 low thermal expansion materials were developed which, when used as pendulum rods, made elaborate temperature compensation unnecessary. These were only used in a few of the highest precision clocks before the pendulum became obsolete as a time standard. In 1896 Charles Edouard Guillaume invented the nickel steel alloy Invar. This has a CTE of around 0.5 µin/(in·°F), resulting in pendulum temperature errors over 71 °F of only 1.3 seconds per day, and this residual error could be compensated to zero with a few centimeters of aluminium under the pendulum bob (this can be seen in the Riefler clock image above). Invar pendulums were first used in 1898 in the Riefler regulator clock which achieved accuracy of 15 milliseconds per day. Suspension springs of Elinvar were used to eliminate temperature variation of the spring's restoring force on the pendulum. Later fused quartz was used which had even lower CTE. These materials are the choice for modern high accuracy pendulums.
The effect of the surrounding air on a moving pendulum is complex and requires fluid mechanics to calculate precisely, but for most purposes its influence on the period can be accounted for by three effects:
- Buoyancy: by Archimedes' principle the effective weight of the bob is reduced by the buoyancy of the air it displaces, while the mass (inertia) remains the same, reducing the pendulum's acceleration during its swing and increasing the period.
- Added mass: the pendulum carries an amount of air with it as it swings, and the mass of this air increases the inertia of the pendulum, again reducing the acceleration and increasing the period.
- Viscous drag: air resistance slows the pendulum and dissipates energy. This has a negligible effect on the period, but reduces the amplitude and the pendulum's Q factor, requiring a stronger drive force from the clock's mechanism to keep it moving.
So increases in barometric pressure increase a pendulum's period slightly due to the first two effects, by about 0.11 seconds per day per kilopascal (0.37 seconds per day per inch of mercury or 0.015 seconds per day per torr). Researchers using pendulums to measure the acceleration of gravity had to correct the period for the air pressure at the altitude of measurement, computing the equivalent period of a pendulum swinging in vacuum. A pendulum clock was first operated in a constant-pressure tank by Friedrich Tiede in 1865 at the Berlin Observatory, and by 1900 the highest precision clocks were mounted in tanks that were kept at a constant pressure to eliminate changes in atmospheric pressure. Alternatively, in some a small aneroid barometer mechanism attached to the pendulum compensated for this effect.
Pendulums are affected by changes in gravitational acceleration, which varies by as much as 0.5% at different locations on Earth, so pendulum clocks have to be recalibrated after a move. Even moving a pendulum clock to the top of a tall building can cause it to lose measurable time from the reduction in gravity.
The timekeeping elements in all clocks, which include pendulums, balance wheels, the quartz crystals used in quartz watches, and even the vibrating atoms in atomic clocks, are in physics called harmonic oscillators. The reason harmonic oscillators are used in clocks is that they vibrate or oscillate at a specific resonant frequency or period and resist oscillating at other rates. However, the resonant frequency is not infinitely 'sharp'. Around the resonant frequency there is a narrow natural band of frequencies (or periods), called the resonance width or bandwidth, where the harmonic oscillator will oscillate. In a clock, the actual frequency of the pendulum may vary randomly within this bandwidth in response to disturbances, but at frequencies outside this band, the clock will not function at all.
The measure of a harmonic oscillator's resistance to disturbances to its oscillation period is a dimensionless parameter called the Q factor equal to the resonant frequency divided by the bandwidth. The higher the Q, the smaller the bandwidth, and the more constant the frequency or period of the oscillator for a given disturbance. The reciprocal of the Q is roughly proportional to the limiting accuracy achievable by a harmonic oscillator as a time standard.
The Q is related to how long it takes for the oscillations of an oscillator to die out. The Q of a pendulum can be measured by counting the number of oscillations it takes for the amplitude of the pendulum's swing to decay to 1/e = 36.8% of its initial swing, and multiplying by 2π.
In a clock, the pendulum must receive pushes from the clock's movement to keep it swinging, to replace the energy the pendulum loses to friction. These pushes, applied by a mechanism called the escapement, are the main source of disturbance to the pendulum's motion. The Q is equal to 2π times the energy stored in the pendulum, divided by the energy lost to friction during each oscillation period, which is the same as the energy added by the escapement each period. It can be seen that the smaller the fraction of the pendulum's energy that is lost to friction, the less energy needs to be added, the less the disturbance from the escapement, the more 'independent' the pendulum is of the clock's mechanism, and the more constant its period is. The Q of a pendulum is given by:

Q = Mω / Γ
where M is the mass of the bob, ω = 2π/T is the pendulum's radian frequency of oscillation, and Γ is the frictional damping force on the pendulum per unit velocity.
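For illustration only, a tiny sketch evaluating Q = Mω/Γ; the bob mass, period, and damping coefficient below are assumed values, not figures from the text.

```python
import math

M = 5.0          # bob mass, kg (assumed)
T = 2.0          # period of a seconds pendulum, s
Gamma = 2.0e-4   # frictional damping per unit velocity, kg/s (assumed)

omega = 2 * math.pi / T
Q = M * omega / Gamma
print(round(Q))  # ~78,500 with these assumed numbers
```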
ω is fixed by the pendulum's period, and M is limited by the load capacity and rigidity of the suspension. So the Q of clock pendulums is increased by minimizing frictional losses (Γ). Precision pendulums are suspended on low friction pivots consisting of triangular shaped 'knife' edges resting on agate plates. Around 99% of the energy loss in a freeswinging pendulum is due to air friction, so mounting a pendulum in a vacuum tank can increase the Q, and thus the accuracy, by a factor of 100.
The Q of pendulums ranges from several thousand in an ordinary clock to several hundred thousand for precision regulator pendulums swinging in vacuum. A quality home pendulum clock might have a Q of 10,000 and an accuracy of 10 seconds per month. The most accurate commercially produced pendulum clock was the Shortt-Synchronome free pendulum clock, invented in 1921. Its Invar master pendulum swinging in a vacuum tank had a Q of 110,000 and an error rate of around a second per year.
Their Q of 10³–10⁵ is one reason why pendulums are more accurate timekeepers than the balance wheels in watches, with Q around 100–300, but less accurate than the quartz crystals in quartz clocks, with Q of 10⁵–10⁶.
Pendulums (unlike, for example, quartz crystals) have a low enough Q that the disturbance caused by the impulses to keep them moving is generally the limiting factor on their timekeeping accuracy. Therefore the design of the escapement, the mechanism that provides these impulses, has a large effect on the accuracy of a clock pendulum. If the impulses given to the pendulum by the escapement each swing could be exactly identical, the response of the pendulum would be identical, and its period would be constant. However, this is not achievable; unavoidable random fluctuations in the force due to friction of the clock's pallets, lubrication variations, and changes in the torque provided by the clock's power source as it runs down, mean that the force of the impulse applied by the escapement varies.
If these variations in the escapement's force cause changes in the pendulum's width of swing (amplitude), this will cause corresponding slight changes in the period, since (as discussed at top) a pendulum with a finite swing is not quite isochronous. Therefore, the goal of traditional escapement design is to apply the force with the proper profile, and at the correct point in the pendulum's cycle, so force variations have no effect on the pendulum's amplitude. This is called an isochronous escapement.
In 1826 British astronomer George Airy proved what clockmakers had known for centuries; that the disturbing effect of a drive force on the period of a pendulum is smallest if given as a short impulse as the pendulum passes through its bottom equilibrium position. Specifically, he proved that if a pendulum is driven by an impulse that is symmetrical about its bottom equilibrium position, the pendulum's amplitude will be unaffected by changes in the drive force. The most accurate escapements, such as the deadbeat, approximately satisfy this condition.
The presence of the acceleration of gravity g in the periodicity equation (1) for a pendulum means that the local gravitational acceleration of the Earth can be calculated from the period of a pendulum. A pendulum can therefore be used as a gravimeter to measure the local gravity, which varies by over 0.5% across the surface of the Earth. The pendulum in a clock is disturbed by the pushes it receives from the clock movement, so freeswinging pendulums were used, and were the standard instruments of gravimetry up to the 1930s.
The difference between clock pendulums and gravimeter pendulums is that to measure gravity, the pendulum's length as well as its period has to be measured. The period of freeswinging pendulums could be found to great precision by comparing their swing with a precision clock that had been adjusted to keep correct time by the passage of stars overhead. In the early measurements, a weight on a cord was suspended in front of the clock pendulum, and its length adjusted until the two pendulums swung in exact synchronism. Then the length of the cord was measured. From the length and the period, g could be calculated from (1).
The seconds pendulum, a pendulum with a period of two seconds so each swing takes one second, was widely used to measure gravity, because most precision clocks had seconds pendulums. By the late 17th century, the length of the seconds pendulum became the standard measure of the strength of gravitational acceleration at a location. By 1700 its length had been measured with submillimeter accuracy at several cities in Europe. For a seconds pendulum, g is proportional to its length:
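In symbols, g = (2π/T)²L, which for T = 2 s reduces to g = π²L. A minimal check in Python, using roughly 0.994 m as an illustrative seconds-pendulum length (close to the values quoted later in this section):

```python
import math

def g_from_pendulum(length_m, period_s):
    """Invert T = 2*pi*sqrt(L/g) to get g = L * (2*pi/T)**2."""
    return length_m * (2 * math.pi / period_s) ** 2

L = 0.994                          # metres, an approximate seconds-pendulum length
print(g_from_pendulum(L, 2.0))     # ~9.81 m/s^2
print(math.pi ** 2 * L)            # same result via the g = pi^2 * L shortcut
```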
The precision of the early gravity measurements above was limited by the difficulty of measuring the length of the pendulum, L . L was the length of an idealized simple gravity pendulum (described at top), which has all its mass concentrated in a point at the end of the cord. In 1673 Huygens had shown that the period of a real pendulum (called a compound pendulum) was equal to the period of a simple pendulum with a length equal to the distance between the pivot point and a point called the center of oscillation, located under the center of gravity, that depends on the mass distribution along the pendulum. But there was no accurate way of determining the center of oscillation in a real pendulum.
To get around this problem, the early researchers above approximated an ideal simple pendulum as closely as possible by using a metal sphere suspended by a light wire or cord. If the wire was light enough, the center of oscillation was close to the center of gravity of the ball, at its geometric center. This "ball and wire" type of pendulum wasn't very accurate, because it didn't swing as a rigid body, and the elasticity of the wire caused its length to change slightly as the pendulum swung.
However Huygens had also proved that in any pendulum, the pivot point and the center of oscillation were interchangeable. That is, if a pendulum were turned upside down and hung from its center of oscillation, it would have the same period as it did in the previous position, and the old pivot point would be the new center of oscillation.
British physicist and army captain Henry Kater in 1817 realized that Huygens' principle could be used to find the length of a simple pendulum with the same period as a real pendulum. If a pendulum was built with a second adjustable pivot point near the bottom so it could be hung upside down, and the second pivot was adjusted until the periods when hung from both pivots were the same, the second pivot would be at the center of oscillation, and the distance between the two pivots would be the length of a simple pendulum with the same period.
Kater built a reversible pendulum (shown at right) consisting of a brass bar with two opposing pivots made of short triangular "knife" blades (a) near either end. It could be swung from either pivot, with the knife blades supported on agate plates. Rather than make one pivot adjustable, he attached the pivots a meter apart and instead adjusted the periods with a moveable weight on the pendulum rod (b,c). In operation, the pendulum is hung in front of a precision clock, and the period timed, then turned upside down and the period timed again. The weight is adjusted with the adjustment screw until the periods are equal. Then putting this period and the distance between the pivots into equation (1) gives the gravitational acceleration g very accurately.
Kater timed the swing of his pendulum using the "method of coincidences" and measured the distance between the two pivots with a microscope. After applying corrections for the finite amplitude of swing, the buoyancy of the bob, the barometric pressure and altitude, and temperature, he obtained a value of 39.13929 inches for the seconds pendulum at London, in vacuum, at sea level, at 62 °F. The largest variation from the mean of his 12 observations was 0.00028 in., representing a relative precision of gravity measurement of 7×10⁻⁶ (7 mGal or 70 µm/s²). Kater's measurement was used as Britain's official standard of length (see below) from 1824 to 1855.
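As a check on those figures, a short Python calculation (converting Kater's inches with the modern inch, which is close enough for this purpose) reproduces both the implied value of g and the roughly 7-parts-per-million scatter:

```python
import math

length_in = 39.13929      # Kater's seconds-pendulum length at London, in inches
spread_in = 0.00028       # largest deviation from the mean of his 12 observations

length_m = length_in * 0.0254            # inches to metres
g = math.pi ** 2 * length_m              # seconds pendulum: g = pi^2 * L
relative_precision = spread_in / length_in

print(g)                                 # ~9.81 m/s^2
print(relative_precision)                # ~7e-6
print(relative_precision * g * 1e6)      # ~70 micrometres/s^2, i.e. about 7 mGal
```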
Reversible pendulums (known technically as "convertible" pendulums) employing Kater's principle were used for absolute gravity measurements into the 1930s.
The increased accuracy made possible by Kater's pendulum helped make gravimetry a standard part of geodesy. Since the exact location (latitude and longitude) of the 'station' where the gravity measurement was made was necessary, gravity measurements became part of surveying, and pendulums were taken on the great geodetic surveys of the 19th century, particularly the Great Trigonometric Survey of India.
Relative pendulum gravimeters were superseded by the simpler LaCoste zero-length spring gravimeter, invented in 1934 by Lucien LaCoste. Absolute (reversible) pendulum gravimeters were replaced in the 1950s by free fall gravimeters, in which a weight is allowed to fall in a vacuum tank and its acceleration is measured by an optical interferometer.
Because the acceleration of gravity is constant at a given point on Earth, the period of a simple pendulum at a given location depends only on its length. Additionally, gravity varies only slightly at different locations. Almost from the pendulum's discovery until the early 19th century, this property led scientists to suggest using a pendulum of a given period as a standard of length.
Until the 19th century, countries based their systems of length measurement on prototypes, metal bar primary standards, such as the standard yard in Britain kept at the Houses of Parliament, and the standard toise in France, kept at Paris. These were vulnerable to damage or destruction over the years, and because of the difficulty of comparing prototypes, the same unit often had different lengths in distant towns, creating opportunities for fraud. Enlightenment scientists argued for a length standard that was based on some property of nature that could be determined by measurement, creating an indestructible, universal standard. The period of pendulums could be measured very precisely by timing them with clocks that were set by the stars. A pendulum standard amounted to defining the unit of length by the gravitational force of the Earth, for all practical purposes constant, and by the second, which was defined by the rotation rate of the Earth, also constant. The idea was that anyone, anywhere on Earth, could recreate the standard by constructing a pendulum that swung with the defined period and measuring its length.
Virtually all proposals were based on the seconds pendulum, in which each swing (a half period) takes one second, which is about a meter (39 inches) long, because by the late 17th century it had become a standard for measuring gravity (see previous section). By the 18th century its length had been measured with sub-millimeter accuracy at a number of cities in Europe and around the world.
The initial attraction of the pendulum length standard was that it was believed (by early scientists such as Huygens and Wren) that gravity was constant over the Earth's surface, so a given pendulum had the same period at any point on Earth. So the length of the standard pendulum could be measured at any location, and would not be tied to any given nation or region; it would be a truly democratic, worldwide standard. Although Richer found in 1672 that gravity varies at different points on the globe, the idea of a pendulum length standard remained popular, because it was found that gravity only varies with latitude. Gravitational acceleration increases smoothly from the equator to the poles, due to the oblate shape of the Earth. So at any given latitude (east-west line), gravity was constant enough that the length of a seconds pendulum was the same within the measurement capability of the 18th century. So the unit of length could be defined at a given latitude and measured at any point at that latitude. For example, a pendulum standard defined at 45° north latitude, a popular choice, could be measured in parts of France, Italy, Croatia, Serbia, Romania, Russia, Kazakhstan, China, Mongolia, the United States and Canada. In addition, it could be recreated at any location at which the gravitational acceleration had been accurately measured.
By the mid 19th century, increasingly accurate pendulum measurements by Edward Sabine and Thomas Young revealed that gravity, and thus the length of any pendulum standard, varied measurably with local geologic features such as mountains and dense subsurface rocks. So a pendulum length standard had to be defined at a single point on Earth and could only be measured there. This took much of the appeal from the concept, and efforts to adopt pendulum standards were abandoned.
One of the first to suggest defining length with a pendulum was Flemish scientist Isaac Beeckman, who in 1631 recommended making the seconds pendulum "the invariable measure for all people at all times in all places". Marin Mersenne, who first measured the seconds pendulum in 1644, also suggested it. The first official proposal for a pendulum standard was made by the British Royal Society in 1660, advocated by Christiaan Huygens and Ole Rømer, basing it on Mersenne's work, and Huygens in Horologium Oscillatorium proposed a "horary foot" defined as 1/3 of the seconds pendulum. Christopher Wren was another early supporter. The idea of a pendulum standard of length must have been familiar to people as early as 1663, because Samuel Butler satirizes it in Hudibras:
In 1671 Jean Picard proposed a pendulum-defined 'universal foot' in his influential Mesure de la Terre. Gabriel Mouton around 1670 suggested defining the toise either by a seconds pendulum or by a minute of terrestrial degree. A plan for a complete system of units based on the pendulum was advanced in 1675 by Italian polymath Tito Livio Burattini. In France in 1747, geographer Charles Marie de la Condamine proposed defining length by a seconds pendulum at the equator, since at this location a pendulum's swing wouldn't be distorted by the Earth's rotation. British politicians James Steuart (1780) and George Skene Keith were also supporters.
By the end of the 18th century, when many nations were reforming their weight and measure systems, the seconds pendulum was the leading choice for a new definition of length, advocated by prominent scientists in several major nations. In 1790, then US Secretary of State Thomas Jefferson proposed to Congress a comprehensive decimalized US 'metric system' based on the seconds pendulum at 38° North latitude, the mean latitude of the United States. No action was taken on this proposal. In Britain the leading advocate of the pendulum was politician John Riggs Miller. When his efforts to promote a joint British–French–American metric system fell through in 1790, he proposed a British system based on the length of the seconds pendulum at London. This standard was adopted in 1824 (below).
In the discussions leading up to the French adoption of the metric system in 1791, the leading candidate for the definition of the new unit of length, the metre, was the seconds pendulum at 45° North latitude. It was advocated by a group led by French politician Talleyrand and mathematician Antoine Nicolas Caritat de Condorcet. This was one of the three final options considered by the French Academy of Sciences committee. However, on March 19, 1791 the committee instead chose to base the metre on the length of the meridian through Paris. A pendulum definition was rejected because of its variability at different locations, and because it defined length by a unit of time. (However, since 1983 the metre has been officially defined in terms of the length of the second and the speed of light.) A possible additional reason is that the radical French Academy didn't want to base their new system on the second, a traditional and nondecimal unit from the ancien regime.
Although not defined by the pendulum, the final length chosen for the metre, one ten-millionth (10⁻⁷) of the pole-to-equator meridian arc, was very close to the length of the seconds pendulum (0.9937 m), within 0.63%. Although no reason for this particular choice was given at the time, it was probably to facilitate the use of the seconds pendulum as a secondary standard, as was proposed in the official document. So the modern world's standard unit of length is certainly closely linked historically with the seconds pendulum.
Britain and Denmark appear to be the only nations that (for a short time) based their units of length on the pendulum. In 1821 the Danish inch was defined as 1/38 of the length of the mean solar seconds pendulum at 45° latitude at the meridian of Skagen, at sea level, in vacuum. The British parliament passed the Imperial Weights and Measures Act in 1824, a reform of the British standard system which declared that if the prototype standard yard was destroyed, it would be recovered by defining the inch so that the length of the solar seconds pendulum at London, at sea level, in a vacuum, at 62 °F was 39.1393 inches. This also became the US standard, since at the time the US used British measures. However, when the prototype yard was lost in the 1834 Houses of Parliament fire, it proved impossible to recreate it accurately from the pendulum definition, and in 1855 Britain repealed the pendulum standard and returned to prototype standards.
A pendulum in which the rod is not vertical but almost horizontal was used in early seismometers for measuring earth tremors. The bob of the pendulum does not move when its mounting does, and the difference in the movements is recorded on a drum chart.
As first explained by Maximilian Schuler in a 1923 paper, a pendulum whose period exactly equals the orbital period of a hypothetical satellite orbiting just above the surface of the earth (about 84 minutes) will tend to remain pointing at the center of the earth when its support is suddenly displaced. This principle, called Schuler tuning, is used in inertial guidance systems in ships and aircraft that operate on the surface of the Earth. No physical pendulum is used, but the control system that keeps the inertial platform containing the gyroscopes stable is modified so the device acts as though it is attached to such a pendulum, keeping the platform always facing down as the vehicle moves on the curved surface of the Earth.
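The 84-minute figure is simply the period 2π√(R/g) of a satellite skimming the Earth's surface; an actual pendulum with that period would need a length equal to the Earth's radius, which is why Schuler tuning is implemented in the control system rather than with a physical pendulum. A minimal check, using round numbers for R and g:

```python
import math

R_earth = 6.371e6   # metres, mean radius of the Earth
g = 9.81            # m/s^2

schuler_period = 2 * math.pi * math.sqrt(R_earth / g)
print(schuler_period / 60)   # ~84.4 minutes
```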
In 1665 Huygens made a curious observation about pendulum clocks. Two clocks had been placed on his mantlepiece, and he noted that they had acquired an opposing motion. That is, their pendulums were beating in unison but in the opposite direction; 180° out of phase. Regardless of how the two clocks were started, he found that they would eventually return to this state, thus making the first recorded observation of a coupled oscillator.
The cause of this behavior was that the two pendulums were affecting each other through slight motions of the supporting mantlepiece. Many physical systems can be mathematically described as coupled oscillators. Under certain conditions these systems can also demonstrate chaotic motion.
Pendulum motion appears in religious ceremonies as well. The swinging incense burner called a censer, also known as a thurible, is an example of a pendulum. Pendulums are also seen at many gatherings in eastern Mexico, where they mark the turning of the tides on the day when the tides are at their highest point. See also pendulums for divination and dowsing.
During the Middle Ages, pendulums were used as a method of torture by the Spanish Inquisition. Using the basic principle of the pendulum, the weight (bob) is replaced by an axe head. The victim is strapped to a table below, the device is activated, and the axe begins to swing back and forth through the air. With each pass, or return, the pendulum is lowered, gradually coming closer to the victim's torso, until it finally cleaves it. Because of the time required before the mortal action of the axe is complete, the pendulum is considered a method of torturing the victim before his or her demise.
Lois Van Wagner
Most students when confronted with a well-formed quartz crystal, or a purple fluorite, or a polished geode, literally jump about demanding to know “Who made this?” and “How did they make this?”. And the subject of diamonds and other gems is avariciously attended to with wide eyes and listening ears. Since the actual atomic structure can be drawn and modeled in a wide variety of mediums even the more humdrum aspects can be made inviting. And the chemical concoctions that can result in some homemade crystals bring out the white-coat scientist in even the most reluctant student. Many of the students own small personal calculators that are solar-powered, or have seen the advertisements in discount store circulars for solar powered exterior lighting. These objects provide a good jump-off point for discussions about crystals and technology.
My unit on crystals will be divided into three categories covering three areas of study. The first area will deal with the actual structure of crystals beginning with a look at the atom, some simple atomic drawings (Bohr models) of elements, and a study of the periodic table of the elements. Next we will look at compounds and their bonds. This will lead to the understanding of how crystals “look” and the shapes they can take. Within this section we will draw and construct in three dimensions paper models of six of the basic crystal shapes. We will also grow some of our own crystals in the laboratory.
The second major heading in my unit will focus on minerals. We will learn about the characteristics of a mineral: the chemical composition, mineral color, luster, cleavage, hardness (and how this property is related to crystal structure), and specific gravity. The characteristics of minerals lend themselves nicely to a mineral identification lab., and a lab. on specific gravity of minerals and selected rock samples. Some of the other topics in this section will include a study of the various forms of quartz; gems, especially diamonds; the formation of mineral crystals—by igneous, sedimentary, and metamorphic processes; and some stories of famous (or infamous) gems.
The final section of this unit will delve into the uses for crystals which modern technology has fostered, specifically solar cells, transistors, and liquid crystals.
The unit is designed for the middle school age level, specifically for the eighth grade Earth Science curriculum. In the two years that I have used this curriculum unit with my eighth grade classes I have amplified or omitted sections depending on the interest and abilities of the various classes. In this way I have been able to use the unit with students that range in ability from very high to very low.
Throughout this paper I am indebted to the teaching and guidance of Dr. Werner Wolf and to the following books and sources:
On the topic of crystal growth and structure: Alan Holden and Phylis Morrison, Crystals and Crystal Growing; and Elizabeth Woods, Crystals—A Handbook for School Teachers.
On the topic of quartz and related minerals: Cornelius Hurlbut, Minerals and Man.
On the topic of gemstones, natural and man made: Cornelius Hurlbut, Minerals and Man; Joel Arem, Man-made Crystals; and Paul O’Neil, Gemstones.
On the topic of liquid crystals: Frederick Kahn in Physics Today; and Glenn Brown and Peter Crocker in C&EN.
On the topic of solar cells and semi-conductors: Bruce Chalmers in Scientific American; and Christopher Swan, Suncell; Energy, Economy, and Photovoltaics.
Figure 1 illustrates a specific labelling technique that is quite easy to use with children and facilitates accuracy. Since the shells fill using definite laws and patterns involving energy, they are predictable. Beyond the element calcium, however, the laws become quite complex and need extensive explanation and understanding of subshells. It is wise, therefore, to limit the Bohr model drawing to that point at this grade level. Some periodic tables give the number of electrons in the energy levels as part of the information in the block for each element. By looking at this data, some of the students might come up with partial explanations for the building of these shells. In the absence of such data it is sufficient to declare the numbers 2 and 8 "magic numbers" and let them build the smaller atoms.
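For checking the students' Bohr drawings, a minimal Python sketch of the simple shell-filling pattern is given below. It is deliberately limited to elements up to calcium (atomic number 20), the cut-off suggested above, since beyond that the subshell rules take over.

```python
# Shell capacities for the simple 2-8-8-2 filling pattern, valid up to calcium.
SHELL_CAPACITY = [2, 8, 8, 2]

def bohr_shells(atomic_number):
    """Return the electron count in each shell for elements up to Z = 20."""
    shells = []
    remaining = atomic_number
    for capacity in SHELL_CAPACITY:
        if remaining <= 0:
            break
        shells.append(min(capacity, remaining))
        remaining -= capacity
    return shells

print(bohr_shells(11))   # sodium   -> [2, 8, 1]
print(bohr_shells(17))   # chlorine -> [2, 8, 7]
print(bohr_shells(20))   # calcium  -> [2, 8, 8, 2]
```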
Two activities which can be used with the students are to be found in Appendix 1, listed within Activity 2. The first is a simple introduction to the various crystal shapes by drawing them and identifying their titles. The second has the students construct three-dimensional paper models of the crystal shapes using the diagrams given there (a compact encoding of the six systems is sketched after the list below). In doing so, the student can in very concrete terms understand the variety of angles and lengths involved.
- 1. Isometric or cubic — three edges of equal length and at right angles to one another.
- 2. Tetragonal — three edges at right angles but only two edges of equal length.
- 3. Orthorhombic — three edges at right angles but all edges of different lengths.
- 4. Monoclinic — two edges at right angle, the other angle not; and all edges of different lengths.
- 5. Triclinic — all three edges of different lengths and all angles not at right angles.
- 6. Hexagonal — two edges are equal and make angles of 60 to 120 degrees with each other. The third edge is at right angles to them and of different length.
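As mentioned above, the six systems can also be encoded compactly for comparison. The Python sketch below records only the simplified edge-and-angle facts from the list; a full crystallographic description would use the lattice parameters a, b, c and the angles between them.

```python
# Simplified traits of the six crystal systems listed above:
# how many of the three edges are equal in length, and how many of the
# three inter-edge angles are right angles.
CRYSTAL_SYSTEMS = {
    "isometric (cubic)": {"equal_edges": 3, "right_angles": 3},
    "tetragonal":        {"equal_edges": 2, "right_angles": 3},
    "orthorhombic":      {"equal_edges": 0, "right_angles": 3},
    "monoclinic":        {"equal_edges": 0, "right_angles": 2},
    "triclinic":         {"equal_edges": 0, "right_angles": 0},
    "hexagonal":         {"equal_edges": 2, "right_angles": 2},  # third edge at right angles to the other two
}

for name, traits in CRYSTAL_SYSTEMS.items():
    print(f"{name:18s} equal edges: {traits['equal_edges']}   right angles: {traits['right_angles']}")
```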
Since most of their crystals will be formed by solution the student needs to add the words solute, solvent, and solution to his vocabulary. The solute is the substance being dissolved and the solvent is the substance doing the dissolving. A solvent can hold in solution just so much of the solute. At this point we say the solution is saturated. If there is less solute in the solution than it would ideally hold we would then say it is an unsaturated solution. And in some cases such as when we heat the solvent we can continue to add solute and it will dissolve. When the heat source is removed and the solution’s temperature falls the extra solute may remain in solution. This fragile situation is called supersaturation and is the basis for our crystal growth experiments.
Solubility, or the amount of solute which can be dissolved in the solvent, is affected by a number of factors, one of which is the temperature of the solvent. Generally speaking we increase the solubility of a solid solute when we increase the temperature of the solvent. (This is not true of the solubility of gases in a solvent, as is witnessed by anyone who has sipped a glass of warm, flat soda.) In Appendix 2 there are some solubility figures which the student can use to set up graphs of solubility curves.
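The same idea can be put numerically. The Python sketch below uses made-up solubility figures of the general kind tabulated in Appendix 2 (real values should come from the appendix or a chemistry handbook) and checks whether a given amount of dissolved solute exceeds the interpolated solubility curve at a given temperature, i.e. whether the solution is supersaturated:

```python
# Illustrative solubility data: temperature (deg C) -> grams of solute
# dissolved per 100 cc of water.  These numbers are made up for the example.
SOLUBILITY = {0: 28, 20: 34, 40: 40, 60: 45, 80: 51, 100: 56}

def is_supersaturated(grams_dissolved, temperature_c):
    """Compare dissolved solute against the linearly interpolated curve."""
    temps = sorted(SOLUBILITY)
    lower = max(t for t in temps if t <= temperature_c)
    upper = min(t for t in temps if t >= temperature_c)
    if lower == upper:
        limit = SOLUBILITY[lower]
    else:
        fraction = (temperature_c - lower) / (upper - lower)
        limit = SOLUBILITY[lower] + fraction * (SOLUBILITY[upper] - SOLUBILITY[lower])
    return grams_dissolved > limit

# 50 g dissolved at 90 C is still unsaturated; cool the same solution to 30 C
# and it becomes supersaturated -- the condition exploited to grow crystals.
print(is_supersaturated(50, 90))   # False
print(is_supersaturated(50, 30))   # True
```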
Crystal growth is a very orderly and regulated process. A crystal grows from the outside, with the atoms of the compound being added according to a very specific pattern. If there is not enough space for the crystal to grow unhindered it will increase only until it meets something which gets in its way and then stop. Often many small crystals begin forming at the same time, and they grow until their edges meet at varying angles. They do not join to form a single large crystal but rather remain a jumble of small individual crystals forming a polycrystalline mass. The adjoining faces of the crystals are called the grain boundaries. These boundaries are particularly evident in metals which have formed by fairly rapid cooling of the molten form. During the cooling process innumerable small crystals form and grow until they bump into a neighboring crystal.
Crystals can form from the cooling or evaporation of solutions, or form the cooling of molten solid, or the cooling of vaporized substances. In Appendix 3 you will find a number of experimental techniques for demonstrating crystal growth and for student participation in crystal growing.
Many crystals in nature demonstrate this mixed crystal condition in the replacement of aluminum by chromium or sometimes iron. Rubies are a good example of this, being composed of aluminum oxide with chromium replacing some of the aluminum, and also sapphires which replace the aluminum with titanium and iron.
In some cases a slightly different atomic substance can enter a crystal but only in small quantities. This is called a substitutional impurity. A most relevant example of this is substitution of phosphorus or boron atoms in silicon crystals. These “impure” compounds are used to make transistors for electronic instruments.
Sometimes a different kind of impurity enters a crystal. These foreign atoms may be very small compared to the host substance and fit in between the orderly arranged host atoms. If the host substance has a generous size pattern the invading atoms could be as large as the host atoms themselves. The additional atoms are called interstitial impurities. A well known example of this is carbon and iron, which makes steel.
A third kind of defect could be called a vacancy. This results from very rapid crystal growth during which some of the atomic sites are simply not filled. The milky or veiled appearance of home-grown crystals, however, is caused by very large openings called voids. It generally occurs when the evaporation of solvent proceeds too rapidly and incomplete crystallization happens. The white coloration is caused by the presence of a liquid solution that is trapped in the open spaces of the crystal. Vacancies on the other hand are far too small to be visible.
Minerals are natural substances that are inorganic and not the result of any living process, therefore ruling out coal, oil, or pearls. It must also have a specific chemical formula, made up of atoms in a definite ratio. In addition the atoms must have a definite and specific arrangement in space. It is because of these characteristics that minerals have unique properties which can be used to differentiate them form one another.
Many minerals are made up primarily of elements which impart no strong color of their own, and only minute amounts of a coloring agent can have striking results. Some color guidelines are: red may indicate the presence of chromium or hematite, green can indicate chlorite or chromium, and blue can indicate the presence of titanium or titanium and iron. The presence of copper ions can result in shades of green or blue and manganese can result in shades of red. It is sometimes helpful to determine the color of a mineral's streak by rubbing the sample on an unglazed porcelain streak plate. This powdered residue is often more accurate in indicating true color and many mineral identification handbooks include a list of streak colors.
Hardness of a mineral is shown by its resistance to being scratched. This is related to the crystal structure in that the more tightly bonded the atoms, the harder the surface's resistance to being etched will be. Diamond is the hardest mineral but is not readily available for student experimentation. Corundum is the next step down and is inexpensively available. In 1812 Friedrich Mohs devised a rough scale of hardness that is invaluable in mineral identification. Diamond at number 10 is the hardest, and talc at number 1 is the softest. The intervals between the numbers are not equal, however, and the difference between corundum at 9 and diamond at 10 is greater than the entire range of 1 to 9!
Mohs Scale of Mineral Hardness
Cleavage is a reflection of the electrical forces acting between the atoms which result in the crystal breaking along atomic planes that are parallel to crystal faces. The children are asked to look for these flat faces and simply to indicate whether the mineral has “good” cleavage or instead breaks with rather raggedy edges and is therefore declared to be fractured. Or if the student has already looked at crystal shape pictures and constructed paper models of them, they can try naming the crystal system it belongs to.
- 1. Talc
- 2. Gypsum
- 3. Calcite
- 4. Fluorite
- 5. Apatite
- 6. Feldspar
- 7. Quartz
- 8. Topaz
- 9. Corundum
- 10. Diamond
Another means of identifying minerals is specific gravity. This measurement can be the most helpful identifying characteristic of all as it is apt to be the most reliable. The students can have a very interesting lab built around this property. Directions for this lab are in Appendix 1, Activity 4.
The story of Archimedes and his quest for a way to determine the value of the king’s crown is a sure-fire attention-getter to start the lesson. According to the legend, in about 250 B.C. Archimedes was given the task of determining if a crown belonging to King Hiero was pure gold or only an alloy of gold and silver. It is said that upon easing into his tub the bath water spilled over the edge and it came to him that the volume of water lost was the same as the volume of his body, and he could use the same technique to determine the volume of the crown. Since it was known that gold and silver have different densities, the only thing that would remain would be to take an accurate measure of the weight of the crown and divide this by the volume of the crown. The resulting density figure could be compared with the density of gold, and the truth would be known. It is said that with this revelation, Archimedes leaped from his bath and ran, forgetting the state of his undress, through the streets of Syracuse in Sicily exclaiming, “Eureka! — I have found it!” on his way to the palace! A sad footnote to the story is that the crown was indeed not the pure gold it had been portrayed as, and the unfortunate merchant met an uncomfortable end. Or so they say.
Carried one step further, the concept of specific gravity is based on the physical law that an object immersed in water loses as much weight as an equivalent volume of water would weigh. With a spring balance and a water pan the students can determine the specific gravity of a variety of minerals. Experience has shown that fairly large specimens will give the best results. See Appendix 1 for directions for labs on this and related topics; Activities 4, 5, and 6.
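The bookkeeping the students do on the Activity 4 chart can be expressed as a small Python function; the sample weights below are made up purely for illustration.

```python
def specific_gravity(mass_in_air_g, mass_in_water_g):
    """Specific gravity = mass in air / loss of mass when weighed in water.

    The loss of mass equals the mass of the displaced water, so no volume
    measurement is needed.
    """
    loss_g = mass_in_air_g - mass_in_water_g
    return mass_in_air_g / loss_g

# Illustrative reading: 150 g in air and 120 g submerged means 30 g of water
# was displaced, giving a specific gravity of 5.0.
print(specific_gravity(150.0, 120.0))   # 5.0
```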
In 1880 the Curies discovered another peculiar property of quartz while studying the electrical conductivity of crystalline bodies. They discovered that pressure on plates of quartz caused a deflection of the needle on a sensitive electrometer. This is called the piezoelectric effect. It occurs when the crystal is squeezed slightly out of shape and then springs back. This shape change actually affects the crystal at the atomic level causing a movement of ions, with their attendant electric charges. This motion of the electrically charged particles constitutes flow of electrons, or electricity. This particular characteristic is now used to control and stabilize the frequency of a radio transmitter or to regulate watches.4
Another form of quartz known and valued for its beauty is amethyst. This mineral is almost pure SiO2 with only a trace of iron. As the amount of iron increases, so does the intensity of the violet color, so it is believed to be the coloring agent. According to folklore the amethyst gives its wearer great power, increased intelligence, and strength.
Smoky quartz does not differ from clear quartz in chemical composition. In fact when it is heated to very high temperatures the “smoky” color vanishes, and it looks identical to clear crystalline quartz. The color can be restored by treating the crystal with a beam of x-ray radiation. Scientists believe that the color of smoky quartz is a result of natural radiation in the earth.
Agate is a common form of quartz which does not have any external evidence of its crystal nature. The extremely tiny crystalline particles are so intergrown that they appear smoothly mixed. Agate is used decoratively and as jewelry, especially in the onyx form.5
A form of quartz that is unique and appears to be most "un-quartz like" is the opal. It contains a rather large percentage of water, ranging from four to twenty percent. And a complex internal structure of microscopic silica fitted together in a lattice pattern results in diffraction of the light hitting it, forming rainbows of brilliant color as the gem is rotated. Opal is a low-pressure and low-temperature mineral and is formed at the earth's surface by deposition from ground water or by the evaporation of hydrothermal, or hot water, springs as they rise to the surface and cool, leaving opal mineral behind. Because of the rather large amount of water present in opal, it tends to be relatively soft (5.5 to 6.5) and low in specific gravity. These qualities limit its use as a gem, and it is usually found mounted in pendants and pins where the stone is relatively protected.6
Man is not at a total loss in this field. Currently we are producing some 44,000 pounds for industrial use annually by a process developed by H. Tracy Hall for the General Electric Research Labs in the early 1950s. His process involved a mixture of graphite powder and an iron compound placed in a hydraulic press. This press was able to generate a force of more than 1.5 million pounds per square inch! To that was added an electrical current which heated the mix to over 4,800 degrees Fahrenheit. From this was produced low quality diamond grit used widely for industrial purposes as an abrasive.
Gem quality diamonds are another story. Instead of the less expensive carbon sources which are used in the manufacture of industrial diamonds, the feed material for gem quality diamonds is the industrial grit, and the pressure and temperatures must be maintained for long periods of time, up to a week. From this we can obtain gem quality diamonds of up to one carat, but unfortunately the cost of manufacture is higher than the present cost of mining the natural stones.7
Natural diamonds are thought to be formed deep within the earth, probably 90 to 120 miles down within the upper regions of the mantle. Here pressures of 975,000 pounds per square inch and temperatures of at least 2,700 degrees Fahrenheit may cause carbon atoms to crystallize into tetrahedral shapes of great strength. Diamond is more resistant to scratching than any other mineral, only another diamond can mark it. It is resistant to acids and alkalis. It is brilliant and has very high dispersive qualities which result in the flashes of light reflecting from the cut stone. Dispersion is the ability of a substance to separate white light into its component colors just as a prism or water droplets forming a rainbow. It also has a relatively high specific gravity, 3.5, which results in it being found in placer deposits, those areas in stream beds where heavy and often valuable particles settle out and collect in quantity.
Diamonds were first found in India and for thousands of years this was the only source. They were not mined but rather found in stream gravel and alluvial deposits. Some famous stones from India with fascinating histories are the Koh-i-nor and the Great Mogul. In the early 1700s diamonds were discovered in Brazil. Men panning for gold found clear pebbles that were later recognized as diamonds! Many of these South American gems were shipped to India to be sold in their markets to Europeans. As India’s sources began to dry up, Brazil became more acceptable in the world’s eyes as a diamond source. And so for a while Brazil was the principal producer of diamonds. Even today many fine gemstones come from there.
The most extraordinary diamond finds have occurred only within the last one hundred years or so. In 1866 a Boer farmer's son found a shiny pebble on a river bank in South Africa. It turned out to be a 21.5 carat diamond! Adventurers from all over the world descended on the site, turning it into a free-for-all not unlike the American gold rush. Diamonds were recovered from the river bank, the surface soil, deeper "yellow ground," and finally, at depths of fifty to sixty feet, the "blue ground." This rock is the original matrix in which the diamonds formed more than 15 million years ago. Diamonds have also been found beneath the beach sand at the mouth of the Orange River and now mining is being done in the Atlantic Ocean in that area.8
Very few diamonds have been found elsewhere in the world. An occasional gem has been found in the American midwest in glacial deposits or in dunes. And in Arkansas some have been found in their rock matrix similar to the blue ground of South Africa. They have also been found in the Soviet Union, but not much is known about these.
Sedimentary formations may contain gems with two separate histories. Some of the sedimentary rocks hold crystals that were formed elsewhere and then were moved by erosional forces to sediment basins where they became part of the deposit. Other gems were formed when groundwater seeped down through the volcanic ash or other sediments dissolving minerals and moving them into pockets. Opal and turquoise are examples of this type of gem.
When tectonic processes, volcanic activity, or deep burial subject rock to great heat and pressure, metamorphic forms may result. The partial melting of the parent rock allows specific minerals to escape and move to areas where they concentrate into gems. Examples are garnets, which can be found here in Connecticut, and the famous Burmese rubies and sapphires.
Displaying geodes to the students is an extremely valuable tool as it is obvious to the group that the individual perfectly formed crystals could not have been carved out by man. This idea of the human manufacture of crystals, especially the particularly beautiful quartz or pyrite crystals, is quite common among students at this grade level.
When photons strike the solar cell, bonded electrons are bounced right out of their positions creating many more conduction n-electrons and holes on both sides of the junction. Since there are already so many electrons on the n-type side and so many holes on the p-type side, the additional new ones are only a tiny proportion; but by making new holes on the n-type side and new conduction electrons on the p-type side, the solar cell is unbalanced. The electrons from the p-type side move across the junction creating a flow of electrons, or electricity! This flow moves electrons out through the n-type layer onto a conductive wire grid which is connected to a circuit that is completed by an attachment to the p-type layer.12 Read more about this fascinating but complex topic in Chalmers’ article or Swan’s book.
The simplest, or first generation, displays, with a single electrical lead connected to a single picture element, form the seven-segment number displays seen on clocks and watches. Second generation displays attach four picture elements to one electrical lead and display the seven-segment numbers and also star-burst shapes which can form letters or numbers, useful for pocket calculators.15
More complex wiring and liquid crystals with helical (spiral) axis positions can display 5-32 elements per electrical lead and are used for personal computers. The newest experimental versions are capable of producing TV picture displays on a flat substrate.
Liquid crystals are true liquids but also have some solid properties. Their internal order is very delicate and can be changed by a weak electrical field, magnetism, or even temperature variations. Noticeable optical effects are the result of re-arrangement of the molecules and the resulting changes in refraction (light-bending), reflection, absorption, scattering, or coloring of the visible light from their surface. Liquid crystals modify the ambient light rather than emit their own light and therefore require minimal amounts of power. A typical LCD (liquid crystal device) uses one microwatt per square centimeter of display area.16
A very simplified diagram below shows the effect of an electric current on liquid crystal molecules. This change is visible due to its effect on light waves.
Although liquid crystal technology has only recently been exploited by man for diagnostic tools, displays, new materials such as kevlar, and oil-recovery technology; nature has used these peculiar molecules in living systems right along. The structure of cell membranes and some tissues are liquid crystals. Hardening of the arteries is a result of the deposition of liquid crystals of cholesterol, cells involved in sickle cell anemia have liquid crystal structure, and on a brighter note, it may soon be possible to change the solid form of a gall stone into a liquid crystal form that can be flushed from the body.19 For further information see the Brown or Kahn articles.
The technological development of crystals has taken off in this generation from semiconductors to transistors to integrated circuits to microchips. Always getting smaller but with vastly increased information handling abilities. They are the mainstay of the space and military industries, making possible the impossible in distant space travel, satellite technology, and weapons’ accuracy.
Solar cells are becoming more efficient and more common, and as our energy problems increase, student interest in solar technology has also increased. And the liquid crystals in our students’ watches and pocket calculators are just one more bit of technology waiting to be explained to our students.
There are many very helpful books and articles which can aid the reader in his or her understanding of these interesting, complex topics. Many of these have been listed in the bibliography, but I would like to specifically recommend the Holden book, The Nature of Solids, and the Chalmers article, “Photovoltaic Generation of Electricity,” as two excellent readings with which to begin. They will provide you with the information you will want to feel confident about teaching about crystal technology.
All of the books and articles that are specifically referred to in the text of this unit are available in book or reprint form at the Yale-New Haven Teachers Institute office on Wall Street, New Haven.
Objective: to observe the properties of two substances with different kinds of bonds.
three metal teaspoons
2 cm table salt
2 cm table sugar
folded paper towel hot pad
Record all of your data in the chart below as you do each test.
- 1. Rub a few grains of the salt and then the sugar between your fingers. Which one feels the sharpest or the roughest?
- 2. Examine with the magnifying glass a few grains of both salt and sugar. Describe the grain shape of each.
- 3. Put some of the salt in a spoon. Crush it with the other spoon. Do the same with some of the sugar. Which is the most difficult to crush?
- 4. Light a match or a candle. Hold the teaspoon of salt over the flame for ten seconds. Does it melt? Do the same with a teaspoon of sugar. Which substance appears to have the higher melting temperature?
|Characteristics||Table Salt||Table Sugar|
- 1. Use your book to find out what kind of bonds salt and sugar have.
- 2. Which of these two substances appears to have the stronger bonds?
Objective: to familiarize students with the shapes of crystals in their three dimensional forms.
crystal model outline sheets,1 and 2
- 1. Look at drawings of the six basic crystal shapes. Draw and label each one on drawing paper.
- 2. Using the outline sheet cut out each crystal pattern. Fold along the dotted lines and tape the edges. Identify each shape using your drawings done in part 1.
Objective: to demonstrate in concrete fashion the relationship between atomic arrangement and crystal shape.
12 one inch size styrofoam balls
12 one-half inch size styrofoam balls
green crayons or paint
- 1. Color the large styrofoam balls green to represent chlorine atoms. The smaller size balls will represent sodium atoms.
- 2. Break a toothpick in half, push one end into the chlorine and the other end into the sodium. Do this to all the “atoms” you have. Now you have sodium chloride molecules.
- 3. Using the toothpick halves join one NaCl (sodium chloride) molecule to another, matching up sodium to chlorine each time, and at right angles to one another. They should be placed as tightly together as possible. Look at the diagram to get an idea of how it should look.
- 4. Sprinkle some of the salt grains on a dark surface and inspect them with the magnifying glass. What shape are they? What is the chemical name for this compound?
- 1. What shape is the salt crystal?
- 2. What shape does the styrofoam ball model of the atoms have?
- 3. Is there a relationship between the answers to questions 1 and 2? Why?
Objective: to understand the meaning of specific gravity by determining the specific gravities of several mineral samples experimentally.
thread or string
250 ml beaker
- 1. Tie a twelve inch length of thread to the first sample, then tie the other end to the spring scale. Record the mass on the chart.
- 2. Fill the beaker about two-thirds full with the water. Submerge the sample in the water being careful not to allow the mineral to touch the bottom or sides of the beaker. Record the mass on the chart.
- 3. Subtract the mass in water from the mass in air. This is the “loss of mass in water.” Record this on the chart.
- 4. The “loss of mass in water” amount is the same as the mass of water displaced. Enter this on the chart.
- 5. Use the formula below to calculate the specific gravity of your mineral.
- ____Specific gravity = mass of the mineral in air / mass of the water displaced by the mineral
- 6. Repeat steps 1-5 for each of the samples available.
|DATA CHART||Pyrite||Quartzite||Galena||Unknown Sample|
mass in water
loss of mass
mass of water
- 1. Pyrite is called fool’s gold. How can you tell it from real gold?
- 2. What is an alloy and how can you differentiate it from a pure sample?
- 3. Use the Handbook of Physics and Chemistry to find the material with the highest and lowest specific gravity.
Objective: to determine density of solids using the direct measurement of the volume of the solid.
assortment of blocks:
balance beam scale
- 1. Carefully measure the length, width, and thickness of the block using the metric ruler. Fill in the table below with your results.
- 2. Calculate the volume by multiplying the three numbers and enter the results in the designated space.
- 3. Use the balance beam scale to determine the mass of the block in grams. Enter the result in the designated space.
- 4. Calculate the density and record your answer.
- ____Density = mass / volume
- 5. Repeat the above procedure for all of the blocks and carefully record all of the measurements and results of calculations.
- 1. What is the definition of density?
- 2. Could you use this technique to determine the density of a toy soldier? Why?
- 3. Could you use this density formula to find the density of a steel ball? How?
Objective: to determine the density of an irregularly-shaped solid using the displacement method.
balsa wood block
balance beam scale
250 ml. beaker of water
Directions: We have measured the density directly by using a ruler to find the volume of a regularly-shaped object. We can determine the density of an irregularly-shaped object using the displacement method. We will weigh the object on the scale to find mass; then we will use the graduated cylinder to find out how much water is displaced by the object when it is submerged. The amount of water displaced is the same as the volume of the object. Then we have only to put the numbers in our formula:
Density = mass / volume
one milliliter = one cubic centimeter
1 ml = 1 cm³
- 1. Weigh the three objects. Record the data on the chart below.
- 2. Fill the graduated cylinder to the 50 ml. line with water from your beaker.
- 3. Securely tie each object with a 30 cm. length of thread.
- 4. Submerge one of the objects in the graduated cylinder. Record the new water level on the chart below. Repeat for each object.
- 5. Subtract the first reading (50 ml.) from the second reading. This is the volume displaced, or the volume of the object.
- 6. Divide the mass by the volume to find the density. Remember 1 ml = 1 cm³.
|balsa wood||metal plate||stone|
water and object
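A worked example of the displacement calculation in Python, with made-up readings, following the steps of the procedure above:

```python
def density_by_displacement(mass_g, initial_ml, final_ml):
    """Density = mass / volume, where the volume is the rise in water level
    in the graduated cylinder (1 ml = 1 cm^3)."""
    volume_cm3 = final_ml - initial_ml
    return mass_g / volume_cm3

# Illustrative readings: an 81 g stone raises the water level from 50 ml to
# 80 ml, so its volume is 30 cm^3 and its density 2.7 g/cm^3.
print(density_by_displacement(81.0, 50.0, 80.0))   # 2.7
```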
Objective: to develop skills in mineral identification using some characteristics of minerals.
- mineral samples
- steel nail
- hardened file
- streak plate
- paper towels
- lemon juice or dilute hydrochloric acid (Use acid only on samples kept at demonstration table — do not contaminate your samples at your desk.)
- 1. Read each mineral identification characteristic and use the suggested term or property to describe the mineral you are examining. Repeat for each mineral.
- 2. Characteristics:
Color: apparent external color: black, brown, gray-white, brownish-white, etc.
Luster: describes the shine or light-reflection from a mineral: metallic, pearly, greasy, glassy, silky.
Hardness: describes the resistance to scratching and is measured by the Mohs scale of hardness, #1 to #10, softest to hardest. To test hardness, scratch your sample with the following items in order. When a visible scratch is made (a dent in the rock), that is the hardness.
2.5 — fingernail
3 — penny
4-5 — steel file
6 — hardened file
over 7 — may scratch a streak plate but leaves no color
Streak: Describes the color of the powdered mineral. The streak can have a different color from the external apparent color. It is determined by rubbing the sample firmly over the plate, the color on the plate is the streak. A mineral of hardness 7 or over will have no streak, but may scratch the plate instead.
Magnetism: minerals that are magnetic will cause a compass needle to spin.
Crystal structure: Use your paper crystal models to determine the crystal shape of your sample.
Acid test: put one or two drops of a weak acid on your sample (be sure to use the samples on the demonstration table). If the mineral bubbles and fizzes it contains carbonate. The gas being released is carbon dioxide.
Heft: refers to the weight per volume or size. Terms that could be used would be “heavy,” “medium,” or “light.” These are only relative and imprecise terms but useful for comparing with specific gravity.
Solubility tables: temperature (degrees Celsius) vs. solubility (grams per 100 cc of water) for each substance.
2. What is the solubility of potassium chloride at 90 degrees Celsius?
3. What is the solubility of sodium sulfate at 50 degrees Celsius?
4. What is the solubility of sodium nitrate at 30 degrees Celsius?
To grow crystals from an alum solution dissolve 4 teaspoons of alum powder in a half cup of hot water. Stir until all of the powder dissolves. Cover the beaker with paper or cloth to keep the dust out and slow evaporation. Crystals will begin to form on the bottom of the container. It is important to keep your solution free from drafts or temperature changes. If your room has a widely varying temperature range, set the beaker in a large bucket of water, being careful not to get extra water in the beaker. The water will help prevent any rapid temperature changes.
When some good size crystals have formed, pour off and save the solution and carefully pick out the best crystals with tweezers. Dry the crystals and the tweezers well. Now pour the solution back into the beaker with the remaining crystal masses and gently reheat and redissolve the solute.
The next step is a real test of your dexterity! You must fasten the crystal to a thread, either by means of a slip knot or with a minuscule spot of glue (Duco cement is good). Suspend your hanging crystal “seed” in the cooled solution and cover the top. It is important to be sure there are no extraneous crystal grains on the sides of your crystal seed, on the thread, or in the container. Growth of your seed will be impeded by them. Again place the beaker in the water bath to control temperature during the growing time. This process would result in a single, large, well-formed crystal.20
Other substances which can form nice crystals are borax, salt, sugar, copper sulfate, and Epsom salts. The amount of solute will vary, but you can determine the appropriate amount by trial and error or by checking one of the books given above.
To grow crystals from a melt, obtain some salol (phenyl salicylate, HOC6H4COOC6H5) from a drugstore. Put a small amount on a glass slide or sheet of aluminum pie plate, and heat it with a candle. Since it melts at 42 degrees Celsius, you will not need much flame. As it cools, put a tiny grain of salol powder on the melted salol; this will act as a seed for the crystal to grow around.21
To grow crystals from a vapor, obtain some naphthalene (moth-flakes, C10H8). CAUTION — NAPHTHALENE IS VERY FLAMMABLE AND MUST NOT BE HEATED NEAR AN OPEN FLAME! Woods suggests placing a few flakes in a tall glass jar and placing a loose cover (aluminum foil) on top. Place the base of the jar on a lighted 100-watt bulb, and soon you will see the rising vapor depositing (changing directly from vapor back to solid) onto the upper sides of the container. This experiment should be done in a very well ventilated room, preferably within a vented hood.22
———. Rocks and Minerals. New York: Bantam Books, 1973. A well-illustrated paperback handbook with descriptions of atoms, bonds, and crystal structures. The minerals are grouped by chemical composition and are beautifully illustrated.
Brown, G. and Crooker, P. “Liquid Crystals.” C&EN (January 31, 1983), pp.24-38. Hard to read, but complete discussion of liquid crystals.
Cady, W. “Crystals and Electricity.” Scientific American (December, 1949), pp.46-51. An understandable, concise discussion of piezoelectricity, its uses, and how it works. Highly recommended.
Chalmers, B. “Photovoltaic Generation of Electricity.” Scientific American (October 1976), pp.34-43. An excellent discussion of p, n-type silicon, the p-n junction, and the solar cell circuit. It also covers problems and economics of solar technology. An outstanding reference written in understandable language.
Dana, E. Minerals and How to Study Them. Revised by C. Hurlbut. New York: John Wiley and Sons, 1949. Old but very complete guide to minerals. Crystallography is discussed in detail; the book is conveniently compact and easy to understand.
Desautels, P. Rocks and Minerals. New York: Grosset and Dunlap, 1974. Outstanding demonstration photos at double-page size (16” x 12”), with brief descriptions of the photo subjects and related varieties.
English, G. Getting Acquainted with Minerals. New York: McGraw Hill Book Company, 1958. Textbook of mineralogy with an identification key based on hardness and luster.
Hewitt, P. Conceptual Physics. Boston: Little, Brown and Company, 1985. General college-level physics text.
Hittinger, W. “Metal Oxide Semi-Conductor Technology.” Scientific American (August 1973). Advanced level discussion of semi-conductors (bipolar and unipolar) and integrated circuits.
Holden, A. The Nature of Solids. New York: Columbia University Press, 1965. A basic, comprehensive guide to the physics and chemistry of crystals. It is a good introductory book for a study of crystallography.
Holden, A., and Morrison, P. Crystals and Crystal Growing. Cambridge, Mass.: The MIT Press, 1982. A well-written guide to crystals, covering everything from the physics and chemistry of crystals, structures, and uses to the actual growing techniques. Enjoyable reading and my first choice recommendation.
Hurlbut, C. Dana’s Manual of Mineralogy. New York: John Wiley and Sons, 1959. A detailed text book, but has an excellent key to identify minerals, and several useful pages of tables.
———. Minerals and Man. New York: Random House, 1970. This book is a masterpiece of information and color photographs which reads like a novel. 304 pages of detailed information and stories which will certainly hold your interest and attention. Highly recommended.
Kahn, F. “The Molecular Physics of Liquid-Crystal Devices.” Physics Today (May 1982), pp.66-74. A very technical discussion of liquid crystals with helpful diagrams.
Kirkaldy, J.F. Minerals and Rocks. Poole, England: Blandford Press, 1963. Handbook format with many color photos and some descriptive geology. An important feature is the extensive glossary of terms.
McGavack, J. and LaSalle, D. Crystals, Insects, and Unknown Objects. New York: The John Day Company, 1971. Written by a former Science Supervisor and Assistant Superintendent of Schools in New Haven, this book focuses on hands-on learning techniques and includes a unit on crystal growing and its creative approach to learning.
O’Neil, P. Gemstones. Alexandria, VA.: Time-Life Books, 1983. A richly illustrated, well written book that covers the formation, the composition, and the history of gems. Highly recommended, enjoyable.
Pearl, R. Gem, Minerals, Crystals, and Ores. New York: Odyssey Press, 1964. Interesting discussion of gem cutting. Outstanding glossary of mineral terms, crystal terms, mineral name origins, mining and geology terms, as well as multitudes of listed minerals and ores.
Ransom, J. The Rock Hunter’s Range Guide. New York: Harper and Row, 1964. Good for beginning rock collectors. Explains the use of geologic maps and where to get them, how to prospect new locations, and listings of mineral sites for each state.
———. A Range Guide to Mines and Minerals. New York: Harper and Row, 1964. A more comprehensive guide to mineral collecting, with lists of abandoned mines in the United States.
Read, P.G. Dictionary of Gemmology. London: Butterworth Scientific, 1982. Comprehensive dictionary whose recent publication date makes it useful.
Swan, C. SunCell: Energy, Economy, Photovoltaics. San Francisco: Sierra Club Books, 1986. A comprehensive guide to solar energy dealing with everything from the physics of the cells, the practical uses, the future uses, to the politics and economics of solar energy.
White, J. Color Underground. New York: Charles Scribner Sons, 1971. Beautifully illustrated picturebook of crystals. Large pictures good for displaying with each crystal described and explained. Good for student or teacher use.
Woods, E. Crystals—A Handbook for School Teachers. The International Union of Crystallography, 1972. A small but very useful book giving step-by-step directions for growing a wide variety of crystals plus an assortment of things to do with the crystals you grow. Highly recommended.
Sean D. Pitman M.D.
© December, 2006
Most scientists today believe that various places on this planet, such as Greenland and the Antarctic, contain some very old ice. The ice in these areas appears to be layered in a very distinctive annual pattern. In fact, this pattern is both visually and chemically recognizable and extends downward some 4,000 to 5,000 meters. As the snow from a previous year is buried under a new layer of snow, it is compacted over time by the weight of each additional layer of snow above it. This compacted snow is called the “firn” layer. After several meters, this layered, snowy firn turns into layers of solid ice (note that 30 cm of compacted snow compresses further into about 10 cm of ice).

These layers are much thinner on the Antarctic ice cap than on the Greenland ice cap, since Antarctica averages only 5 cm of "water equivalent" per year while Greenland averages over 50 cm of water equivalent.1,2 Since these layers get even thinner as they are buried under more and more snow and ice, due to compression and lateral flow (see diagram), the thinner layers of the Antarctic ice cap become much harder to count than those of the Greenland ice cap at an equivalent depth. So, scientists feel that the most accurate historical information comes from Greenland, although much older ice comes from other, drier places. Still, the ice cores drilled in the Greenland ice cap, such as the American Greenland Ice Sheet Project (GISP2) and the European Greenland Ice Core Project (GRIP) cores, are felt to be very old indeed - upwards of 160,000 years old.
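To put the quoted accumulation figures in more concrete terms, here is a minimal sketch that converts annual "water equivalent" into an approximate thickness of ice, assuming round densities of 1.00 g/cm3 for water and 0.92 g/cm3 for consolidated glacial ice; the densities are assumptions of the sketch, not values taken from the cited studies.

```python
# Convert annual "water equivalent" accumulation into approximate ice
# thickness, assuming densities of ~1.00 g/cm^3 for water and ~0.92 g/cm^3
# for consolidated glacial ice (assumed round numbers).

WATER_DENSITY = 1.00   # g/cm^3
ICE_DENSITY   = 0.92   # g/cm^3

def ice_thickness_cm(water_equiv_cm):
    # Conservation of mass: water depth * water density = ice depth * ice density
    return water_equiv_cm * WATER_DENSITY / ICE_DENSITY

for site, water_equiv in [("Antarctica (interior)", 5.0), ("Greenland", 50.0)]:
    print(f"{site}: ~{water_equiv} cm water equivalent "
          f"=> ~{ice_thickness_cm(water_equiv):.0f} cm of ice per year (before thinning at depth)")
```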
The Visual Method
But how, exactly, are these layers counted? Obviously, at the surface the layers are easy to count visually – and in Greenland the layers are fairly easily distinguished at depths as great as 1,500 to 2,000 m (see picture). Even here, though, there might be a few problems. How does one distinguish between a yearly layer and a sub-yearly layer of ice? For instance, it is not only possible but also likely for various large snowstorms and/or snowdrifts to lay down extra layers that can mimic annual bands. As one team of ice core researchers acknowledged:
“Fundamentally, in counting any annual marker, we must ask whether it is absolutely unequivocal, or whether nonannual events could mimic or obscure a year. For the visible strata (and, we believe, for any other annual indicator at accumulation rates representative of central Greenland), it is almost certain that variability exists at the subseasonal or storm level, at the annual level, and for various longer periodicities (2-year, sunspot, etc.). We certainly must entertain the possibility of misidentifying the deposit of a large storm or a snow dune as an entire year or missing a weak indication of a summer and thus picking a 2-year interval as 1 year.” 7
Good examples of this phenomenon can be found in areas of very high precipitation, such as the more coastal regions of Greenland. It was in this area, 17 miles off the east coast of Greenland, that Bob Cardin and other members of his squadron had to ditch their six P-38’s and two B-17’s when they ran out of gas in 1942 - the height of WWII. Many years later, in 1981, several members of this original squadron decided to see if they could recover their aircraft. They flew back to the spot in Greenland where they thought they would find their planes buried under a few feet of snow. To their surprise, there was nothing there. Not even metal detectors found anything. After many years of searching, with better detection equipment, they finally found the airplanes in 1988, three miles from their original location and under approximately 260 feet of ice! They went on to actually recover one of them (“Glacier Girl” – a P-38), which was eventually restored to her former glory.20
What is most interesting about this story, at least for the purposes of this discussion, is the depth at which the planes were found (as well as the speed at which the glacier moved). It took only 46 years to bury the planes in over 260 feet (~80 meters) of ice and move them some 3 miles from their original location. This translates into a little over 5 ½ feet (~1.7 meters) of ice, or around 17 feet (~5 meters) of compact snow, per year, and about 100 meters of movement per year. In a telephone interview, Bob Cardin was asked how many layers of ice were above the recovered airplane. He responded by saying, “Oh, there were many hundreds of layers of ice above the airplane.” When told that each layer was supposed to represent one year of time, Bob said, “That is impossible! Each of those layers is a different warm spell – warm, cold, warm, cold, warm, cold.” 21 Also, the planes did not sink in the ice over time, as some have suggested. Their overall density was less than that of the ice or snow, since they remained hollow rather than filling with snow. They were in fact buried by the annual snowfall over the course of almost 50 years.
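The burial-rate arithmetic quoted above is easy to check; the sketch below simply reproduces it from the figures given in the story.

```python
# Back-of-the-envelope check of the Glacier Girl burial figures quoted above.

years = 1988 - 1942            # time between ditching and recovery
ice_depth_ft = 260             # depth at which the planes were found
drift_miles = 3                # lateral movement of the glacier

ft_per_year = ice_depth_ft / years
m_per_year = ft_per_year * 0.3048
drift_m_per_year = drift_miles * 1609.34 / years

print(f"burial rate: ~{ft_per_year:.1f} ft (~{m_per_year:.1f} m) of ice per year")
print(f"lateral movement: ~{drift_m_per_year:.0f} m per year")
```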
Now obviously, this example does not reflect the actual climate of central Greenland or of central Antarctica. As a coastal region, it is exposed to a great many more storms and other sub-annual events, which produce the roughly 17 feet of snow per year. However, even now, large snowstorms drift over central Greenland. And, in the fairly recent warm Hypsithermal period (~4 degrees warmer than today), the precipitation over central Greenland, and even Antarctica, was most likely much greater than it is today. So, how do scientists distinguish between annual layers and sub-annual layers? Visual methods, by themselves, seem rather limited – especially as the ice layers get thinner and thinner as one progresses down the column of ice.
Oxygen and Other Isotopes
Well, there are many other methods that scientists use to help them identify annual layers. One such method is based on the variation between the oxygen isotopes 16O and 18O (and 17O) as it relates to changes in temperature. For instance, water (H2O) containing the heavier 18O isotope evaporates less rapidly and condenses more readily than water molecules that incorporate the lighter 16O isotope. Since 18O requires more energy (warmer weather) to be evaporated and transported in the atmosphere, more 18O is deposited in the ice sheets in the summer than in the winter. Obviously then, the changing ratios of these oxygen isotopes would clearly distinguish the annual cycles of summer and winter, as well as longer periods of warm and cold (such as the ice age) – right? Not quite. One major drawback of this method is that these oxygen isotopes do not stay put. They diffuse over time. This is especially true in the “firn layer” of compacted snow before it turns into ice. So, from the earliest formation of these ice layers, the ratios of oxygen isotopes, as well as other isotopes, are altered by gravitational diffusion and so cannot be used as reliable markers of annual layers as one moves down the ice core column. One of the evidences given for the reality of this phenomenon is the significant oxygen isotope enrichment (versus present-day atmospheric oxygen ratios) found in 2,000-year-old ice from Camp Century, Greenland.3 Interestingly enough, this property of isotope diffusion has long been recognized as a problem. Consider the following comment made by Fred Hall back in 1989:
“The accumulating firn [ice-snow granules] acts like a giant columnar sieve through which the gravitational enrichment can be maintained by molecular diffusion. At a given borehole, the time between the fresh fall of new snow and its conversion to nascent ice is roughly the height of the firn layers in [meters] divided by the annual accumulation of new ice in meters per year. This results in conversion times of centuries for firn layers just inside the Arctic and Antarctic circles, and millennia for those well inside [the] same. Which is to say--during these long spans of time, a continuing gas-filtering process is going on, eliminating any possibility of using the presence of such gases to count annual layers over thousands of years.” 4
Lorius et al., in a 1985 Nature article, agreed, commenting that “Further detailed isotope studies showed that seasonal delta 18O variations are rapidly smoothed by diffusion indicating that reliable dating cannot be obtained from isotope stratigraphy”.29 Jaworowski (whose work is discussed further below in the “Contaminated and Biased Data” section) also notes the following:
The short-term peaks of d18O in the ice sheets have been ascribed to annual summer/winter layering of snow formed at higher and lower air temperatures. These peaks have been used for dating the glacier ice, assuming that the sample increments of ice cores represent the original mean isotopic composition of precipitation, and that the increments are in a steady-state closed system.
Experimental evidence, however, suggests that this assumption is not valid, because of dramatic metamorphosis of snow and ice in the ice sheets as a result of changing temperature and pressure. At very cold Antarctic sites, the temperature gradients were found to reach 500°C/m, because of subsurface absorption of Sun radiation. Radiational subsurface melting is common in Antarctica at locations with summer temperatures below -20°C, leading to formation of ponds of liquid water, at a depth of about 1 m below the surface. Other mechanisms are responsible for the existence of liquid water deep in the cold Antarctic ice, which leads to the presence of vast sub-sheet lakes of liquid water, covering an area of about 8,000 square kilometers in inland eastern Antarctica and near Vostok Station, at near basal temperatures of -4 to -26.2°C. The sub-surface recrystallization, sublimation, and formation of liquid water and vapor disturb the original isotopic composition of snow and ice. . .
Important isotopic changes were found experimentally in firn (partially compacted granular snow that forms the glacier surface) exposed to even 10 times lower thermal gradients. Such changes, which may occur several times a year, reflecting sunny and overcast periods, would lead to false age estimates of ice. It is not possible to synchronize the events in the Northern and Southern Hemispheres, such as, for example, CO2 concentrations in Antarctic and Greenland ice. This is, in part, the result of ascribing short-term stable isotope peaks of hydrogen and oxygen to annual summer/winter layering of ice, and using them for dating. . .
In the air from firn and ice at Summit, Greenland, deposited during the past ~200 years, the CO2 concentration ranged from 243.3 ppmv to 641.4 ppmv. Such a wide range reflects artifacts caused by sampling or natural processes in the ice sheet, rather than the variations of CO2 concentration in the atmosphere. Similar or greater range was observed in other studies of greenhouse gases in polar ice.50
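For reference, the isotope measurements discussed above are conventionally reported in “delta” notation relative to a standard ratio; the short sketch below shows that calculation, assuming the commonly quoted VSMOW reference value (the sample ratio used is a hypothetical illustration).

```python
# Delta notation for oxygen isotope ratios, as used in ice core studies.
# delta-18O (per mil) = (R_sample / R_standard - 1) * 1000, where R is the
# 18O/16O ratio. The standard ratio below is the commonly quoted VSMOW
# value; the sample ratio is a hypothetical illustration.

R_VSMOW = 2005.2e-6          # 18O/16O of Vienna Standard Mean Ocean Water

def delta_18O(r_sample, r_standard=R_VSMOW):
    return (r_sample / r_standard - 1.0) * 1000.0

# Polar snow is depleted in 18O, so delta values are strongly negative;
# a hypothetical sample ratio:
r_sample = 1935.0e-6
print(f"delta-18O = {delta_18O(r_sample):.1f} per mil")   # about -35 per mil
```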
Contaminated and Biased Data
According to Prof. Zbigniew Jaworowski, Chairman of the Scientific Council of the Central Laboratory for Radiological Protection in Warsaw, Poland, the ice core data is not only contaminated by procedural problems, it is also manipulated in order to fit popular theories of the day.
Jaworowski first argues that ice cores do not fulfill the essential criteria of a closed system. For example, there is liquid water in ice, which can dramatically change the chemical composition of the air bubbles trapped between ice crystals. "Even the coldest Antarctic ice (down to -73°C) contains liquid water. More than 20 physicochemical processes, mostly related to the presence of liquid water, contribute to the alteration of the original chemical composition of the air inclusions in polar ice. . . Even the composition of air from near-surface snow in Antarctica is different from that of the atmosphere; the surface snow air was found to be depleted in CO2 by 20 to 50 percent . . ."50
Beyond this, there is the problem of fractionation of gases, which, as the “result of various solubilities in water (CH4 is 2.8 times more soluble than N2 in water at 0°C; N2O, 55 times; and CO2, 73 times), starts from the formation of snowflakes, which are covered with a film of supercooled liquid.”50
"[Another] one of these processes is formation of gas hydrates or clathrates. In the highly compressed deep ice all air bubbles disappear, as under the influence of pressure the gases change into the solid clathrates, which are tiny crystals formed by interaction of gas with water molecules. Drilling decompresses cores excavated from deep ice, and contaminates them with the drilling fluid filling the borehole. Decompression leads to dense horizontal cracking of cores [see illustration], by a well known sheeting process. After decompression of the ice cores, the solid clathrates decompose into a gas form, exploding in the process as if they were microscopic grenades. In the bubble-free ice the explosions form a new gas cavities and new cracks. Through these cracks, and cracks formed by sheeting, a part of gas escapes first into the drilling liquid which fills the borehole, and then at the surface to the atmospheric air. Particular gases, CO2, O2 and N2 trapped in the deep cold ice start to form clathrates, and leave the air bubbles, at different pressures and depth. At the ice temperature of –15°C dissociation pressure for N2 is about 100 bars, for O2 75 bars, and for CO2 5 bars. Formation of CO2 clathrates starts in the ice sheets at about 200 meter depth, and that of O2 and N2 at 600 to 1000 meters. This leads to depletion of CO2 in the gas trapped in the ice sheets. This is why the records of CO2 concentration in the gas inclusions from deep polar ice show the values lower than in the contemporary atmosphere, even for the epochs when the global surface temperature was higher than now."50
No study has yet demonstrated that the content of greenhouse trace gases in old ice, or even in the interstitial air from recent snow, represents the atmospheric composition.
The ice core data from various polar sites are not consistent with each other, and there is a discrepancy between these data and geological climatic evidence. One such example is the discrepancy between the classic Antarctic Byrd and the Vostok ice cores, where an important decrease in the CO2 content of the air bubbles occurred at the same depth of about 500 meters, but at which the ice ages differ by about 16,000 years. In the approximately 14,000-year-old part of the Byrd core, a drop in the CO2 concentration of 50 ppmv was observed, but in similarly old ice from the Vostok core, an increase of 60 ppmv was found. In about 6,000-year-old ice from Camp Century, Greenland, the CO2 concentration in air bubbles was 420 ppmv, but was 270 ppmv in similarly old ice from Byrd Antarctica . . .
One can also note that the CO2 concentration in the air bubbles decreases with the depth of the ice for the entire period between the years 1891 and 1661, not because of any changes in the atmosphere, but along the increasing pressure gradient, which is probably the result of clathrate formation, and the fact that the solubility of CO2 increases with depth.
If this isn't already bad enough, Jaworowski proceeds to argue that the data, as contaminated as it is, has been manipulated to fit popular theories of the day.
Until 1985, the published CO2 readings from the air bubbles in the pre-industrial ice ranged from 160 to about 700 ppmv, and occasionally even up to 2,450 ppmv. After 1985, high readings disappeared from the publications!50
Another problem is the notion that lead levels in ice cores correlate with the increased use of lead by various more and more modern civilizations such as the Greeks and Romans and then during European and American industrialization. A potential problem with this notion is Jaworowski's claim to have "demonstrated that in pre-industrial period the total flux of lead into the global atmosphere was higher than in the 20th century, that the atmospheric content of lead is dominated by natural sources, and that the lead level in humans in Medieval Ages was 10 to 100 times higher than in the 20th century."50 Beyond this potential problem, there is also the problem of heavy metal contamination of the ice cores during the drilling process.
Numerous studies on radial distribution of metals in the cores reveal an excessive contamination of their internal parts by metals present in the drilling fluid. In these parts of cores from the deep Antarctic, ice concentrations of zinc and lead were higher by a factor of tens or hundreds of thousands, than in the contemporary snow at the surface of the ice sheet. This demonstrates that the ice cores are not a closed system; the heavy metals from the drilling fluid penetrate into the cores via micro- and macro-cracks during the drilling and the transportation of the cores to the surface.50
Professor Jaworowski summarizes with a most interesting statement:
It is astonishing how credulously the scientific community and the public have accepted the clearly flawed interpretations of glacier studies as evidence of anthropogenic increase of greenhouse gases in the atmosphere. Further historians can use this case as a warning about how politics can negatively influence science.50
While this statement is most certainly a scathing rebuke of the scientific community as it stands, I would argue that Jaworowski doesn't go far enough. He doesn't consider that the problems he so carefully points out as the basis for his own doubts concerning global warming may also pose significant problems for the validity of using ice cores to reliably establish the passage of vast spans of time supposedly recorded in the layers of large ice sheets.
So, it seems as though isotope ratios are severely limited, if not entirely worthless, as yearly markers for ice core dating beyond a very short period of time. However, there are several other dating methods, such as correlating impurities in the layers of ice with known historical events – volcanic eruptions in particular.
After a volcano erupts, the ash and other elements from the eruption fall out and are washed out of the atmosphere by precipitation. This fallout leaves “tephra” (microscopic shards of glass from the ash fallout – see picture), sulfuric acid, and other chemicals in the snow and subsequent ice from that year. Sometimes the tephra fallout can be specifically matched via physical and chemical analysis to a known historical eruption. This analysis begins when electrical conductivity measurements (ECM) are made along the entire length of the ice core. Increases in electrical conductivity indicate the presence of increased acid content. When a volcano erupts, it spews out a great deal of sulfur-rich gases. These are converted in the atmosphere to sulfuric acid aerosols, which end up in the layers of ice and increase the ECM readings. The higher the acidity, the better the conduction. Sections of ice from a region with an acidic spike are then melted and filtered through a capillary-pore membrane filter. An automated scanning electron microscope (SEM), equipped for x-ray microanalysis, is used to determine the size, shape and elemental composition of hundreds of particles on the filter. Cluster analysis, using a multivariate statistical routine that measures the elemental compositions of sodium, magnesium, aluminum, silicon, potassium, calcium, titanium and iron, is done to identify the volcanic “signature” of the tephra particles in the sample. Representative tephra particles are re-located for photomicrography and more detailed chemical analysis. Then tephra is collected from near the volcanic eruption that may have produced the fallout in the core and is ground into a fine powder, dispersed in liquid, and filtered through a capillary-pore membrane. Then automated SEM and chemical analysis is used on this known tephra sample to find its chemical signature and compare it with the unknown sample found in the ice core - to see if there is a match.22
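At its core, the matching step described above compares the elemental composition of unknown tephra shards with that of reference tephra from candidate eruptions. The sketch below illustrates that idea only in miniature: the compositions are hypothetical, and a plain Euclidean distance stands in for the multivariate cluster analysis used in the published procedure.

```python
# A minimal illustration of matching a tephra "signature" by comparing
# elemental compositions. The compositions below are hypothetical weight
# percentages, and a simple Euclidean distance stands in for the full
# multivariate cluster analysis described above.

import math

ELEMENTS = ["Na", "Mg", "Al", "Si", "K", "Ca", "Ti", "Fe"]

def distance(a, b):
    return math.sqrt(sum((a[e] - b[e]) ** 2 for e in ELEMENTS))

# Hypothetical shard found in an ice core layer:
unknown = {"Na": 3.8, "Mg": 0.5, "Al": 14.2, "Si": 71.0, "K": 2.7,
           "Ca": 1.6, "Ti": 0.4, "Fe": 2.1}

# Hypothetical reference tephras from candidate eruptions:
references = {
    "eruption A": {"Na": 3.9, "Mg": 0.4, "Al": 14.0, "Si": 71.5, "K": 2.8,
                   "Ca": 1.5, "Ti": 0.4, "Fe": 2.0},
    "eruption B": {"Na": 2.1, "Mg": 2.9, "Al": 16.8, "Si": 58.0, "K": 1.0,
                   "Ca": 6.5, "Ti": 1.1, "Fe": 7.3},
}

best = min(references, key=lambda name: distance(unknown, references[name]))
for name, comp in references.items():
    print(f"{name}: distance = {distance(unknown, comp):.2f}")
print(f"closest match: {best}")
```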
Tephra from several well-known historical volcanoes has been analyzed in this way. For example, Crater Lake in Oregon was once a much larger mountain (Mt. Mazama) before it blew up as a volcano. In the mid-1960s scientists dated this massive explosion, with the use of radiocarbon dating methods, at between 6,500 and 7,000 years before present (BP). Then, in 1979, Scientific American published an article about a pair of sagebrush bark sandals that were found just under the Mazama tephra at Fort Rock Cave. These sandals were carbon-14 dated to around 9,000 years BP. Even though this date was several thousand years older than expected, the article went on to say that the bulk of the evidence still put the most likely eruption date of Mt. Mazama at around 7,000 years BP. 23,24 Later, a “direct count” of the layers in the ice core obtained from Camp Century, Greenland put the date of the Mazama tephra at 6,400±110 years BP.23,25 Then, at the 16th INQUA conference held in June 2003 in Reno, Nevada (attended by over 1,000 scientists studying the Quaternary period), Kevin M. Scott noted in an abstract that the Mazama Park eruptive period had been “newly dated at 5,600-5,900 14C yrs BP.” Scott went on to note that this new date “includes collapses and eruptions previously dated throughout a range of 4,300 to 6,700 14C yrs BP.” 26 At this point it should also be noted that the carbon-14 dating method is itself being calibrated by the Greenland ice cores, so it is circular to argue that the Greenland ice core dates have been validated by carbon-14 analysis.26
Another famous eruption, that of the Mediterranean volcano Thera, was so large that it effectively destroyed the Minoan (Santorini) civilization. This is thought to have happened in the year 1628 B.C., since tree rings from that region show a significant disruption matching that date. Of course, such an anomaly was looked for in the ice cores. As predicted, layers in the “Dye 3” Greenland ice core showed a major eruption at 1645 B.C., plus or minus 20 years. This match was used to confirm or calibrate the ice core data as recently as 2003.
Interestingly enough, though, the scientists did not have the budget at the time to do a systematic search throughout the whole ice core for such large anomalies that would also match a Thera-sized eruption. Now that such detailed searches have been done, many such sulfuric acid peaks have been found at numerous dates within the 18th, 17th, 16th, 15th, and 14th centuries B.C. 35 Beyond this, tephra analyzed from the “1620s” ice core layers did not match the volcanic material from the Thera volcano. The investigators concluded:
"Although we cannot completely rule out the possibility that two nearly coincident eruptions, including the Santorini eruption, are responsible for the 1623 BC signal in the GISP ice core, these results very much suggest that the Santorini eruption is not responsible for this signal. We believe that another eruption led not only to the 1623 BC ice core signal but also, by correlation, to the tree-ring signals at 1628/1627 BC." 36
Then, as recently as March of 2004, Pearce et al published a paper declaring that another volcano, the Aniakchak Volcano in Alaska, was the true source of the tephra found in the GRIP ice core at the "1645 ± 4 BC layer." These researchers went on to say that, "The age of the Minoan eruption of Santorini, however, remains unresolved." 37
So, here we have a clearly erroneous match between a volcanic eruption and both tree rings and ice core signals. What is most curious, however, is that many scientists still declare that ice cores are solidly confirmed by such means. Beyond this, as flexible as the dating here seems to be, the Mt. Mazama and Thera eruptions are still about the oldest eruptions that can be identified in the Greenland ice cores. There are two reasons for this. One reason is that below 10,000 layers or so in the ice core the ice becomes too alkaline to reliably identify the acid spikes associated with volcanic eruptions.5 Another reason is that the great majority of volcanic eruptions throughout history were not able to get very much tephra into the Greenland ice sheet. So, the great majority of volcanic signals are detected via their acid signal alone.
This presents a problem. A review of four eruption chronologies constructed since 1970 illustrates this problem quite nicely. In 1970, Lamb published an eruption chronology for the years 1500 to 1969. The work recorded 380 known historical eruptions. Ten years later, Hirschboek published a revised eruption chronology that recorded 4,796 eruptions for the same period – a very significant increase over Lamb’s figure. One year later, in 1981, Simkin et al. raised the figure to 7,664 eruptions, and Newhall et al. increased the number further a year later to 7,713. It is also interesting to note that Simkin et al. recorded 3,018 eruptions between 1900 and 1969, but only 11 eruptions from between 1 and 100 AD. So obviously, as one goes back through recent history, the number of known volcanic eruptions drops off dramatically, though eruptions were most certainly still occurring – just without documentation. Based on current rates of volcanic activity, an expected eruption rate for the past several thousand years comes to around 30,000 eruptions per 1,000 years.25
With such a high rate of volcanic activity, including many rather large volcanoes, how are scientists so certain that a given acid spike on ECM clearly represents any particular volcano – especially when the volcanic eruption in question happened more than one or two thousand years ago? The odds that at least one volcanic signal will be found in an ice core within a very small “range of error” around any supposed historical eruption are extremely good - even for large volcanoes. Really, is this all that far from a self-fulfilling prophecy? How then can the claim be made that historical eruptions validate the dating of ice cores to any significant degree?
“The desire to link such phenomena [volcanic eruptions] and the stretching of the dating frameworks involved is an attractive but questionable practice. All such attempts to link (and hence infer associations between) historic eruptions and environmental phenomena and human "impacts", rely on the accurate and precise association in time of the two events. . . A more general investigation of eruption chronologies constructed since 1970 suggest that such associations are frequently unreliable when based on eruption data gathered earlier than the twentieth century.” 25
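This “odds” argument can be made a bit more concrete with a toy model: treat signal-leaving eruptions as a Poisson process and ask how likely it is that at least one falls, purely by chance, inside a dating window around a target date. The rates and the window width below are assumptions chosen for illustration, not figures from the cited review.

```python
# A toy model of the coincidental-match problem discussed above: if
# eruptions capable of leaving an acid signal occur at some average rate,
# what is the chance that at least one falls inside a dating window of
# +/- W years around a target date purely by chance? Eruption timing is
# modelled as a Poisson process; the rates below are assumptions chosen
# for illustration, not figures from the cited review.

import math

def prob_at_least_one(rate_per_year, window_half_width_years):
    expected = rate_per_year * 2 * window_half_width_years
    return 1.0 - math.exp(-expected)

WINDOW = 20  # e.g. the +/- 20 year uncertainty quoted for the "Thera" signal

for rate in [0.05, 0.1, 0.5]:  # hypothetical rates of signal-leaving eruptions per year
    p = prob_at_least_one(rate, WINDOW)
    print(f"rate {rate}/yr, window +/-{WINDOW} yr: P(coincidental match) = {p:.2f}")
```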
So, if volcanic markers are generally unreliable and completely useless beyond a few thousand years, how are scientists so sure that their ice core dating methods are meaningful? Well, one of the most popular methods used to distinguish annual layers is one that measures the fluctuations in ice core dust. Dust is alkaline and shows up as a low ECM reading. During the dry northern summer, dust particles from Arctic Canada and the coastal regions of Greenland are carried by wind currents and are deposited on the Greenland ice sheet. During the winter, this area is not so dusty, so less dust is deposited during the winter as compared to the summer. This annual fluctuation of dust is thought to be the most reliable of all the methods for the marking of the annual cycle - especially as the layers start to get thinner and thinner as one moves down the column of ice.27 And, it certainly would be one of the most reliable methods if it were not for one little problem known as “post-depositional particle migration”.
Zdanowicz et al., from the University of New Hampshire, did real-time studies of modern atmospheric dust deposition in the 1990s on the Penny Ice Cap, Baffin Island, Arctic Canada. Their findings are most interesting indeed:
“After the snow deposition on polar ice sheets, not all the chemical species preserve the original concentration values in the ice. In order to obtain reliable past-environmental information by firn and ice cores, it is important to understand how post-depositional effects can alter the chemical composition of the ice. These effects can happen both in the most superficial layers and in the deep ice. In the snow surface, post-depositional effects are mainly due to re-emission in the atmosphere and we show here that chloride, nitrate, methane-sulphonic acid (MSA) and H2O2 [hydrogen peroxide] are greatly affected by this process; moreover, we show how the mean annual snow accumulation rate influences the re-emission extent. In the deep ice, post-depositional effects are mainly due to movement of acidic species and it is interesting to note the behavior of some substances (e.g. chloride and nitrate) in acidic (high concentrations of volcanic acid gases) and alkaline (high dust content) ice layers . . . We failed to identify any consistent relationship between dust concentration or size distribution, and ionic chemistry or snowpack stratigraphy.” 28
This study goes on to reveal that each yearly cycle is marked not by one distinct annual dust concentration as is normally assumed when counting ice core layers, but by two distinct dust concentration peaks – one in late winter-spring and another one in the late summer-fall. So, each year is initially marked by “two seasonal maxima of dust deposition.” By itself, this finding cuts in half those ice core dates that assume that each year is marked by only one distinct deposition of dust. This would still be a salvageable problem if the dust actually stayed put once it was deposited in the snow. But, it does not stay put – it moves!
“While some dust peaks are found to be associated with ice layers or Na [sodium] enhancements, others are not. Similarly, variations of the NMD [number mean diameter – a parameter for quantifying relative changes in particle size] and beta cannot be systematically correlated to stratigraphic features of the snowpack. This lack of consistency indicates that microparticles are remobilized by meltwater in such a way that seasonal (and stratigraphic) differences are obscured.” 28
This remobilization of the microparticles of dust in the snow was found to affect both fine and coarse particles in an uneven way. The resulting “dust profiles” displayed “considerable structure and variability with multiple well-defined peaks” for any given yearly deposit of snow. The authors hypothesized that this variability was most likely caused by a combination of factors to include “variations of snow accumulation or summer melt and numerous ice layers acting as physical obstacles against particle migration in the snow.” The authors suggest that this migration of dust and other elements limits the resolution of these methods to “multiannual to decadal averages”.28
Another interesting thing about the dust found in the layers of ice is that those layers representing the last “ice age” contain a whole lot of dust – up to 100 times more dust than is deposited on average today.19 The question is, how does one explain a hundred times as much Ice Age dust in the Greenland icecap with gradualistic, wet conditions? There simply are no unique dust sources on Earth to account for 100 times more dust during the 100,000 years of the Ice Age, particularly when this Ice Age was thought to be associated with a large amount of precipitation/rain – which would only cleanse the atmosphere more effectively. How can high levels of precipitation be associated with an extremely dusty atmosphere for such a long period of time? Isn’t this a contradiction from a uniformitarian perspective? Perhaps a more recent catastrophic model has greater explanatory value?
Other dating methods, such as 14C, 36Cl, and other radiometric techniques, are subject to this same problem of post-depositional diffusion, as well as contamination – especially when the summer melt sends water percolating through the tens and hundreds of layers found in the snowy firn before the snow turns to ice. Then, even after the snow turns to ice, diffusion is still a big problem for these molecules. They simply do not stay put.
A more recent publication by Rempel et al. in Nature (May 2001),32 discussed by J.W. Wettlaufer (University of Washington) in a paper entitled “Premelting and anomalous diffusion in ancient ice,”31 suggests that chemicals that have been trapped in ancient glacial or polar ice can move substantial distances within the ice (up to 50 cm, even in deeper ice where layers get as thin as 3 or 4 millimeters). Such mobility is felt by these scientists to be “large enough to offset the resolution at which the core was examined and alter the interpretation of the ice-core record.” What happens is that, “Substances that are climate signatures - from sea salt to sulfuric acid - travel through the frozen mass along microscopic channels of liquid water between individual ice crystals, away from the ice on which they were deposited. The movement becomes more pronounced over time as the flow of ice carries the substances deeper within the ice sheet, where it is warmer and there is more liquid water between ice crystals. . . The Vostok core from Antarctica, which goes back 450,000 years, contains even greater displacement [as compared to the Greenland ice cores] because of the greater depth.” That means that past analyses of historic climate changes gleaned from ice core samples might not be all that accurate. Wettlaufer specifically notes that, “The point of the paper is to suggest that the ice core community go back and redo the chemistry.”31,32 Of course these scientists do not think that such problems are significant enough to destroy the usefulness of ice cores as a fairly reliable means of determining historical climate changes. But it does make one start to wonder how much confidence one can actually have in the popular interpretations of what ancient ice really means.
Adding to the problems inherent in ice core dating is the significant amount of evidence that the world was a much warmer place just a few thousand years ago. These higher temperatures of the Middle Holocene, or Hypsithermal period, are said to have begun about 9,000 years ago and then started to fade about 4,000 years ago.8,53
So, how “warm” was this warm period? Various studies suggest sustained temperatures in northerly regions, such as the Canadian Northwest Territories, of 3-4°C warmer than today. Studies of sedimentary cores carried out in the North Atlantic between Hudson Strait and Cape Hatteras indicate ocean temperatures of 18°C (versus about 8°C today in this region).54 However, not all regions experienced the same increase in temperature, and the overall average global temperature is thought to have been about 2°C warmer than it is today.55
It also seems that in the fairly recent past the vegetation zones were much closer to the poles than they are today. The remains of some plant species can be found as far as 1,000 km farther north than they are found today. Forests once extended right up to the Barents Coast and the White Sea. The European tundra zones were non-existent. In northern Asia, peat-moss was discovered on Novaya Zemlya. And this was no short-term aberration in the weather; this warming trend seems to have lasted for quite a while.56 Consider also the very interesting suggestion of Prof. Borisov, a long-time meteorology and climatology professor at Leningrad State University:
“During the last 18,000 years, the warming was particularly appreciable during the Middle Holocene. This covered the time period of 9,000 to 2,500 years ago and culminated about 6,000 to 4,000 years ago, i.e., when the first pyramids were already being built in Egypt . . . The most perturbing questions of the stage under consideration are: Was the Arctic Basin iceless during the culmination of the optimum?”8
Professor Borisov asks a very interesting question. What would happen to the ice sheets during several thousand years of a “hypsithermal” warming if it really was some 2°C warmer than it is today? If the Arctic region around the entire globe, including the Arctic Ocean, were ice-free for just a few thousand years, even episodically during the summer months, what would have happened to the ice sheet on Greenland?
Consider what would happen if the entire Arctic Ocean went without ice during the summer months owing to a warmer and therefore longer spring, summer, and fall. Certainly there would be more snowfall, but this would not be enough to prevent the warm rainfall from removing the snow cover and the ice itself from Greenland’s ice sheet. A marine climate would create a more temperate environment because water vapor over the Arctic region would act as a greenhouse gas, holding the day’s heat within the atmosphere.
Borisov goes on to point out that a 1°C increase in average global temperature results in a more dramatic increase in temperature at the poles and extreme latitudes than it does at the equator and more tropical zones. For example, between the years 1890 and 1940, there was a 1°C increase in the average global temperature. During this same time the mean annual temperature in the Arctic basin rose 7°C. This change was reflected more in warmer winters than in warmer summers. For instance, the December temperature rose almost 17°C while the summer temperature changed hardly at all. Likewise, the average winter temperature for Spitsbergen and Greenland rose between 6 and 13°C during this time.8 Along these same lines, an interesting article published in the journal Nature 30 years ago by R. L. Newson showed that, without the Arctic ice cap, winter temperatures would rise 20-40ºC over the Arctic Ocean and 10-20ºC over northern Siberia and Alaska - all other factors being equal.11 M. Warshaw and R. Rapp published similar results in the Journal of Applied Meteorology - using a different circulation model.12
Of course, the real question here is, would a 2°C increase in average global temperature, over today's "global warming" temperatures, melt the ice sheets of Greenland or even Antarctica?
Borisov argued that this idea is not all that far-fetched. He notes that measurements carried out on Greenland’s northeastern glaciers as far back as the early 1950’s showed that they were losing ice far faster than it was being formed.8 The northeastern glaciers were in fact in “ablation” as a result of just a 1°C rise in average global temperature. What would be expected from another 2°C rise sustained over the course of several thousand years?
Since that time, research done by Carl Boggild of the Geological Survey of Denmark and Greenland (GEUS), involving data from a network of 10 automatic monitoring stations, has shown that large portions of the Greenland ice sheet are melting up to 10 times faster than earlier research had indicated.
In 2000, research indicated that the Greenland ice was melting at a conservative estimate of just over 50 cubic kilometers of ice per year. However, studies done by a team from the University of Texas over 18 months from 2005 to 2006, with the use of gravity data collected by satellites, suggest that the "ice cap may be melting three times faster than indicated by previous measurements" from 1997 to 2003. Currently, the ice is melting at 239 cubic kilometers per year (measured from April 2002 to November 2005).52
Greenland covers 2,175,590 square kilometers, with about 85% of that area covered by ice about 2 km thick. That's roughly 3.7 million cubic kilometers of ice. At current rates of melting, it would take roughly 15,000 years to melt all the ice on Greenland. Of course, 15,000 years seems well outside the range of the Hypsithermal period. However, even at current temperatures, the melt rate of the Earth's glaciers, including those of Greenland, is accelerating dramatically - and we still have another 2°C to go. Towns in Greenland are already beginning to sink because of the melting permafrost. Even potatoes are starting to grow in Greenland. This has never happened before in the memory of those who have lived there all their lives.
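As a quick check on that arithmetic, the sketch below recomputes the ice volume and melt time from the figures quoted in the paragraph above (total area, 85% ice coverage, 2 km average thickness, and a melt rate of 239 cubic kilometers per year).

```python
# Back-of-the-envelope melt-time estimate from the figures quoted above.

area_km2 = 2_175_590            # total area of Greenland
ice_fraction = 0.85             # fraction covered by ice
thickness_km = 2.0              # quoted average ice thickness
melt_rate_km3_per_year = 239    # melt rate measured April 2002 - November 2005

ice_volume_km3 = area_km2 * ice_fraction * thickness_km
years_to_melt = ice_volume_km3 / melt_rate_km3_per_year

print(f"ice volume: ~{ice_volume_km3:,.0f} km^3")
print(f"time to melt at the current rate: ~{years_to_melt:,.0f} years")
```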
In April of 2000, Lars Smedsrud and Tore Furevik wrote in an article in the Cicerone magazine, published by the Norwegian Climate Research Centre (CICERO), that, "If the melting of the ice, both in thickness and surface area, does not slow, then it is an established fact that the arctic ice will disappear during this century." This is based on the fact that the Arctic ice thinned by some 40% between the years 1980 and 2000. In the summer of 2006, explorers Lonnie Dupre and Eric Larsen made a very dangerous and most interesting trek to the North Pole. As they approached the Pole they found open water, a lot of icy slush, and ice so thin it wouldn't support their weight.
"We expected to see the ice get better, get flatter, as we got closer to the pole. But the ice was busted up," Dupre said. "As we got closer to the pole, we had to paddle our canoes more and more."51
Walt Meier, a researcher at the U.S. National Snow and Ice Data Center in Boulder, Colorado, commented on these interesting findings, noting that the summer melting of the Arctic ice cap is progressing more rapidly than satellite images alone have shown. Given recent data such as this, climate researchers at the U.S. Naval Postgraduate School in California predict the complete absence of summer ice on the Arctic Ocean by 2030 or sooner.51 That prediction is dramatically different from what scientists were predicting just a few years ago - that the ice would still be there by the end of the century. Consider how a complete loss of Arctic ice, with an average temperature increase over the Arctic Ocean of upwards of 20-40ºC, would affect the temperature of surrounding regions - like Greenland. Could Greenland long retain its ice without the Arctic polar ice?
If this is not convincing enough, consider that since the year 2000, glaciers around the world have continued melting at greater and greater rates - exponentially greater rates. Alaska's glaciers are receding at twice the rate previously thought, according to a study published in the July 19, 2002 issue of Science. Around the globe, sea level is about 6 inches higher than it was just 100 years ago, and the rate of rise is increasing quite dramatically. Leading glaciologist Dr. Mark Meier remarked in February of 2002 that the accepted estimates of sea level rise were understated, due to the rapid retreat of mountain glaciers.44
At the American Association for the Advancement of Science (AAAS) meeting in San Francisco on February 25, 2001, Professor Lonnie Thompson, from Ohio State University's Department of Geological Sciences, presented a paper entitled "Disappearing Glaciers - Evidence of a Rapidly Changing Earth." Dr. Thompson has completed 37 expeditions since 1978 to collect and study perhaps the world's largest archive of glacial ice, cored from the Himalayas, Mount Kilimanjaro in Africa, the Andes in South America, the Antarctic, and Greenland.
Prof. Thompson reported to the AAAS that at least one-third of the massive ice field on top of Tanzania's Mount Kilimanjaro has melted in only the past twelve years. Further, since the first mapping of the mountain's ice in 1912, the ice field has shrunk by 82%. By 2015, there will be no more "snows of Kilimanjaro." In Peru, the Quelccaya ice cap in the southern Andes Mountains is at least 20% smaller than it was in 1963. One of the main glaciers there, Qori Kalis, has been melting at the astonishing rate of 1.3 feet per day. Back in 1963, the glacier covered 56 square kilometers. By 2000, it was down to less than 44 square kilometers, and there is now a new ten-acre lake. Its melt rate has been increasing exponentially, and at the current rate the glacier will be entirely gone between 2010 and 2015 - about the same time that Kilimanjaro's ice disappears.
The exponential nature of this worldwide melt is dramatically illustrated by aerial photographs taken of various glaciers. A series of photographs of the Qori Kalis glacier in Peru is available from 1963 onward. Between 1963 and 1978 the rate of melt was 4.9 meters per year. Between 1978 and 1983 it was 8 meters per year. This increased to 14 meters per year by 1993, to 30 meters per year by 1995, to 49 meters per year by 1998, and to a shocking 155 meters per year by 2000. By 2001 it was up to about 200 meters per year. That's almost 2 feet per day. Dr. Thompson exclaimed, "You can literally sit there and watch it retreat."
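To illustrate the "exponential" description, here is a minimal sketch that fits an exponential curve to the Qori Kalis retreat rates quoted above; assigning each rate to the midpoint or end year of its interval is an assumption of the sketch.

```python
# Fitting an exponential to the Qori Kalis retreat rates quoted above.
# Rates are assigned to the midpoint (or end year) of the interval they
# describe, an assumption of this sketch, and a straight line is fit to
# the logarithm of the rate, i.e. rate ~ exp(a + b * year).

import math

data = [
    (1970.5, 4.9),   # 1963-1978 average
    (1980.5, 8.0),   # 1978-1983 average
    (1993.0, 14.0),
    (1995.0, 30.0),
    (1998.0, 49.0),
    (2000.0, 155.0),
    (2001.0, 200.0),
]

xs = [t for t, _ in data]
ys = [math.log(r) for _, r in data]
n = len(data)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

print(f"fitted growth rate: about {100 * b:.0f}% per year")
print(f"doubling time: about {math.log(2) / b:.1f} years")
```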
Then, in 2001, NASA scientists published a major study, based on satellite and aircraft observations, showing that large portions of the Greenland ice sheet, especially around its margins, were thinning at a rate of roughly 1 meter per year. Other scientists, such as Carl Boggild and his team, have recorded Greenland glaciers thinning at rates as fast as 10 or even 12 meters per year. It came as quite a shock to scientists that satellite data show various Greenland glaciers thinning and retreating in an exponential manner - by an "astounding" 150 meters in thickness in just the last 15 years.43
In both 2002 and 2003, the Northern Hemisphere registered record low ocean ice cover. NASA's satellite data show the Arctic region warmed more during the 1990s than during the 1980s, with Arctic Sea ice now melting by up to 15 percent per decade. Satellite images show the ice cap covering the Northern pole has been shrinking by 10 percent per decade over the past 25 years.45
On the opposite end of the globe, sea ice floating near Antarctica has shrunk by some 20 percent since 1950. One of the world's largest icebergs, B-15, measuring nearly 10,000 square kilometers (4,000 square miles), or half the size of New Jersey, calved off the Ross Ice Shelf in March 2000. The Larsen Ice Shelf has largely disintegrated within the last decade, shrinking to 40 percent of its previously stable size.45 Then, in 2002, the Larsen B ice shelf collapsed. Almost immediately afterward, researchers observed that nearby glaciers started flowing a whole lot faster - up to 8 times faster! This marked increase in glacial flow also resulted in dramatic drops in glacial elevations, lowering them by as much as 38 meters (124 feet) in just 6 months.48
Scientists monitoring a glacier in Greenland, the Kangerdlugssuaq glacier, have found that it is moving into the sea 3 times faster than just 10 years ago. Measurements taken in 1988 and in 1996 show the glacier was moving at a rate of between 3.1 and 3.7 miles per year. The latest measurements, taken the summer of 2005, showed that it is now moving at 8.7 miles a year. Satellite measurements of the Kangerdlugssuaq glacier show that, as well as moving more rapidly, the glacier's boundary is shrinking dramatically. Kangerdlugssuaq is about 1,000 meters (3,280ft) thick, about 4.5 miles wide, extends for more than 20 miles into the ice sheet and drains about 4 per cent of the ice from the Greenland ice sheet. The realization of the rapid melting of such a massive glacier, which was fairly stable until quite recently, came as quite a shock to the scientific community. Professor Hamilton expressed this general surprise in the following comment:
"This is a dramatic discovery. There is concern that the acceleration of this and similar glaciers and the associated discharge of ice is not described in current ice-sheet models of the effects of climate change. These new results suggest the loss of ice from the Greenland ice sheet, unless balanced by an equivalent increase in snowfall, could be larger and faster than previously estimated. As the warming trend migrates north, glaciers at higher latitudes in Greenland might also respond in the same way as Kangerdlugssuaq glacier. In turn, that could have serious implications for the rate of sea-level rise."46
The exponential increase in glacial speed is now thought to be due to increased surface melting. The liquid water formed on the surface during summer melt collects into large lakes. The water pressure generated by these surface lakes forces water down through the icy layers all the way to the underlying bedrock. It then spreads out, lifting the glacier off the bedrock on a lubricating film of liquid water. Obviously, with such lubrication, the glacier can then flow at a much faster rate - exponentially faster. This increase in speed also makes for a thinner glacier, since the glacier becomes more stretched out.46
For example the giant Jakobshavn glacier - at four miles wide and 1,000 feet thick the biggest on the landmass of Greenland - is now moving towards the sea at a rate of 113 feet a year; the "normal" annual speed of a glacier is just one foot. Until now, scientists believed the ice sheet would take 1,000 years to melt entirely, but Ian Howat, who is working with Professor Tulaczyk, says the new developments could "easily" cut this time "in half". 49 Again, this is well within the range of what would have been melted during Hypsithermal warming many times over.
It seems that no one predicted this. No one thought it possible, and scientists are quite shocked by these facts. The amazingly fast rate of glacial retreat simply goes against all the prevailing models of glacial development and change - models in which change generally involves many thousands of years. Who would have thought that such changes could happen in mere decades?
Beyond this, there are many other evidences of a much warmer climate in Greenland and the Arctic basin in the fairly recent past. For example, when Greenland's seas were 10 meters higher than they are today (during the last hypsithermal), warm-water mollusks lived there that today are found only 500 to 750 miles farther south. Also, the remains of land vertebrates, such as various reptiles, are found in Denmark and Scandinavia that today live only in Mediterranean areas.13
“Additional evidence is given by...peats and relics in Greenland--the northern limits may have been displaced northward through several degrees of latitude...and [by] other plants in Novaya Zemlya, and by peat and ripe fruit stones [fruit pits]...in Spitsbergen that no longer ripen in these northern lands. Various plants were more generally distributed in Ellesmere [Island and] birch grew more widely in Iceland....” 13
The point is that these types of plants and these types of large trees should never be able to grow on islands north of the Arctic Circle. Back in 1962, Ivan T. Sanderson noted that, “Pieces of large tree trunks of the types [found] . . . do not and cannot live at those latitudes today for purely biological reasons. The same goes for huge areas of Siberia.”14 Also, as previously noted, fruit does not ripen during short autumns at these high latitudes. Therefore, the spring and summer seasons had to be much longer for any seeds from these temperate trees to germinate and grow. Likewise, the peats that have been found on Greenland require temperate, humid climates to form. Peat formation requires climates that allow for the partial decomposition of vegetable remains under conditions of deficient drainage.13 Also, peat formations require at least 40 inches of rainfall a year and a mean temperature above 32°F.15 In addition, there were temperate forests on the Seward Peninsula, in Alaska, and the Tuktoyaktuk Peninsula, in Canada’s frigid Inuvik Region, facing the Beaufort Sea and the Arctic Ocean, and at Dubawnt Lake, in Canada’s frozen Keewatin Region, west of the Hudson Bay.16 And yet, somehow, it is believed that Greenland’s icecap survived several thousand years in such a recently temperate climate - but how?
What we have are temperate forests and warm waters near and within the Arctic Circle and Ocean all across the northern boundary from Siberia to Norway and from Alaska to the Hudson Bay. These temperate conditions existed for thousands of years both east and west of Greenland and at all the Greenland latitudes around the world - and these conditions had not yet ended by the time the Egyptians were building their pyramids! This, of course, would explain why mammoths and other large animals were able to live, during this period, throughout these northerly regions.
Mammoths are especially interesting since millions of them recently lived (within the last 10-20 thousand years according to mainstream science) well within the Arctic Circle. Although popularly portrayed as living in cold, barren environments and occasionally dying in local events, such as mudslides or entrapment in soft riverbanks, the evidence may actually paint a very different picture if studied from a different perspective.
The well-preserved "mummified" remains of many mammoths, along with those of many other types of warmer-weather animals - such as the horse, lion, tiger, leopard, bear, antelope, camel, reindeer, giant beaver, musk sheep, musk ox, donkey, ibex, badger, fox, wolverine, voles, squirrels, bison, rabbit and lynx - as well as a host of temperate plants, are still being found all jumbled together within the Arctic Circle, along the same latitudes as Greenland all around the globe.39
The problem with the popular belief that millions of mammoths lived in very northerly regions around the entire globe, with estimates of up to 5 million living along a 600 mile stretch of Siberian coastline alone,39 is that these mammoths were still living in these regions within the past 10,000 to 20,000 years. Carbon 14 dating of Siberian mammoths has returned dates as recent as 9,670 ± 40 years before present (BP).41 An even more recent study (1995), carried out on mammoth remains located on Wrangel Island (on the border of the East-Siberian and Chukchi Seas), showed that woolly mammoths persisted on Wrangel Island in the mid-Holocene, from 7390-3730 years ago (i.e., until about 2,000 B.C.).57
So, why is this a problem?
Contrary to popular imagination, these creatures were not surrounded by the extremely cold, harsh environments that exist in these northerly regions today. Rather, they lived in rather lush steppe-type conditions - with evidence of large fruit-bearing trees, abundant grasslands, and the very large numbers and types of grazing animals already mentioned - only to be quickly and collectively annihilated over huge areas by rapid weather changes. Clearly, the present is far different from what even the relatively recent past must have been. Sound too far-fetched?
Consider that the last meal of the famous Berezovka mammoth (see picture above), found north of the Arctic Circle, consisted of "twenty-four pounds of undigested vegetation",39 including over 40 types of plants, many of which are no longer found in such northerly regions.43 The enormous quantity of food it takes to feed an elephant of this size (~300 kg per day) is, by itself, very good evidence for a much different climate in these regions than exists today.39 Consider the following comment by Zazula et al., published in the June 2003 issue of Nature:
"This vegetation [Beringia: Includes an area between Siberia and Alaska as well as the Yukon Territory of Canada] was unlike that found in modern Arctic tundra, which can sustain relatively few mammals, but was instead a productive ecosystem of dry grassland that resembled extant subarctic steppe communities . . .
Abundant sage (Artemisia frigida) leaves, flowers from Artemisia sp., and seeds of bluegrass (Poa), wild-rye grass (Elymus), sedge (Carex) and rushes (Juncus/Luzula) . . . Seeds of cinquefoil (Potentilla), goosefoot (Chenopodium), buttercup (Ranunculus), mustard (Draba), poppy (Papaver), fairy-candelabra (Androsace septentrionalis), chickweed (Cerastium) and campion (Silene) are indicative of diverse forbs growing on dry, open, disturbed ground, possibly among predominantly arid steppe vegetation. Such an assemblage has no modern analogue in Arctic tundra. Local habitat diversity is indicated by sedge and moss peat from deposits that were formed in low-lying wet areas . . .
[This region] must have been covered with vegetation even during the coldest part of the most recent ice age (some 24,000 years ago) because it supported large populations of woolly mammoth, horses, bison and other mammals during a time of extensive Northern Hemisphere glaciation." 42
Now, does it really make sense for this region to be so warm, all year round, while the same latitudes on other parts of the globe were covered with extensive glaciers? Siberia, Alaska, Northern Europe, and parts of northwestern Canada were all toasty warm while much of the remaining North American continent and Greenland were covered with huge glaciers? Really?
Beyond this, consider that the mammoths didn't have hair erector muscles that enable an animal's fur to be "fluffed-up", creating insulating air pockets. They also lacked oil glands to protect against wetness and increased heat loss in extremely cold and damp environments. Animals currently living in Arctic regions have both oil glands and erector muscles. Of course, the mammoth did have a certain number of cold weather adaptations compared to its living cousins, the elephants; such as smaller ears, trunk and tail, fine woolly under-fur and long outer "protective" hair, and a thick layer of insulating fat,39 but these would by no means be enough to survive in the extremes of cold, ice and snow found in these same regions today - not to mention the lack of an adequate food supply. It seems very much as Sir Henry Howorth concluded back in the late 19th century:
"The instances of the soft parts of the great pachyderms being preserved are not mere local and sporadic ones, but they form a long chain of examples along the whole length of Siberia, from the Urals to the land of the Chukchis [the Bering Strait], so that we have to do here with a condition of things which prevails, and with meteorological conditions that extend over a continent.
When we find such a series ranging so widely preserved in the same perfect way, and all evidencing a sudden change of climate from a comparatively temperate one to one of great rigour, we cannot help concluding that they all bear witness to a common event. We cannot postulate a separate climate cataclysm for each individual case and each individual locality, but we are forced to the conclusion that the now permanently frozen zone in Asia became frozen at the same time from the same cause."40
Actually, northern portions of Asia, Europe, and North America contain the remains of extinct species of the elephant [mammoth] and rhinoceros, together with those of horses, oxen, deer, and other large quadrupeds.39 Even though the evidence speaks against the "instant" catastrophic event freeze that some have suggested,39 the weather change was still a real and relatively sudden change to a much colder and much harsher environment compared to the relatively temperate and abundant conditions that once existed in these northerly regions around much of the globe. Is it not then at least reasonable to hypothesize that Greenland also had such a temperate climate in the recent past, losing its icecap completely and growing lush vegetation? If not, how was the Greenland ice sheet able to be so resistant to the temperate climate surrounding it on all sides for hundreds, much less thousands, of years?
A Recently Green Greenland?
Interestingly enough, crushed plant parts have been found in the ice sheets of northeastern Greenland – from a dike ridge of a glacier. This silty plant material was said to give off a powerful odor, like that of decaying organic matter.17 This material was examined for fossils by Esa Hyyppa of the Geological Survey of Finland, who noted the following:
“The silt examined contained two whole leaves, several leaf fragments and two fruits of Dryas octopetala; [also] a small, partly decayed leaf of a shrub species not definitely determinable . . . and an abundance of much decayed, small fragments of plant tissues, mostly leaf veins and root hairs . . . " 17
It is most interesting that scientists think that this plant material must have originated from some superficial deposit in a distant valley floor of Greenland and that this material was squeezed up from the base of the ice. Some scientists have even suggested that “The modern aspect of the flora precludes a preglacial time of origin for it.” 17 Note also that the northeastern corner of Greenland is actually its coldest region. It has a “continental climate that is remote from the influence of the sea.” 18 The ocean dramatically affects climate. That is why regions like the north central portions of the United States have such long, cold winters when compared to equal latitudes along the eastern seaboard. Northeastern Greenland, therefore, would have the coldest climate of the entire island.
Also, consider that in July of 2004, plant material consisting of probable grass or pine needles and bark was discovered at the bottom of the Greenland ice sheet, under about 10,400 feet of ice. Although the material is thought to be several million years old, Dorthe Dahl-Jensen, a professor at the University of Copenhagen's Niels Bohr Institute and NGRIP project leader, noted that finding such plant material under so much ice indicates the Greenland Ice Sheet "formed very fast."38
Beyond the obvious fact that such types of organic material suggest an extremely rapid climatic change and burial by ice, the question is: why hasn't such organic material been stripped completely off Greenland by now by the flowing ice sheets? For instance, we know how fast these ice sheets move - up to 100 meters per year in central regions and up to 10 miles per year for several of Greenland's major glaciers. Given several hundred thousand to a few million years of such scrubbing by moving ice sheets, how could significant amounts of such organic material remain on the surface of Greenland?
In just the last 100 years, Glacier National Park has gone from having over 150 glaciers to just 35 today. And those that remain have already lost over 90% of the volume that they had 100 years ago. For instance, the Qori Kalis Glacier in Peru is shrinking at a rate of 200 meters per year - 40 times as fast as in 1978, when the rate was only 5 meters per year. And it's just one of the hundreds of glaciers that are vanishing.
Ice is also disappearing from the Arctic Ocean and Greenland at an astounding rate that has taken scientists completely off guard. More than a hundred species of animals have been spotted moving to more northerly regions than they usually occupy. Many kinds of temperate plants are also growing much farther north and at higher elevations. Given this surprisingly rapid turn of events, even mainstream scientists are presenting some rather interesting scenarios as to what will happen to massive ice sheets like that found on Greenland in the near future. In some scenarios, the ice on Greenland eventually melts, causing sea levels to rise some 18 feet (~6 meters). Melt the West Antarctic ice sheet as well, and sea levels jump another 18 feet.34 The speed of glacial demise is only recently being appreciated by scientists, who are "stunned" to realize that glaciers all around the world - like those of Mt. Kilimanjaro, the Himalayas just beneath Mt. Everest, the high Andes, the Swiss Alps, and even Iceland - will be completely gone within just 30 years.33 The same thing happened to the Langjokull Ice Cap, in Iceland, during the Hypsithermal, based on benthic diatom data: "Langjokull must have disappeared in the early Holocene for such a diverse, benthic dominated diatom assemblage to flourish."58 It's about to happen again.
Of course, this raises the question of how the ice sheets on Greenland and elsewhere, which are currently melting much faster than they are forming with just a 1°C rise in global temperature, could have survived for several thousand years during the very recent Hypsithermal period, when global temperatures were another 2°C warmer than today and temperatures within the Arctic Circle were between 20 and 40ºC warmer.
First-glance intuition is often very helpful in coming up with a good hypothesis to explain a given phenomenon, such as the hundreds of thousands of layers of ice found in places like Greenland and Antarctica. It seems downright intuitive that each layer found in these ice sheets should represent an annual cycle. After all, this seems to fit the uniformitarian paradigm so well. However, a closer inspection of the data seems to favor a much more recent and catastrophic model of ice sheet formation. Violent weather disturbances with large storms, a sudden cold snap, and high precipitation rates could very reasonably give rise to all the layers, dust bands, isotope variations, etc., that we find in the various ice sheets today.
So, which hypothesis carries more predictive power? Is there more evidence for a much warmer climate all around Greenland in the recent past or for the survival of the Greenland ice sheet, without melting, for hundreds of thousands to millions of years? Both positions cannot be right. One of them has to be wrong. Can all the frozen temperate plants and animals within the Arctic Circle trump the interpretation of ocean core sediments, coral dating, radiometric dating, sedimentation rate extrapolations, isotope matches between ocean and ice cores, and Milankovitch cycles? Most scientists don't think so. Personally, I don't see why not. For me, the evidence of warm-weather animals and plants living all around Greenland, around the entire Arctic Circle, is especially overwhelming.
D.A., Gow, A.J., Alley, R.B., Zielinski, G.A., Grootes, P.M., Ram, K., Taylor, K.C., Mayewski, P.A. and Bolzan, J.F., "The Greenland Ice Sheet Project 2 depth-age scale: Methods and results", Journal of Geophysical Research
Craig H., Horibe Y., Sowers T., "Gravitational Separation of Gases and Isotopes in Polar Ice Caps", Science, 242(4885), 1675-1678, Dec. 23, 1988.
Hall, Fred, "Ice Cores Not That Simple", AEON II: 1, 1989: 199.
P.M. and Stuiver, M., "Oxygen 18/16 variability in Greenland snow and ice with 10^-3 to 10^5-year time resolution", Journal of Geophysical Research
R.B. et al., "Visual-stratigraphic dating of the GISP2 ice core: Basis, reproducibility, and application", Journal of Geophysical Research
Borisov P., Can Man Change the Climate?, trans. V. Levinson (Moscow, U.S.S.R.), 1973.
"Santorini Volcano Ash, Traced Afar, Gives a Date of 1623 BC," The New York Times [New York] (June 7, 1994): C8.
Britannica, Macropaedia, 19 vols., "Etna (Mount)," (Chicago, Illinois, 1982), Vol. 6, p. 1017.
R. L. Newson, "Response of a General Circulation Model of the Atmosphere to Removal of the Arctic Icecap," Nature (1973): 39-40.
M. Warshaw and R. R. Rapp, "An Experiment on the Sensitivity of a Global Circulation Model," Journal of Applied Meteorology 12 (1973):
B., The Quaternary Era, London, England, 1957, Vol. II, p. 1494.
Sanderson, The Dynasty of ABU, New York, 1962, p. 80.
Brooks C. E. P., Climate Through the Ages, 2nd ed., New York, 1970, p. 297.
Pielou E. C., After the Ice Age, Chicago, Illinois, 1992, p. 279.
Boyd, Louise A., The Coast of Northeast Greenland, American Geological Society Special Publication No. 30, New York, 1948: p. 132.
"Glaciology (1): The Balance Sheet or the Mass Balance," Venture to the Arctic, ed. R. A. Hamilton, Baltimore, Maryland, 1958, p. 175 and Table I,
Hammer et al., "Continuous Impurity Analysis Along the Dye 3 Deep Core," American Geophysical Union Monograph 33 (1985): 90.
Laurence R. Kittleman, "Tephra," Scientific American, p. 171, New York, December, 1979.
- July 2000
Zdanowicz CM, Zielinski GA, Wake CP, "Characteristics of modern atmospheric dust deposition in snow on the Penny Ice Cap, Baffin Island, Arctic Canada", Climate Change Research Center, Institute for the Study of Earth, Oceans and Space, University of New Hampshire, Tellus, 50B, 506-520, 1998. (http://www.ccrc.sr.unh.edu/~cpw/Zdano98/Z98_paper.html)
Lorius C., Jouzel J., Ritz C., Merlivat L., Barkov N. I., Korotkevitch Y. S. and Kotlyakov V. M., "A 150,000-year climatic record from Antarctic ice", Nature, 316, 1985, 591-596.
Barbara Stenni, Valerie Masson-Delmotte, Sigfus Johnsen, Jean Jouzel, Antonio Longinelli, Eric Monnin, Regine Röthlisberger, Enrico Selmo, "An Oceanic Cold Reversal During the Last Deglaciation", Nature 280:644, 1979.
Wettlaufer, J.W., Premelting and anomalous diffusion in ancient ice, FOCUS session, March 16, 2001.
Rempel, A., Wettlaufer, J., Waddington E., Worster, G., "Chemicals in ancient ice move, affecting ice cores", Nature, May 31, 2001. (http://unisci.com/stories/20012/0531012.htm) (http://www.washington.edu/newsroom/news/2001archive/05-01archive/k053001.html)
The Olympian, "National Park's Famous Glaciers Rapidly Disappearing", Sunday, November 24, 2002. (http://www.theolympian.com/home/news/20021124/northwest/14207.shtml)
John Carey, Global Warming - Special Report, BusinessWeek, August 16, 2004, pp 60-69. ( http://www.businessweek.com )
Zielinski et al., "Record of Volcanism Since 7000 B.C. from the GISP2 Greenland Ice Core and Implications for the Volcano-Climate System", Science Vol. 264 pp. 948-951, 13 May 1994
Zielinski and Germani, "New Ice-Core Evidence Challenges the 1620s BC Age for the Santorini (Minoan) Eruption", Journal of Archaeological Science 25 (1998), pp. 279-289
Identification of Aniakchak (Alaska) tephra in Greenland ice core challenges the 1645 BC date for Minoan eruption of Santorini", Geochem. Geophys. Geosyst., 5, Q03005, doi:10.1029/2003GC000672. March, 2004 ( http://www.agu.org/pubs/crossref/2004/2003GC000672.shtml ), "
Jim Scott, "Greenland ice core project yields probable ancient plant remains", University of Colorado Press Release, 13 August 2004 ( http://www.eurekalert.org/pub_releases/2004-08/uoca-gic081304.php )
Michael J. Oard, "The extinction of the woolly mammoth: was it a quick freeze?" ( http://www.answersingenesis.org/Home/Area/Magazines/tj/docs/tj14_3-mo_mammoth.pdf )
Henry H. Howorth, The Mammoth and the Flood (London: Samson Low, Marston, Searle, and Rivington, 1887), pp. 96
Mol, Y. Coppens, A.N. Tikhonov, L.D. Agenbroad, R.D.E. Macphee, C. Flemming, A. Greenwood, B Buigues, C. De Marliave, B. van Geel, G.B.A. van Reenen, J.P. Pals, D.C. Fisher, D. Fox, "The Jarkov Mammoth: 20,000-Year-Old carcass of Siberian woolly mammoth Mammuthus Primigenius" (Blumenbach, 1799), The World of Elephants - International Congress, Rome 2001 ( http://www.cq.rm.cnr.it/elephants2001/pdf/305_309.pdf )
Grant D. Zazula, Duane G. Froese, Charles E. Schweger, Rolf W. Mathewes, Alwynne B. Beaudoin, Alice M. Telka, C. Richard Harington, John A Westgate, "Palaeobotany: Ice-age steppe vegetation in east Beringia", Nature 423, 603 (05 June 2003) ( http://www.sfu.ca/~qgrc/zazula_2003b.pdf )
Shukman, David, Greenland Ice-Melt 'Speeding Up', BBC News, UK Edition, 28 July, 2004. ( http://news.bbc.co.uk/1/hi/world/europe/3922579.stm )
Gary Braasch, Glaciers and Glacial Warming, Receding Glaciers, 2005. ( http://www.worldviewofglobalwarming.org/pages/glaciers.html )
Jerome Bernard, Polar Ice Cap Melting at Alarming Rate, COOLSCIENCE, Oct. 24, 2003 ( http://cooltech.iafrica.com/science/280851.htm )
Steve Connor, Melting Greenland Glacier May Hasten Rise in Sea Level, Independent - Common Dreams News Center, July 25, 2005 ( http://www.commondreams.org/headlines05/0725-02.htm )
Animation of Eastern Alp Glacial Retreat, Institut für Fernerkundung und Photogrammetrie Technische Universität Graz, Last accessed, September, 2005 ( Play Video )
Lynn Jenner, Glaciers Surge When Ice Shelf Breaks Up, National Aeronautics and Space Administration (NASA), September 21, 2004. ( Link )
Geoffrey Lean, The Big Thaw, Znet, accessed 2/06 (Link)
Zbigniew Jaworowski, Another Global Warming Fraud Exposed: Ice Core Data Show No Carbon Dioxide Increase, 21st Century, Spring 1997. ( Link ) and in a Statement written for a Hearing before the US Senate Committee on Commerce, Science, and Transportation, Climate Change: Incorrect information on pre-industrial CO2, March 19, 2004 ( Link )
Don Behm, Into the spotlight: Leno, scientists alike want to hear explorer's findings, Journal Sentinel, July 21, 2006 ( Link )
Kelly Young, Greenland ice cap may be melting at triple speed, NewScientist.com News Service, August 10, 2006 ( Link )
L. D. Keigwin, J. P. Sachs, Y. Rosenthal, and E. A. Boyle, The 8200 year B.P. event in the slope water system, western subpolar North Atlantic, Paleoceanography, Vol. 20, PA2003, doi:10.1029/2004PA001074, 2005 ( Link )
Nicole Petit-Maire, Philippe Bouysse, and others, Geological records of the recent past, a key to thenear future world environments, Episodes, Vol. 23, no. 4, December 2000 ( Link )
Harvey Nichols, Open Arctic Ocean Commentary, Climate Science: Roger Pielke Sr. Research Group Weblog, July 12, 2006 ( Link )
Vartanyan S.L., Kh. A. Arslanov, T.V. Tertychnaya, and S.B. Chernov, Radiocarbon Dating Evidence for Mammoths on Wrangel Island, Arctic Ocean, until 2000 BC, Radiocarbon Volume 37, Number 1, 1995, pp. 1-6. ( Link )
Black, J.L., Miller, G.H., and Geirsdottir, A., Diatoms as Proxies for a Fluctuating Ice Cap Margin, Hvitarvatn, Iceland, AGU Meeting - abstract, 2005 ( Link )
The following is from an E-mail exchange with C. Leroy Ellenberger, best known as a one-time advocate, but now a prolific critic of controversial writer/catastrophist Immanuel Velikovsky. My response follows:
July 26, 2007:
Talbott STILL does not GET IT concerning the ability of the ice at high altitude at the summit of the Greenland ice cap to have survived the global warming that occurred during the Hypsithermal period. Just because it was six or so degrees warmer at sea level during that time DOES NOT AUTOMATICALLY mean that it was six degrees warmer at the high altitude at Greenland's summit, due to adiabatic cooling; or even if it were six degrees warmer at the summit does not mean that the summer temperature necessarily got above freezing. AS I said in July 1994, we can ski Hawaii and Chile even while the folks at sea level are basking almost naked in the sun. And besides, the cores contain NO INDICATION that such wholesale melting, draining away untold number of annual layers, has even happened at the summit of Greenland in the past 110,000 years. Period. As Paul Simon sez in "The Boxer": "The man hears what he wants to hear and disregards the rest." That is Dave Talbott, "clueless in the mythosphere".
I would also like to point out what Robert Grumbine told me earlier today: if, say, 10,000 annual layers were melted, as Talbott would like to believe, then it would have been impossible for Bob Bass to get the high correlation he did between the signals in ice core profiles and Milankovitch cycles.
Leroy

High altitude doesn't seem to be a helpful argument when it comes to explaining the preservation of the Greenland ice sheet during the thousands of years (6-7 kyr) of Hypsithermal (Middle Holocene) warming. Why? Because what supports the high altitude of the ice in Greenland? Obviously, it is the ice itself.

I mean really, note that the altitude of the ice sheet in Greenland is about 2,135 meters. Now, consider that about 2,000 meters of this altitude is made up of the thickness of the ice itself. If you warm up this region so that the lower altitudes start to melt, the edges are going to start receding at a rate that is faster than the replacement of the total ice lost. In short, the total volume of the ice will decrease and the ice sheet will become thinner as it flows peripherally. This will reduce the altitude of the ice sheet and increase the total amount of surface area exposed to the warmer temperatures. This cycle will only increase over the time of increased warmness.

Consider this in the light of what is happening to the ice sheet in Greenland today, with only a one degree increase in the average global temperature over the past 100 years or so. Currently, the ice is melting at 239 cubic kilometers per year (measured from April 2002 to November 2005). And we aren't yet close to the average global warmness thought to have been sustained during the Hypsithermal (another 3 to 5 degrees warmer). If that's not a problem, I don't know what is. But, as you pointed out, "A man hears what he wants to hear and disregards the rest." - but I suppose you are immune to this sort of human bias?

As far as Milankovitch Cycles and the fine degree of correlation achieved, ever hear of "tuning"? If not, perhaps it might be interesting to look into just a bit. Milankovitch cycles seem, to me at least, to have a few other rather significant problems as well.

Sean Pitman
Sean . . .
While your logic is unassailable, it is based on a false premise concerning how much warming occurred during the mid-Holocene Hypsithermal period. (1) Contrary to what you and Charles Ginenthal claim (coincidentally or not), there was no ca. 5 degree rise in average global temperature during the Hypsithermal, more generally known as the Atlantic period, from ca. 6000 B.C. to 3000 B.C. This 5 degrees is a figure that was derived for the rise in temperature in Europe, according to the source cited by Ginenthal. The consensus among climatologists is that the average global rise in temperature in the Hypsithermal/Atlantic period was about one degree, which we are seeing now. (2) However, regardless of what the temperature rise might have been, another line of evidence contradicts your and Ginenthal's position. The hundreds of sediment cores extracted from the bottom of the Arctic Ocean indicate that during the past 70,000 years the Arctic Ocean has never been ice free and therefore never warm enough for all the melting you, Ginenthal, and Talbott claim allegedly happened. I urge you to read Mewhinney's Part 2 of "Minds in Ablation" and see if your dissertation on ancient ice does not need some revision or dismantling. It would appear that Dave Talbott's gloating in his email to this list at Thu, 26 Jul 2007 20:20:19-0400 (EDT), was not only premature, but totally unjustified.
Richard Alley [author of The Two Mile Time Machine] received my email while he was en route to Greenland, but he took the time to send the following reply, for which I am most grateful and which is above my request:
"Modelers such as P. Huybrechts have looked into this. In the models, there exist solutions in which somewhat smaller and steeper ice sheets are stable; warming causes melting back of the margins but not enough melting across the cold top of the ice sheet to generate abundant meltwater runoff. Averaged over the last few decades, iceberg calving has removed about half of the snow accumulation on Greenland, and melting the other half. Warming causing retreat would pull the ice largely or completely out of the ocean, thus reducing or eliminating the loss by calving; without losing icebergs, less snowfall is required to maintain the ice sheet, so stability is possible with more melting. Too much warming and the ice sheet no longer has a steady solution. The model results shown in our review paper are relevant." - Alley, R.B., P.U. Clark, P. Huybrechts and I. Joughin. Ice-sheet and sea-level changes. Science 310: 456-460 (2005).
On 7/26/07 8:17 PM, "Leroy Ellenberger"
I appreciate your response. It seems to me though that you are now simply throwing out anything that comes to mind to see if it will fly. First you argue that the altitude of Greenland would preserve the ice sheet in a warm environment for thousands of years. But, now that you see that this argument is untenable, you have now decided that it must not have been that warm during the Hypsithermal?
I've read through Mewhinney's "Minds in Ablation" several times now in my consideration of this topic. To be frank, I don't see where Mewhinney convincingly deals with the topic of the Hypsithermal warm period. For example, you argue that there was only a significant rise in temperature (relative to today) in Europe. Beyond this, you suggest that the overall average global temperature during the Hypsithermal was about the same as the average global temperature today.
Well, it seems to me like there are at least a few potential problems here. The first problem is that even at current global temperatures, the Greenland Ice sheet is in ablation at a rate that would easily melt it well within the time frame of the Hypsithermal period - several times over. Also, the notion that only Europe experienced significantly increased temperatures doesn't seem to gel quite right with the available facts.
Harvey Nichols, back in the late 60s, published a study of the history of the "Canadian Boreal forest-tundra ecotone". This study "suggested that the arctic tree-line had moved northwards 350 to 400 km beyond its modern position (extending soils evidence collected by Irving and Larsen, in Bryson et al. 1965, ref. 6) during the mid-Holocene warm period, the Hypsithermal. The climatic control of the modern arctic tree-line indicated that prolonged summer temperature anomalies of ~ + 3 to 4 C were necessary for this gigantic northward shift of the tree-line, thus fulfilling Budyko's temperature requirement for the melting of Arctic Ocean summer ice pack. A more extensive peat stratigraphic and palynological study (Nichols, 1975, ref. 7) confirmed and extended the study throughout much of the Canadian Northwest Territories of Keewatin and Mackenzie, with a paleo-temperature graph based on fossil pollen and peat and timber macrofossil analyses. This solidified the concept of a +3.5 to 4 degree (+/- 0.5) C summer warming, compared to modern values, for the Hypsithermal episode 3500 BP back at least to 7000 before present, again suggesting that by Budyko's (1966) calculations there should have been widespread summer loss of Arctic Ocean pack ice. By this time J.C. Ritchie and F. K. Hare (1971, ref.8) had also reported timber macrofossils from the far northwest of Canada's tundra from even earlier in the Hypsithermal."
Harvey Nichols (1967a) "The post-glacial history of vegetation and climate at Ennadai Lake, Keewatin, and Lynn Lake, Manitoba (Canada)", Eiszeitalter und Gegenwart, vol. 18, pp. 176 - 197.
H. Nichols (1967b) "Pollen diagrams from sub-arctic central Canada", Science 155, 1665 - 1668.
These "warm" features are not limited to Canada or Europe, but can be seen around the entire Arctic Circle. Large trees as well as fruit bearing trees and peat bogs, all of which have been dated as being no older than a few tens of thousands of years, are found along the northern most coasts of Russia, Canada, and Europe - often well within the boundaries of the Arctic Circle. Millions of Wholly Mammoth along with horse, lion, tiger, leopard, bear, antelope, camel, reindeer, giant beaver, musk sheep, musk ox, donkey, ibex, badger, fox, wolverine, voles, squirrels, bison, rabbit and lynx as well as a host of temperate plants are still being found all jumbled together within the Artic Circle - along the same latitudes as Greenland all around the globe. Again, the remains of many of these plants and animals date within a few tens of thousands of years ago. Yet, their presence required much warmer conditions within the Arctic Circle than exist today - as explained by Nichols above.
And, this problem isn't limited to the Hypsithermal period. Speaking of the area between Siberia and Alaska as well as the Yukon Territory of Canada Zazula et al said, "[This region] must have been covered with vegetation even during the coldest part of the most recent ice age (some 24,000 years ago) because it supported large populations of woolly mammoth, horses, bison and other mammals during a time of extensive Northern Hemisphere glaciation."
Grant D. Zazula, Duane G. Froese, Charles E. Schweger, Rolf W. Mathewes, Alwynne B. Beaudoin, Alice M. Telka, C. Richard Harington, John A Westgate, "Palaeobotany: Ice-age steppe vegetation in east Beringia", Nature 423, 603 (05 June 2003) ( http://www.sfu.ca/~qgrc/zazula
I don't get it? Was it much warmer than today all the way around the Arctic Circle, everywhere, and still cold in Greenland? How is such a feat achieved?
As far as your "other lines of evidence" they all seem shaky to me in comparison to the overwhelming evidence of warm-weather plants and animals living within the Arctic Circle within the last 20kyr or so.
The patterns of sedimentary cores are, by the way, subject to the very subjective process of "tuning" - as noted in my essay on Milankovitch Theory.
Richard Alley's argument that smaller "steeper" ice sheets are more stable during warm periods doesn't make any sense to me. Ice sheets flow. They don't remain "steep" or all humped up like Half-Dome. To significantly increase the "steepness" or "slope" of the Greenland ice sheet, the overall size of the sheet would have to be reduced from over 2,000,000 square kilometers to just a few thousand to make a significant difference in the overall "steepness" of the Greenland ice sheet.
Even today the Greenland ice is melting quite rapidly across most of "the top" as well as the sides. It is also melting in such a way that the surface meltwater percolates down through the entire ice sheet to create vast lakes at the bottom - lubricating the ice sheet and making it flow even faster. Just because it doesn't reach the ocean before it melts and turns into water doesn't mean that less ice is melting than before - i.e., just because it is flowing as liquid water instead of "calving" into the ocean.
No, I'm afraid you, Mewhinney, and Alley have a long way to go to explain some of these interesting problems - at least to my own satisfaction. It seems like you all accept certain interpretations based on a limited data set while failing to seriously consider a significant amount of evidence that seems to fundamentally counter your position in a very convincing manner.
Thanks again for your thoughts. I did find them very interesting.
You raise many points here in your rejoinder, some of which distort what I wrote. I have neither the resources nor the time to explore all the points you raise, if indeed they need to be explored considering Jim Oberg's remark to Warner Sizemore in a late 1978 letter about not needing to chase every hare Velikovsky set loose to know that Worlds in Collision is bogus. And for all the points you raise, many of them interesting about exotic Arctic conditions and so forth, you do not, as I see it, come to grips with the testimony from the Arctic Ocean sediment cores which indicate that body of water has never been ice free in the past 70,000 years, as would be the case if climate were as warm as you claim. This has to be a boundary condition on your speculations despite all the
botanical and faunal activity in the Arctic during that time.
Sure, the Pleistocene and early Holocene were interesting environments whose conditions we have difficulty understanding, and doubly so as we project our own experiences on that extinct epoch. As an example of a distortion of what I wrote, I did not claim Europe was the only area that warmed five degrees during the Hypsithermal, merely that it was the area mentioned in the source Ginenthal used to justify his claim of a global warming that large. As for the demise of the Pleistocene megafauna that seems to interest you so much, I can do no better than R. Dale Guthrie's Frozen Fauna of the Mammoth Steppe (U. Chi. Press, 1990) and William White's three-part critique in Kronos XI: 1-3, which focusses on the extravagant claims made over the years about the catastrophic demise and preservation of the frozen mammoths. White was rebutting my defense of the Sanderson-Velikovsky school of mammoth extinction earlier in Kronos. Oh yes, and do not forget William R. Farrand's 1961 classic "Frozen Mammoths and Modern Geology", SCIENCE 133, 729-35. I leave you with the closing quote of my previous email: "Mundus vult decipi ergo decipiatur".
Sincerely, C. Leroy Ellenberger
Dear Leroy,

If you aren't interested in seriously considering some of the main points I've raised, that's up to you. It is just that so far I haven't seen anyone present any significantly cogent arguments against the evidence for a very warm and iceless Arctic Circle and Ocean in the recent past.

You say I've not considered the evidence of the ocean cores, but I have considered this evidence. It is just that your interpretation of the ocean sediment cores seems to fly in the face of the overwhelming interpretation of the existence of warm-weather animal and plant life within the entire Arctic Circle in the recent past. Both interpretations simply can't be right. One has to win out over the other. It all boils down to which perspective carries with it the greatest degree of predictive power. Consider this in the light of the following interpretation of ocean cores taken from the Barents Sea (i.e., part of the Arctic Ocean):

"Marine sediment cores [taken in the Barents Sea] representing the entire Holocene yielded foraminifera which showed that a temperature optimum (the early Hypsithermal) developed between 7800 and 6800 BP, registering prolonged seasonal (summer) ice free conditions, and progressing to 3700 BP with temperatures similar to those of today, after which a relatively abrupt cooling occurred." [emphasis mine]

J-C Duplessy, E. Ivanova, I. Murdmaa, M. Paterne, and L. Labeyrie (2001): "Holocene paleoceanography of the northern Barents Sea and variations of the northward heat transport by the Atlantic Ocean", Boreas, vol. 30, no. 1, pp. 2-16.

So, there you have it. How then can you argue that ocean core sediments conclusively support your contention that the Arctic Ocean has "never been ice free for the past 70,000 years"? Now, is that really true - given the above reference?

Also, I don't see that it matters what killed off the mammoths for the purposes of this discussion - catastrophic or otherwise. That has nothing to do with the fact that these creatures and many others lived in lush warm environments for a long period of time above and around the significant majority of the Arctic Circle in the recent past. This is an overwhelming fact with an equally overwhelming conclusion that makes it very hard to imagine how Greenland could have remained frozen the whole time.

If you have something as far as real evidence or a reasonable explanation, I'd be quite interested. Otherwise, I'm not into a discussion that is mostly about who can list off the most pejoratives. That might be fun, but I'm really not up for that sort of thing . . .

Sean Pitman
Pulling a couple of points out of Pitman's latest 2 notes.
And the ultimate argument -- it doesn't make sense to Pitman. Yet even though he quotes Alley's argument, he doesn't see the effect. Half the ice that is lost from Greenland today is lost by calving of icebergs. Icebergs aren't meltwater. Meltwater is the other half of the mass loss.
The size of the ice sheet depends on the balance between income (accumulation) and outgo (melting, iceberg calving). No iceberg calving halves the outgo, letting the income side win out until the ice sheet gets so large that it starts calving again.
Ice sheets do indeed flow. What Pitman has missed is covered well in the Paterson reference I made earlier. Namely, the flow rate depends on the temperature of the ice. Colder ice doesn't flow as fast as warm. A second feature he missed is that ice is an excellent insulator. It takes time for warmer conditions at the surface to warm the temperatures in the interior of the ice sheet. Enough time that parts of the Greenland ice sheet still 'remember' glacial maximum temperatures. Much more of the sheet would have remembered the glacial maximum conditions several kya than currently do so, and the ice would have been correspondingly stiffer, leading to more steeply-sloped sides.
All the preceding, though, is aside. The real point is that in talking about the Greenland ice sheet's melting away during the hypsithermal, Pitman is making a _prediction_, not an observation. Yet he and some others are taking his prediction as observed fact. The preceding merely sheds a little light on what quality of prediction he made.
More to the point is that if he were engaging in science, the thing he'd do following his prediction of the obliteration of the Greenland
ice sheet is look for evidence that it had actually happened. Forests and mammoths don't show obliteration of ice sheets, so all that is irrelevant except as clues to what motivated the prediction.
One good way of determining that Greenland had melted away is to find those extra 6 meters of sea level that it represents. Yet, in fact, the sea levels are higher now than any time in the last ~100 ky, including during the hypsithermal.
In the second note:
The Barents Sea today is ice-free in the summer, yet there is a perennial Arctic sea ice pack. It was also ice free -- in summer -- during the 'little ice age'. The Barents Sea is marginal for sea ice packs, so doesn't carry a perennial ice cover. William Chapman, at the University of Illinois, has a nice web site on sea ice conditions called 'the cryosphere today'. The National Snow and Ice Data Center carries more data, some Scandinavian records back several centuries included.
Robert,

Thank you for your thoughts. However, they still don't seem to solve the problem - as I understand it anyway.

You present the seemingly obvious argument that the total ice lost from the Greenland ice sheet is the result of half melting and half calving. Obviously then, if the ice melts to a point where there is no more calving, half the loss is removed and the accumulation rate can keep up.

Superficially this seems like an obvious conclusion. The problem here is that this argument does not take into account the etiology of calving - i.e., the flow of ice all the way to the ocean. If the ice sheet melts to such an extent that it no longer reaches the ocean, the ice sheet itself would have to be quite thin. Thick ice creates a lot of pressure on itself and flows over time at a rate that is fast enough to reach the ocean before it melts. The ice isn't going to be cold enough to make it "stiff" enough so that it doesn't flow at at least its current rate of flow (contrary to your suggestion). Also, the flow rate is only going to be increased over the current rate with increased areas of surface melting. This is due to the increased lubrication of the ice sheet from the percolation of liquid water from the surface to the base of the ice sheet.

Also, the ice sheet isn't going to get much "steeper" than it already is. Why not? Because the thickness of the Greenland ice sheet is about 2 km, but around 1500 x 1500 km (2,175,590 sq km) in surface area. How does one create a relatively "steep" ice sheet unless the ice sheet one is thinking about is less than a few tens of km in maximum diameter?

In short, it seems to me that it is the flow rate that is key, not the calving rate. Ice is lost at the flow rate regardless of the calving rate. Therefore, if the melting of the ice is so great that the flow rate cannot keep up in a way that allows calving into the ocean, this does not indicate a "halving" of the rate at which ice is lost from the sheet at all. It simply indicates that the flow rate cannot keep up with the increased melt rate. At this point the ice sheet would have become so thin that a much greater surface area would be exposed to summer melt - dramatically increasing the average yearly loss of ice as well as the flow rate (due to the lubrication effect).

This sort of thing is already happening today. In the illustration below, note the significant increase in the area of summer melt in Greenland between 1992 and 2005, contributing to about 240 cubic kilometers of ice lost, per year, by 2005.
This feature will only be enhanced as the Arctic region continues to warm - still well shy of the warmth experienced in this region during the thousands of years of the Hypsithermal. Pretty soon the entire sheet will be subject to summer melting. I'm sorry, but this increased melt rate over the entire sheet isn't going to be overcome by a decline in calving rate. That just doesn't happen. There simply is no example of such a thing as far as I am aware. But, I'd be very interested in any reference of such an observation or model to the contrary.

Also, the notion that the Arctic Ocean was covered with ice during the Hypsithermal is significantly undermined by current melt rates of the Arctic Ocean ice. At current rates the ice will be pretty much gone well within 50 years (see figure below). In fact, Walt Meier, a researcher at the U.S. National Snow and Ice Data Center in Boulder, Colorado, commenting on these interesting findings, notes that the melting of the Arctic ice cap in summer is progressing more rapidly than satellite images alone have shown. Given recent data such as this, climate researchers at the U.S. Naval Postgraduate School in California predict the complete absence of summer ice on the Arctic Ocean by 2030 or sooner.

Don Behm, Into the spotlight: Leno, scientists alike want to hear explorer's findings, Journal Sentinel, July 21, 2006 ( Link )

That's only about 20 years away. And you think all the evidence that the Hypsithermal was even warmer within the entire Arctic region isn't enough to suspect that the Arctic Ocean was probably ice free then, just as it is going to be in very short order today? It stretches one's credulity to think otherwise - does it not? Yet, you argue that forests, mammoths, peat bogs, and warm-water forams are "irrelevant" to this question - even when they appear within the Arctic Circle? Really?

Thanks for your efforts though. But, I must say . . . I for one still don't "get it".

Sean Pitman

Consider also that fairly recent evidence has come to light that mammoths survived on Wrangel Island (located on the border of the East-Siberian and Chukchi Seas) until 2,000 B.C. That's right. This is no joke.
Robert Grumbine presented an interesting challenge: "One good way of determining that Greenland had melted away is to find those extra 6 meters of sea level that it represents. Yet, in fact, the sea levels are higher now than any time in the last ~100 ky, including during the hypsithermal."
Well, as it turns out, this observation has been made and reported by several scientists, including Nguyễn Văn Bách and Phạm Việt Nga of the Institute of Oceanography, NCNST. These authors report the following findings:
The study results of depositional environments provide with informations to reconstruct the sea-level positions in the last 6,000 years. Here, it must be admitted that in the time of 6,000 years or so before present in Trường Sa region, the sea-level was higher than the present by 5 - 6m. That's why several coral reefs have the top surfaces of 5m in height. Nowadays, the most of scientific works touching upon Holocene sea-level changes support the conclusion that the sea-level was at +5m dated 6,000 years BP [1, 6]. Thus, in Trường Sa Sea for the last 6,000 years BP sea-level has moved up and down 4 times (Fig.6) in a drop trend. The curve in Fig.6 is deduced from the study results of sedimentary sequence and stratigraphic, pollen-spores, chemical analysis and sedimentary basin analysis.
Nguyễn Văn Bách, Phạm Việt Nga, Holocene sea-level changes in Trường Sa archipelago area, Institute of Oceanography, NCNST, Hoàng Quốc Việt, Cầu Giấy, Hà Nội, July 9, 2001 ( Link )
Robert Grumbine responds:
Obliterating Greenland is a matter of global sea level, not merely local, so let's see [whether] the paper is about global sea level: . . .
Worse, w.r.t. Pitman's cherry-picking, is that the same paper does include global sea level curves which do show that global sea level has not been several meters greater than present any time in the past 10ky (one stops there, the one that goes farther back shows no such higher sea level for the past 125 ky -- its limit).
Not content to cherry-pick only a single local curve, it also turns out that he cherry-picked _which_ local curve. Figure 3 shows (and labels it so) Regional Sea Level curve, one curve for 'data scattered along the Vietnamese coast', and one for the Hoang So area. The latter accord with the global curve and aren't mentioned by Pitman. Fig 4 shows a curve for the Malaysia peninsula, which shows less sea level change than the scattered Vietnamese stations (4 m peak, vs the 5-6 Pitman quotes for the Vietnamese -- someone unknowledgeable but curious about the science would wonder why there were such very large differences over such small areas 0, 4, 5 meters in three nearby areas).
The authors, even in translation, are clear about what they were doing and what they found. They found some interesting features in their local area. What they did not do, or attempt, or challenge, was to construct a global sea level curve. What's interesting, for their work, is that while global sea level was flat the last 6 ky, their area has been oscillating up and down.
Robert,

And that's the whole point. "Global" sea level curves for the Holocene seem to be extrapolations from regional sea level curves - curves that can vary widely, with all kinds of theories as to the reasons for the regional differences. Some argue that:

"The probability is strong that mid-Holocene eustatic sea level was briefly a meter or two higher than the present sea level, although separating isostatic and eustatic effects remains an impediment to conclusively demonstrating how much global ice volume was reduced. . . In summary, when relative sea-level records are reconstructed from paleoclimatological methods, all coastlines exemplify to one degree or another the complex processes confronting inhabitants of coastlines of Scandinavia, Chesapeake Bay, and Louisiana. Multiple processes can cause observed sea-level changes along any and all coastlines and uncertainty remains when attributing cause to reconstructed sea-level trends. Only through additional relative sea-level records (including sorely needed records from the LGM and early deglaciation), better glaciological budgets, and improved geophysical and glacial models will the many factors that control sea-level change be fully decoupled."

Also note the Fairbridge curve with its multiple Holocene blips several meters above today's "global" levels (Fig. 2).

This hardly sounds to me like conclusive science - something upon which one can make very definitive negative or positive statements concerning the likely melt or non-melt of Greenland's ice. There's just not sufficient positive or negative predictive power. In short, it seems rather difficult to use such evidence, "cherry pick" if you will, and pretty much ignore the very strong evidence for a much warmer Arctic in the recent past than exists today - and the implications of this evidence for the survival of the Greenland ice sheet for thousands of years.

Sean Pitman
Current concepts of late Pleistocene sea level history, generally referred to the 14C time scale, differ considerably1. Some authors2,3 assume that the sea level at about 30,000 BP was comparable with that of the present and others4,5 assume a considerably lower sea level at that time. We have now obtained 14C dates from in situ roots and peat which indicate that the sea level was lowered eustaticly to at least 40−60 m below the present level between 36,000 and 10,000 BP. The sea level rose from -13 m to about +5 m from 8,000 to 4,000 BP and then approached its present level. [emphasis added]
M. A. GEYH*, H. STREIF* & H.-R. KUDRASS, Sea-level changes during the late Pleistocene and Holocene in the Strait of Malacca, Nature 278, 441 - 443 (29 March 1979); doi:10.1038/278441a0 ( Link )
The Way of the Java/Conditionals graphics recursion
Conditionals, graphics and recursion
The modulus operator
The modulus operator works on integers (and integer expressions) and yields the remainder when the first operand is divided by the second. In Java, the modulus operator is a percent sign, %. The syntax is exactly the same as for other operators:
int quotient = 7 / 3;
int remainder = 7 % 3;
The first operator, integer division, yields 2. The second operator yields 1. Thus, 7 divided by 3 is 2 with 1 left over.
The modulus operator turns out to be surprisingly useful. For example, you can check whether one number is divisible by another: if x%y is zero, then x is divisible by y.
Also, you can use the modulus operator to extract the rightmost digit or digits from a number. For example, x % 10 yields the rightmost digit of x (in base 10). Similarly x % 100 yields the last two digits.
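For instance, both uses might look something like the following sketch (the variable names are chosen only for illustration):

int x = 1234;
if (x % 2 == 0) {
  System.out.println ("x is divisible by 2");
}
int lastDigit = x % 10;       // yields 4
int lastTwoDigits = x % 100;  // yields 34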
In order to write useful programs, we almost always need the ability to check certain conditions and change the behavior of the program accordingly. Conditional statements give us this ability. The simplest form is the if statement:
if (x > 0) {
  System.out.println ("x is positive");
}
The expression in parentheses is called the condition. If it is true, then the statements in brackets get executed. If the condition is not true, nothing happens.
The condition can contain any of the comparison operators, sometimes called relational operators:
x == y    // x equals y
x != y    // x is not equal to y
x > y     // x is greater than y
x < y     // x is less than y
x >= y    // x is greater than or equal to y
x <= y    // x is less than or equal to y
Although these operations are probably familiar to you, the syntax Java uses is a little different from mathematical symbols like =, ≠ and ≤. A common error is to use a single = instead of a double ==. Remember that = is the assignment operator, and == is a comparison operator. Also, there is no such thing as =< or =>.
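As a quick illustrative sketch (the variables here are made up for the example), a correct comparison looks like this; note that the assignment form would not even compile inside an if:

int x = 5;
int y = 5;
if (x == y) {               // comparison: true when x and y hold the same value
  System.out.println ("x and y are equal");
}
// if (x = y) ...           // wrong: = assigns rather than compares, and the compiler
//                          // rejects it here because the result is an int, not a boolean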
The two sides of a condition operator have to be the same type. You can only compare ints to ints and doubles to doubles. Unfortunately, at this point you can't compare Strings at all! There is a way to compare Strings, but we won't get to it for a couple of chapters.
A second form of conditional execution is alternative execution, in which there are two possibilities, and the condition determines which one gets executed. The syntax looks like:
if (x%2 == 0) {
  System.out.println ("x is even");
} else {
  System.out.println ("x is odd");
}
If the remainder when x is divided by 2 is zero, then we know that x is even, and this code prints a message to that effect. If the condition is false, the second print statement is executed. Since the condition must be true or false, exactly one of the alternatives will be executed.
As an aside, if you think you might want to check the parity (evenness or oddness) of numbers often, you might want to "wrap" this code up in a method, as follows:
public static void printParity (int x) {
    if (x%2 == 0) {
        System.out.println ("x is even");
    } else {
        System.out.println ("x is odd");
    }
}
Now you have a method named printParity that will print an appropriate message for any integer you care to provide. In main you would invoke this method as follows:
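(The invocation itself is missing from this copy of the text; presumably it is a single line like the one below, where 17 is just an example value.)

printParity (17);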
Always remember that when you invoke a method, you do not have to declare the types of the arguments you provide. Java can figure out what type they are. You should resist the temptation to write things like:
int number = 17;
printParity (int number);    // wrong: do not repeat the type of the argument
Sometimes you want to check for a number of related conditions and choose one of several actions. One way to do this is by chaining a series of ifs and elses:
if (x > 0) {
    System.out.println ("x is positive");
} else if (x < 0) {
    System.out.println ("x is negative");
} else {
    System.out.println ("x is zero");
}
These chains can be as long as you want, although they can be difficult to read if they get out of hand. One way to make them easier to read is to use standard indentation, as demonstrated in these examples. If you keep all the statements and squiggly-brackets lined up, you are less likely to make syntax errors and you can find them more quickly if you do.
In addition to chaining, you can also nest one conditional within another. We could have written the previous example as:
if (x == 0) {
    System.out.println ("x is zero");
} else {
    if (x > 0) {
        System.out.println ("x is positive");
    } else {
        System.out.println ("x is negative");
    }
}
There is now an outer conditional that contains two branches. The first branch contains a simple print statement, but the second branch contains another conditional statement, which has two branches of its own. Fortunately, those two branches are both print statements, although they could have been conditional statements as well.
Notice again that indentation helps make the structure apparent, but nevertheless, nested conditionals get difficult to read very quickly. In general, it is a good idea to avoid them when you can.
On the other hand, this kind of nested structure is common, and we will see it again, so you better get used to it.
The return statement
The return statement allows you to terminate the execution of a method before you reach the end. One reason to use it is if you detect an error condition:
public static void printLogarithm (double x) {
    if (x <= 0.0) {
        System.out.println ("Positive numbers only, please.");
        return;
    }

    double result = Math.log (x);
    System.out.println ("The log of x is " + result);
}
This defines a method named printLogarithm that takes a double named x as a parameter. The first thing it does is check whether x is less than or equal to zero, in which case it prints an error message and then uses return to exit the method. The flow of execution immediately returns to the caller and the remaining lines of the method are not executed.
I used a floating-point value on the right side of the condition because there is a floating-point variable on the left.
You might wonder how you can get away with an expression like "The log of x is " + result, since one of the operands is a String and the other is a double. Well, in this case Java is being smart on our behalf, by automatically converting the double to a String before it does the string concatenation.
This kind of feature is an example of a common problem in designing a programming language, which is that there is a conflict between formalism, which is the requirement that formal languages should have simple rules with few exceptions, and convenience, which is the requirement that programming languages be easy to use in practice.
More often than not, convenience wins, which is usually good for expert programmers (who are spared from rigorous but unwieldy formalism), but bad for beginning programmers, who are often baffled by the complexity of the rules and the number of exceptions. In this book I have tried to simplify things by emphasizing the rules and omitting many of the exceptions.
Nevertheless, it is handy to know that whenever you try to "add" two expressions, if one of them is a String, then Java will convert the other to a String and then perform string concatenation. What do you think happens if you perform an operation between an integer and a floating-point value?
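A small sketch of both conversions (the variable name is ours):

double result = Math.log (10.0);
System.out.println ("The log of x is " + result);   // the double is converted to a String
System.out.println (1 + 2.5);                        // the int is converted to a double; prints 3.5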
Slates and Graphics objects
In order to draw things on the screen, you need two objects, a Slate and a Graphics object.
- Slate: a Slate is a window that contains a blank rectangle you can draw on. The Slate class is not part of the standard Java library; it is something I wrote for this course.
- Graphics: the Graphics object is the object we will use to draw lines, circles, etc. It is part of the Java library, so the documentation for it is on the Sun web site.
The methods that pertain to Graphics objects are defined in the built-in Graphics class. The methods that pertain to Slates are defined in the Slate class, which is shown in Appendix slate.
The primary method in the Slate class is makeSlate, which does pretty much what you would expect. It creates a new window and returns a Slate object you can use to refer to the window later in the program. You can create more than one Slate in a single program.
Slate slate = Slate.makeSlate (500, 500);
makeSlate takes two arguments, the width and height of the window. Because it belongs to a different class, we have to specify the name of the class using "dot notation."
The return value gets assigned to a variable named slate. There is no conflict between the name of the class (with an upper-case S) and the name of the variable (with a lower-case s).
The next method we need is getGraphics, which takes a Slate object and creates a Graphics object that can draw on it. You can think of a Graphics object as a piece of chalk.
Graphics g = Slate.getGraphics (slate);
Using the name g is conventional, but we could have called it anything.
Invoking methods on a Graphics object
In order to draw things on the screen, you invoke methods on the graphics object. We have invoked lots of methods already, but this is the first time we have invoked a method on an object. The syntax is similar to invoking a method from another class:
g.setColor (Color.black);
The name of the object comes before the dot; the name of the method comes after, followed by the arguments for that method. In this case, the method takes a single argument, which is a color.
setColor changes the current color, in this case to black. Everything that gets drawn will be black, until we use setColor again.
Color.black is a special value provided by the Color class, just as Math.PI is a special value provided by the Math class. Color, you will be happy to hear, provides a palette of other colors, including:
black blue cyan darkGray gray lightGray magenta orange pink red white yellow
To draw on the Slate, we can invoke draw methods on the Graphics object. For example:
g.drawOval (x, y, width, height);
drawOval takes four integers as arguments. These arguments specify a bounding box, which is the rectangle in which the oval will be drawn (as shown in the figure). The bounding box itself is not drawn; only the oval is. The bounding box is like a guideline. Bounding boxes are always oriented horizontally or vertically; they are never at a funny angle.
If you think about it, there are lots of ways to specify the location and size of a rectangle. You could give the location of the center or any of the corners, along with the height and width. Or, you could give the location of opposing corners. The choice is arbitrary, but in any case it will require the same number of parameters: four.
By convention, the usual way to specify a bounding box is to give the location of the upper-left corner and the width and height. The usual way to specify a location is to use a coordinate system.
You are probably familiar with Cartesian coordinates in two dimensions, in which each location is identified by an x-coordinate (distance along the x-axis) and a y-coordinate. By convention, Cartesian coordinates increase to the right and up, as shown in the figure.
Annoyingly, it is conventional for computer graphics systems to use a variation on Cartesian coordinates in which the origin is in the upper-left corner of the screen or window, and the direction of the positive y-axis is down. Java follows this convention.
The unit of measure is called a pixel; a typical screen is about 1000 pixels wide. Coordinates are always integers. If you want to use a floating-point value as a coordinate, you have to round it off to an integer (See Section rounding).
A lame Mickey Mouse
Let's say we want to draw a picture of Mickey Mouse. We can use the oval we just drew as the face, and then add ears. Before we do that it is a good idea to break the program up into two methods. main will create the Slate and Graphics objects and then invoke draw, which does the actual drawing.
public static void main (String[] args) {
    int width = 500;
    int height = 500;

    Slate slate = Slate.makeSlate (width, height);
    Graphics g = Slate.getGraphics (slate);
    g.setColor (Color.black);
    draw (g, 0, 0, width, height);
}

public static void draw (Graphics g, int x, int y, int width, int height) {
    g.drawOval (x, y, width, height);
    g.drawOval (x, y, width/2, height/2);
    g.drawOval (x+width/2, y, width/2, height/2);
}
The parameters for draw are the Graphics object and a bounding box. draw invokes drawOval three times, to draw Mickey's face and two ears. The following figure shows the bounding boxes for the ears.
[Figure: the bounding boxes for Mickey's face and the two ears]
As shown in the figure, the coordinates of the upper-left corner of the bounding box for the left ear are (x, y). The coordinates for the right ear are (x+width/2, y). In both cases, the width and height of the ears are half the width and height of the original bounding box.
Notice that the coordinates of the ear boxes are all relative to the location (x and y) and size (width and height) of the original bounding box. As a result, we can use draw to draw a Mickey Mouse (albeit a lame one) anywhere on the screen in any size. As an exercise, modify the arguments passed to draw so that Mickey is one half the height and width of the screen, and centered.
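One possible answer to that exercise (our sketch, not the book's) is to change the invocation in main to:

draw (g, width/4, height/4, width/2, height/2);

The bounding box is then half the window's width and height, with its upper-left corner a quarter of the way in from the left and from the top, which centers the figure.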
Other drawing commands
Another drawing command with the same parameters as drawOval is
drawRect (int x, int y, int width, int height)
Here I am using a standard format for documenting the name and parameters of methods. This information is sometimes called the method's interface or prototype. Looking at this prototype, you can tell what types the parameters are and (based on their names) infer what they do. Here's another example:
drawLine (int x1, int y1, int x2, int y2)
The use of parameter names x1, x2, y1 and y2 suggests that drawLine draws a line from the point (x1, y1) to the point (x2, y2).
One other command you might want to try is
drawRoundRect (int x, int y, int width, int height, int arcWidth, int arcHeight)
The first four parameters specify the bounding box of the rectangle; the remaining two parameters indicate how rounded the corners should be, specifying the horizontal and vertical diameter of the arcs at the corners.
There are also ``fill versions of these commands, that not only draw the outline of a shape, but also fill it in. The interfaces are identical; only the names have been changed:
fillOval (int x, int y, int width, int height)
fillRect (int x, int y, int width, int height)
fillRoundRect (int x, int y, int width, int height, int arcWidth, int arcHeight)
There is no such thing as fillLine---it just doesn't make sense. Lines are one-dimensional.
I mentioned in the last chapter that it is legal for one method to call another, and we have seen several examples of that. I neglected to mention that it is also legal for a method to invoke itself. It may not be obvious why that is a good thing, but it turns out to be one of the most magical and interesting things a program can do.
For example, look at the following method:
public static void countdown (int n) {
    if (n == 0) {
        System.out.println ("Blastoff!");
    } else {
        System.out.println (n);
        countdown (n-1);
    }
}
The name of the method is countdown and it takes a single integer as a parameter. If the parameter is zero, it prints the word Blastoff. Otherwise, it prints the number and then invokes a method named countdown---itself---passing n-1 as an argument.
- What happens if we invoke this method, in main, like this: countdown (3);
- The execution of countdown begins with n=3, and
since n is not zero, it prints the value 3, and then invokes itself, passing 3-1...
- The execution of countdown begins with n=2, and
since n is not zero, it prints the value 2, and then invokes itself, passing 2-1...
- The execution of countdown begins with n=1, and
since n is not zero, it prints the value 1, and then invokes itself, passing 1-1...
- The execution of countdown begins with n=0, and
since n is zero, it prints the word Blastoff! and then returns.
- The countdown that got n=1 returns.
- The countdown that got n=2 returns.
- The countdown that got n=3 returns.
And then you're back in main (what a trip). So the total output looks like:
3
2
1
Blastoff!
As a second example, let's look again at the methods newLine and threeLine.
public static void newLine () {
    System.out.println ("");
}

public static void threeLine () {
    newLine ();  newLine ();  newLine ();
}
Although these work, they would not be much help if I wanted to print 2 newlines, or 10⁶. A better alternative would be
public static void nLines (int n) {
    if (n > 0) {
        System.out.println ("");
        nLines (n-1);
    }
}
This program is very similar; as long as n is greater than zero, it prints one newline, and then invokes itself to print n-1 additional newlines. Thus, the total number of newlines that get printed is 1 + (n-1), which usually comes out to roughly n.
The process of a method invoking itself is called recursion, and such methods are said to be recursive.
Stack diagrams for recursive methods
In the previous chapter we used a stack diagram to represent the state of a program during a method call. The same kind of diagram can make it easier to interpret a recursive method.
Remember that every time a method gets called it creates a new instance of the method that contains a new version of the method's local variables and parameters.
There is one instance of main and four instances of countdown, each with a different value for the parameter n. The bottom of the stack, countdown with n=0 is the base case. It does not make a recursive call, so there are no more instances of countdown.
The instance of main is empty because main does not have any parameters or local variables. As an exercise, draw a stack diagram for nLines, invoked with the parameter n=4.
Convention and divine law
In the last few sections, I used the phrase "by convention" several times to indicate design decisions that are arbitrary in the sense that there are no significant reasons to do things one way or another, but dictated by convention.
In these cases, it is to your advantage to be familiar with convention and use it, since it will make your programs easier for others to understand. At the same time, it is important to distinguish between (at least) three kinds of rules:
- Divine law: This is my phrase to indicate a rule that is true because of some underlying principle of logic or mathematics, and that is true in any programming language (or other formal system). For example, there is no way to specify the location and size of a bounding box using fewer than four pieces of information. Another example is that adding integers is commutative. That's part of the definition of addition and has nothing to do with Java.
- Rules of Java: These are the syntactic and semantic rules of Java that you cannot violate, because the resulting program will not compile or run. Some are arbitrary; for example, the fact that the + symbol represents addition and string concatenation. Others reflect underlying limitations of the compilation or execution process. For example, you have to specify the types of parameters, but not arguments.
- Style and convention: There are a lot of rules that are not enforced by the compiler, but that are essential for writing programs that are correct, that you can debug and modify, and that others can read. Examples include indentation and the placement of squiggly braces, as well as conventions for naming variables, methods and classes.
As we go along, I will try to indicate which category various things fall into, but you might want to give it some thought from time to time.
While I am on the topic, you have probably figured out by now that the names of classes always begin with a capital letter, but variables and methods begin with lower case. If a name includes more than one word, you usually capitalize the first letter of each word, as in newLine and printParity. Which category are these rules in?
Glossary

- modulus: An operator that works on integers and yields the remainder when one number is divided by another. In Java it is denoted with a percent sign (%).
- conditional: A block of statements that may or may not be executed depending on some condition.
- chaining: A way of joining several conditional statements in sequence.
- nesting: Putting a conditional statement inside one or both branches of another conditional statement.
- coordinate: A variable or value that specifies a location in a two-dimensional graphical window.
- pixel: The unit in which coordinates are measured.
- bounding box: A common way to specify the coordinates of a rectangular area.
- typecast: An operator that converts from one type to another. In Java it appears as a type name in parentheses, like (int).
- interface: A description of the parameters required by a method and their types.
- prototype: A way of describing the interface to a method using Java-like syntax.
- recursion: The process of invoking the same method you are currently executing.
- infinite recursion: A method that invokes itself recursively without ever reaching the base case. The usual result is a StackOverflowException.
- fractal: A kind of image that is defined recursively, so that each part of the image is a smaller version of the whole. | http://en.m.wikibooks.org/wiki/The_Way_of_the_Java/Conditionals_graphics_recursion | 13
63 | Plane Geometry
An Adventure in Language and Logic
We explained in the Introduction that it is not possible to prove every statement. Nevertheless, we should prove as many statements as possible. Which is to say, the statements on which the proofs are based should be as few as possible. They are called the First Principles. There are three categories of them: Definitions, Postulates, and Axioms or Common Notions. We will follow each with a brief commentary.
Definitions

1. An angle is the inclination to one another of two straight lines that meet.
2. The point at which two lines meet is called the vertex of the angle.
3. If a straight line that stands on another straight line makes the adjacent angles equal, then each of those angles is called a right angle; and the straight line that stands on the other is called a perpendicular to it.
4. An acute angle is less than a right angle. An obtuse angle is greater than a right angle.
6. Rectilinear figures are figures bounded by straight lines. A triangle is bounded by three straight lines, a quadrilateral by four, and a polygon by more than four straight lines.
7. A square is a quadrilateral in which all the sides are equal, and all the angles are right angles. A regular polygon has equal sides and equal angles.
8. An equilateral triangle has three equal sides. An isosceles triangle has two equal sides. A scalene triangle has three unequal sides.
9. The vertex angle of a triangle is the angle opposite the base.
10. The height of a triangle is the straight line drawn from the vertex perpendicular to the base.
11. A right triangle is a triangle that has a right angle.
12. Figures are congruent when, if one of them were placed on the other, they would exactly coincide. (Congruent figures are thus equal to one another in all respects.)
14. A parallelogram is a quadrilateral whose opposite sides are parallel
15. A circle is a plane figure bounded by one line, called the circumference, such that all straight lines drawn from a certain point within the figure to the circumference, are equal to one another.
16. And that point is called the center of the circle.
17. A diameter of a circle is a straight line through the center and terminating in both directions on the circumference. A straight line from the center to the circumference is called a radius; plural, radii.
Postulates

Grant the following:
1. To draw a straight line from any point to any point.
2. To extend a straight line for as far as we please in a straight line.
3. To draw a circle whose center is the extremity of any straight line, and whose radius is the straight line itself.
4. All right angles are equal to one another.
5. If a straight line that meets two straight lines makes the interior angles on the same side less than two right angles, then those two straight lines, if extended, will meet on that same side.
(That is, if angles 1 and 2 together are less than two right angles, then the straight lines AB, CD, if extended far enough, will meet on that same side; which is to say, AB, CD are not parallel.)
Axioms or Common Notions
1. Things equal to the same thing are equal to one another.
2. If equals are joined to equals, the wholes will be equal.
3. If equals are taken from equals, what remains will be equal.
4. Things that coincide with one another are equal to one another.
5. The whole is greater than the part.
6. Equal magnitudes have equal parts; equal halves, equal thirds, and so on.
Commentary on the Definitions
A definition describes what is being defined, and gives it a name. A definition has the character of a postulate, because we must agree to use the name in precisely that way. A definition however does not assert that what has that name exists -- and we may not assume it does. For, just as the postulates and axioms -- the statements we do not prove -- must be as few as possible, so must be the assumptions that things exist. We may assume, for example, that points exist, but very little else. A definition is required only to be understood.
And so an "equilateral triangle" is defined. But the very first proposition shows that what has that name exists. How does it show it? By presenting the logical steps that allow us to construct a figure that satisfies the definition. The definition of an equilateral triangle correctly describes something we can actually draw.
Now, an equilateral triangle exists as an idea, for we have understood the definition. But for a defined term to exist for mathematics, it must mean more than that. It must mean that we are able to bring it into this world. With any definition, we must either postulate the existence of what has been defined (that is done in the case of a circle, Postulate 3), or we must prove it.
By maintaining the logical separation of the definition of a thing and its existence, mathematics becomes a science in the same way physics is a science. Physics must show that the things of which it speaks actually exist.
A definition is reversible. That means that when the conditions of the definition are satisfied, then we may use that word. And conversely, if we use that word, that implies those conditions have been satisfied. A definition is equivalent to an if and only if sentence.
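As a small illustration (ours, not the text's), the definition of a square can be read as an if-and-only-if sentence:

$$\text{ABCD is a square} \iff \text{ABCD is a quadrilateral with equal sides and right angles.}$$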
Note that the definition of a right angle says nothing about measurement, about 90°. Plane geometry is not the study of how to apply arithmetic to figures. In geometry we are concerned only with what we can see and reason directly, not through computation. A most basic form of knowledge is that two magnitudes are simply equal -- not that they are both 90° or 9 meters.
How can we know when things are equal? That is one of the main questions of geometry. The definition (and existence) of a circle provides our first way of knowing that two straight lines could be equal. Because if we know that a figure is a circle, then we would know that any two radii are equal. (Definitions 15 and 17.)
We have not formally defined a point, although Euclid does. ("A point is that which has no part." That is, it is indivisible. Most significantly, Euclid adds, "The extremities of a line are points.") And we have not defined a "line," although again Euclid does. ("A line is length without breadth.") Euclid defines them because they are elementary objects of geometry, and their existence is assumed. (Postulate 1, however, guarantees the existence of a straight line.) But since there is never occasion to prove that something is a point or a line, a definition of one is not logically required.
Commentary on the Postulates
We require that the figures of geometry -- the triangles, squares, circles -- be more than ideas. We must be able to draw them. The fact that we can draw a figure is what permits us to say that it exists. For, as we have noted in the Commentary on the Definitions, we may not assume that what we have called a "triangle" or a "circle" actually exists.
The first three Postulates narrowly set down what we are permitted to draw. Everything else we must prove. Each of those Postulates is therefore a "problem" -- a construction -- that we are asked to consider solved: "Grant the following."
The instruments of construction are straightedge and compass. Postulate 1, in effect, asks us to grant that what we draw with a straightedge is a straight line. Postulate 3 asks us to grant that the figure we draw with a compass is a circle. And so we will then have an actual figure that refers to the idea of a circle, rather than just the word "circle."
Note, finally, that the word all, as in "all right angles" or "all straight lines," refers to all that exist, that is, all that we have actually drawn. Geometry -- at any rate Euclid's -- is never just in our mind.
Commentary on the Axioms or Common Notions
The distinction between a postulate and an axiom is that a postulate is about the specific subject at hand, in this case, geometry; while an axiom is a statement we acknowledge to be more generally true; it is in fact a common notion. Yet each has the same logical function, which is to authorize statements in the proofs that follow.
Implicit in these Axioms is our very understanding of equal versus unequal, which is: Two magnitudes of the same kind are either equal or one of them is greater.
So, these Axioms, together with the Definitions and Postulates, are the first principles from which our theory of figures will be deduced.
| http://www.themathpage.com/abookI/first.htm | 13
50 | Dr. Mcleod introduced us to a new activity called Math Tutorial. As you know, Math has a lot of different categories, like fractions, decimals, etc. I paired up with Adrian and our topic was about Circumference, Perimeter, Area and Volume. We used the Mcleod Inquiry Framework to research the facts of Circumference, Perimeter, Area and Volume. We planned a big inquiry question: What do we know about Area, Volume, Perimeter and Circumference to gain a better understanding of our number system? We also made some supporting questions:
- What is area?
- What is volume?
- What is the relationship between area and volume?
- What is Circumference and how do you find it?
- How do you find out the area of a square, a triangle, a circle, a rectangle, a rhombus and a trapezoid?
- Area of irregular 2-d shapes?
- How do you find out the volume of a cube, a sphere, a pyramid, a rectangular prism, a triangular prism and a cylinder?
- Volume of irregular 3-d shapes?
- What is perimeter and how do you find it?
For collecting data, we listed the sources and copied the information in. We put it in a table with sources on the side and supporting questions on the top.
For analyzing data, we wrote a paragraph about each supporting question, like this:
What is Perimeter and how do you find it?
Most sites say that Perimeter is the distance around the outside of a shape. Some websites said "Perimeter can be found by adding all the sides together."
For Conclusions, unlike the investigation, I only answered the BIQ (big inquiry question).
What do we know about Area, Volume, Perimeter and Circumference to gain a better understanding of our number system?
Area, Perimeter, Volume and Circumference are all about measuring shapes. Perimeter and Circumference are about measuring the outside of a shape, and Area and Volume are about measuring the inside of a shape.
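For reference (these formulas are our addition, not part of the original post), the standard formulas behind the supporting questions look like this:

$$P_{\text{rectangle}} = 2(l + w), \qquad A_{\text{rectangle}} = l \cdot w, \qquad C_{\text{circle}} = 2\pi r, \qquad A_{\text{circle}} = \pi r^{2}, \qquad V_{\text{rectangular prism}} = l \cdot w \cdot h.$$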
For presenting, we made a keynote presentation, but our presentation went pretty wrong.
After this, I reflected on it: Adrian and I did not do a good job in presenting because we used the wrong application (when recorded in Keynote, the presentation starts from the beginning if stopped). We had a lot of knowledge about this topic, but we did not deliver the information well. So next time, I need to work on my delivery skills.
Throughout this presentation, not only did the class learn something, I learnt something too! I learnt how it feels to conduct a presentation, so I tried to use my delivery skills, but I could have improved on them. I also gained more knowledge about area, perimeter, circumference and volume. I also learnt the connection between all these 4 words too! I will try to improve my delivery skills and not only use Keynote for presentations. | http://sites.cdnis.edu.hk/students/074170/?p=520 | 13
202 | Trigonometry and Geometry Conversions
Ratios for sum angles
As the examples showed, sometimes we need angles other than 0, 30, 45, 60, and 90 degrees. In this chapter you need to learn two things:
1. Sin(A + B) is not equal to sin A + sin B. It doesn't work like removing the parentheses in algebra.
2. The formula for what sin(A + B) does equal.
First to show that removing parentheses doesn't "work." Here: make A 30 degrees and B 45 degrees. Sin 30 is 0.5. Sin 45 is 0.7071. Adding the two is 1.2071.
You know that no sine (or cosine) can be more than 1. Why? the ratio has the hypotenuse as its denominator. The most that the numerator can be is equal to the denominator. A sine or cosine can never be greater than 1, so a value of 1.2071 must be wrong.
Wanted sine, cosine, or tangent, of whole angle (A + B)
Finding sin(A + B)
The easiest way to find sin(A + B), uses the geometrical construction shown here. The big angle, (A + B), consists of two smaller ones, A and B, The construction (1) shows that the opposite side is made of two parts. The lower part, divided by the line between the angles (2), is sin A. The line between the two angles divided by the hypotenuse (3) is cos B. Multiply the two together. The middle line is in both the numerator and denominator, so each cancels and leaves the lower part of the opposite over the hypotenuse (4).
Notice the little right triangle (5). The shaded angle is A, because the line on its top side is parallel to the base line. Similar right triangles with an angle A show that the top angle, marked A, also equals the original A. The top part of the opposite (6), over the longest side of that shaded triangle, is cos A. The opposite over the main hypotenuse (7) is sin B. Since the side marked "opposite" (7) is in both the numerator and denominator when cos A and sin B are multiplied together, cos A sin B is the top part of the original opposite — for (A + B) — divided by the main hypotenuse (8).
Now, put it all together (9). Sin(A + B) is the two parts of the opposite - all divided by the hypotenuse (9). Putting that into its trig form: sin(A + B) = sin A cos B + cos A sin B.
Finding cos(A + B)
A very similar construction finds the formula for the cosine of an angle made with two angles added together.
Using the same construction (1), notice that the adjacent side is the full base line (for cos A), with part of it subtracted at the right. Each part must use the same denominator, the hypotenuse of the (A + B) triangle.
The full base line, divided by the dividing line between angles A and B, is cos A (2). This dividing line, divided by the hypotenuse of the (A + B) triangle, is cos B (3). So, the full base line divided by the hypotenuse is the product cos A cos B (4).
Now, for the little part that has to be subtracted. The shaded part (5) represents sin A, which, multiplied by the shaded part (6), sin B, produces the other piece you need (7). The subtraction produces cos(A + B) (8) so that the formula we need is:
cos(A + B) = cos A cos B - sin A sin B
Finding tan(A + B)
A complete geometric derivation of the formula for tan(A + B) is complicated. An easy way is to derive it from the two formulas that you have already done. In any angle, the tangent is equal to the sine divided by the cosine. Using that fact, tan(A + B) = sin(A + B)/cos(A + B). In a way that does it, but you can expand that to:
tan(A + B) = [sin A cos B + cos A sin B]/[cos A cos B - sin A sin B]
Divide through top and bottom by cos A cos B, which turns all the terms into tangents, giving:
tan(A + B) = [tan A + tan B]/[1 - tan A tan B]
Ratios for 75 degrees
Show the ratios for sine, cosine, and tangent by substituting into the sum formula, then reducing the result to its simplest form, before evaluating the surds. After making the basic substitutions in each case, the rough work is in shading - to show how the result is reduced to the simplest form for evaluation.
If you use your pocket calculator for evaluation, it will probably make no difference whether you simplify the expressions first or just plow through it! Everything depends on the calculator: some do make a difference, some don't!
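As a check on the sum formulas (this evaluation is ours; the book's shaded rough work is not reproduced here), taking A = 30 degrees and B = 45 degrees:

$$\sin 75^\circ = \sin 30^\circ \cos 45^\circ + \cos 30^\circ \sin 45^\circ = \frac{\sqrt{2} + \sqrt{6}}{4} \approx 0.9659$$
$$\cos 75^\circ = \cos 30^\circ \cos 45^\circ - \sin 30^\circ \sin 45^\circ = \frac{\sqrt{6} - \sqrt{2}}{4} \approx 0.2588$$
$$\tan 75^\circ = \frac{\tan 30^\circ + \tan 45^\circ}{1 - \tan 30^\circ \tan 45^\circ} = 2 + \sqrt{3} \approx 3.732$$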
Ratios of angles greater than 90 degrees
So far, ratios of acute angles (between 0 and 90 degrees) have been considered. Other triangles with obtuse angles (over 90 degrees) might go over 180 degrees in later problems. To simplify classification of angles according to size, they are divided into quadrants.
A quadrant is a quarter of a circle. Since the circle is commonly divided into 360 degrees, the quadrants are named by 90-degree segments. 0-90 degrees is the 1st quadrant, 90-180 the 2nd, 180-270 the 3rd, and 270-360 the 4th.
Drawing in lines to represent the quadrant boundaries, with 0 or 360 horizontal to the right, 90 vertical up, 180 horizontal to the left, and 270 vertical down. Now, use this method for plotting graphs.
Progressively larger angles are defined by a rotating vector, starting from zero and rotating counterclockwise. Horizontal elements are x: positive to the right, negative to the left. Vertical elements are y. positive up, negative down. The rotating vector is r. So, the sine of an angle is y/r, the cosine x/r, and the tangent y/x. The vector r is always positive. So, the sign of the ratios can be figures for the various quadrants.
Here, the signs of the three ratios have been tabulated for the four quadrants. Also how the equivalent angle in the first quadrant "switches" as the vector passes from one quadrant to the next. In the first quadrant, the sides were defined in the ratios for sine, cosine, and tangent. As you move into bigger angles in the remaining quadrants, the opposite side is always the vertical (y). What was called the adjacent is always the horizontal (x). The hypotenuse is always the rotating vector (r). You will begin to see a pattern to the way these trigonometric ratios for angles vary.
Ratios in the four quadrants
Ratios for difference angles
Now, you have two ways to obtain formulas for difference angles. First, use a geometric construction, such as the one that was used for sum angles, reversing it so that (A - B) is the angle B subtracted from the angle A.
In reasoning similar to that which was used for the sum angles, presented here somewhat abbreviated, are the sine and cosine formulas:
sin(A - B) = sin A cos B - cos A sin B
cos(A - B) = cos A cos B + sin A sin B
Sum and difference formulas
The second method of finding the formula for difference angles uses the sum formula already obtained, but makes B negative. From our investigation of the signs for the various quadrants, negative angles from the 1st quadrant will be in the 4th quadrant. Making this substitution produces the same results that were arrived at geometrically in the previous section.
Finding the tangent formula follows the same method, either going through substitution into the sine and cosine formulas, or more directly, by making tan(-B) = - tan B. Either way you get:
tan(A - B) = [tan A - tan B]/[1 + tan A tan B]
Ratios through the four quadrants
You can deduce a few more ratios with the sum and difference formulas. You already did ratios for 75 degrees. Now, do those for 15 degrees. These formulas give ratios for angles at 15-degree intervals through the four quadrants. Plotting them out for the full 360 degrees, you can see how the three ratios change as the vector sweeps through the four quadrants.
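For reference (our own evaluation, using the difference formulas with 45 and 30 degrees):

$$\sin 15^\circ = \frac{\sqrt{6} - \sqrt{2}}{4} \approx 0.2588, \qquad \cos 15^\circ = \frac{\sqrt{6} + \sqrt{2}}{4} \approx 0.9659, \qquad \tan 15^\circ = 2 - \sqrt{3} \approx 0.268$$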
Both the sine and cosine "wave" up and down between +1 and -1. Notice that the "waves" are displaced by 90 degrees, one from the other. This fact becomes important later.
The tangent starts out like the sine curve, but quickly it sweeps up to reach infinity at 90 degrees. Going "offscale" in the positive direction, it "comes on" from the negative direction on the other side of 90 degrees. Going through the 180-degree point, the tangent curve duplicates what it does going through 0 or 360 (whichever you view it as). At 270 degrees, it repeats what it did at 90 degrees.
Pythagoras in trigonometry
A formula can often be simplified, as was found by deriving the tangent formulas from the sine and cosine formulas, and changing it from terms using one ratio to terms using another ratio. In doing this, the Pythagorean theorem, expressed in trigonometry ratios, is very handy.
Assume that a right triangle has a hypotenuse of 1 unit long. Then one of the other sides will have a length of sin A and the other of cos A. From that, the Pythagorean theorem shows that: cos² A + sin² A = 1. This statement is always true, for any value of A. A little thing here about the way it's written: cos² A means (cos A)². If you wrote it cos A², the equation would mean something else. A is a number in some angular notation that represents an angle. A² would be the same number squared. Its value would depend on the angular notation used, so it's not a good term to use. What is meant is the angle's sine or cosine squared, not the angle itself.
The Pythagoras formula can be transposed. For instance, two other forms are:
cos² A = 1 - sin² A, and sin² A = 1 - cos² A.
The sum formulas, along with the Pythagorean theorem, are used for angles that are 2, 3, or a greater exact multiple of any original angle. Here, give formulas for 2A and 3A. The same method is pursued further in Parts 3 and 4 of this book.
The sum formula works whether both angles are the same or different: sin(A + B) or sin(A + A). However, sin(A + A) is really sin 2A. So, sin 2A is sin A cos A + cos A sin A. They are both the same product, in opposite order, so this statement can be simplified to sin 2A = 2 sin A cos A.
Similarly, cos 2A = cos A cos A - sin A sin A, which also can be written: cos 2A = cos² A - sin² A. Using the Pythagorean theorem, change that to: cos 2A = 2cos² A - 1. Finally, tan 2A = 2 tan A/[1 - tan² A].
Now, the triple angle (3A) is used just to show how further multiples are obtained. Basically, it's as simple as writing 3A = 2A + A and reapplying the sum formulas. But then, to get the resulting formula in workable form, you need to substitute for the 2A part to get everything into terms of ratios for the simple angle A.
Work your way through the three derivations shown here. You can see that it will get more complicated for 4A and more (in Parts 3 and 4 of this book).
MULTIPLE ANGLES Derived from Sum Formulas
MULTIPLE ANGLES Ratios for 3A
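The boxed derivations themselves are not reproduced in this copy; a sketch of the standard results they lead to (writing 3A = 2A + A and substituting the 2A formulas) is:

$$\sin 3A = \sin 2A \cos A + \cos 2A \sin A = 3\sin A - 4\sin^{3} A$$
$$\cos 3A = \cos 2A \cos A - \sin 2A \sin A = 4\cos^{3} A - 3\cos A$$
$$\tan 3A = \frac{3\tan A - \tan^{3} A}{1 - 3\tan^{2} A}$$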
Properties of the isosceles triangle
You have already seen that a right triangle is a useful building block for other shapes. An isosceles triangle has slightly different uses. But the fact on which these uses are based is that an isosceles triangle has two equal sides and two equal angles opposite those two sides. A perpendicular from the third angle (not one of the equal ones) to the third side (not one of the equal ones) bisects that third side. That is, it divides it into two equal parts, making the whole triangle into mirror-image right triangles.
With isosceles triangles, any triangle, except a right triangle, can be divided into three adjoining isosceles triangles, by dividing each side into two equal parts and erecting perpendiculars from the points of bisection. Where any two of these bisecting perpendiculars meet, if lines are drawn to the corners of the original triangle, the three lines must be equal, because two of them form the sides of an isosceles triangle. So, the perpendicular from the third side of the original triangle must also meet in the same point.
This statement is true, as we show here, whether the original triangle is acute or obtuse. The difference with an obtuse-angled triangle is that the meeting point is outside the original triangle, instead of inside.
What does a right triangle do? Perpendiculars from the mid-point of the hypotenuse to the other two sides will bisect those two sides - you get two out of three! The meeting point happens to sit on the hypotenuse.
Angles in a circle
A basic property of a circle is that its center is at an equal distance from every point on its circumference. This equal distance is the radius of the circle.
If you draw any triangle inside a circle, the perpendiculars from the mid points of its side will meet at the circle's center and radii from the corners of the triangle will divide it into three isosceles triangles.
Now, if you name the equal pairs of angles in each isosceles triangle, A, A, B, B, C, C, you find that the original triangle has one angle A + B, one angle B + C, and one angle A + C. The three angles total 2A + 2B + 2C. This, you know, adds up to 180 degrees.
In any isosceles triangle, the angle at the apex is 180 degrees minus twice the base angle. Because of the fact deduced in the previous paragraph, 180 - 2A must be the same as 2B + 2C, for example.
Consider the angles that are opposite from the part of the circle against which the top left side of the triangle sits. The angle at the center is 2B + 2C, as just deduced. The angle at the circumference is B + C. You will find that, for any segment of a circle, the angle at the center is always twice the angle at the circumference.
The proof above leads to an interesting fact about angles in circles. Instead of identifying the angles with a side of a triangle, use an arc (portion of the circumference) of the circle. The important thing is the angle that corresponds to the arc at the center. A part of the circumference of a circle that is identified by the angle at the center is an arc of the circle; the straight line joining the ends of the arc is called a chord of the circle.
The angle at the center is twice the angle at the circumference
Any angle drawn touching the circumference, using this chord as termination for the lines bounding the angle, must be just half the angle at the center. Thus, all the angles in a circle, based on the same chord, must be equal. Suppose that the chord has an angle of 120 degrees. The angles at the circumference will all be exactly 60 degrees.
A special case is the semicircle (an exact half circle). The angle at the center is a straight line (180 degrees). Every angle at the circumference of a semicircle is exactly 90 degrees (a right angle). Any triangle in a semicircle is a right triangle.
Above we have often used angles that add up to either a right angle (90 degrees) or to two right angles (180 degrees). When two angles add up to 180 degrees (two right angles), they are called supplementary. When two angles add up to 90 degrees (one right angle), they are called complementary.
Questions and problems
1. The sine of angle A is 0.8 and the sine of angle B is 0.6. From the various relationships obtained so far, find the following: tan A, tan B, sin(A + B), cos(A + B), sin(A - B), cos(A - B), tan(A + B), and tan(A - B), without using tables or a calculator's trig buttons.
2. At the equator, Earth has a radius of 4000 miles. Angles around the equator are measured in meridians of longitude, with a north-to-south line through Greenwich, England as the zero reference. Two places are used to observe the moon: one is Mt. Kenya, on the equator at 37.5° east of Greenwich; the other is Sumatra, on the equator, at 100.5° east. How far apart are these two places, measured by an imaginary straight line through the Earth?
3. If sights were made horizontally from the observation points in question 2 (due east from the first, due west from the second), at what angle would the lines of sight cross?
4. At a certain time, exactly synchronized at both places, a satellite is observed. In Kenya, the elevation of a line of sight, centered on the satellite, is 58 degrees above horizontal, eastward. In Sumatra, the elevation is 58 degrees above horizontal, westward. How far away is the satellite? Use the distance between the points calculated in question 2.
5. The cosine of a certain angle is exactly twice the sine of the same angle. What is the tangent of this angle? You don't need either tables or calculator for this question.
6. The sine of a certain angle is exactly 0.28. Find the cosine and tangent without tables or the trig functions on your calculator.
7. The sine of a certain angle is 0.6. Find the sine of twice this angle and three times this angle.
8. Find the sine and cosine of an angle exactly twice that of question 7.
9. Using 15 degrees as a unit angle, and the formulas for ratios of 2A and 3A, find the values of the sines of 30 and 45 degrees.
10. Using 30 degrees as a unit angle, find the values for the sines of 60 and 90 degrees.
11. Using 45 degrees as a unit angle, find values for the tangents of 90 and 135 degrees.
12. Using 60 degrees as a unit angle, find values for the cosines of 120 and 180 degrees.
13. Using 90 degrees as a unit angle, find values for the cosines of 180 and 270 degrees.
14. Using the tangent formulas for multiple angles and the tables, find the tangents for three times 29, 31, 59, and 61 degrees. Account for the changes in sign between three times 29 and 31 degrees and between 59 and 61 degrees.
15. The sine of an angle is 0.96. Find the sine and cosine for twice the angle.
16. A problem leads to an algebraic expression of the form 8cos² A + cos A = 3. Solve for cos A, and state in which quadrant the angle representing each solution will come. Give approximate values from tables or your calculator. | http://www.math10.com/en/geometry/trigonometry-and-geometry-conversions/trigonometry.html | 13
59 | Illinois Learning Standards
Stage F - Math
Students who meet the standard can demonstrate knowledge and use of numbers and their many representations in a broad range of theoretical and practical settings. (Representations)
- Represent place values from units through billions using powers of ten.
- Represent, order, compare, and graph integers.
- Identify fractional pieces that have the same value but different shapes.
- Compare and order fractions and decimals efficiently and find their approximate position on a number line. **
- Represent repeated factors using exponents.
Students who meet the standard can investigate, represent and solve problems using number facts, operations, and their properties, algorithms, and relationships. (Operations and properties)
- Write prime factorizations of numbers.
- Determine the least common multiple and the greatest common factor of a set of numbers.
- Demonstrate the meaning of multiplication of fractions (e.g., 1/2 × 3 is 1/2 of a group of three objects).
- Simplify simple arithmetic expressions with rational numbers using the field properties and the order of operations.
- Recognize and use the inverse relationships of addition and subtraction, multiplication and division to simplify computations and solve problems. **
- Solve multiplication number sentences and word problems with whole numbers and familiar fractions.
Students who meet the standard can compute and estimate using mental mathematics, paper-and-pencil methods, calculators, and computers. (Choice of method)
- Select and use appropriate operations, methods, and tools to compute or estimate using whole numbers with natural number exponents. **
- Analyze algorithms for computing with whole numbers, familiar fractions, and decimals and develop fluency in their use. **
Students who meet the standard can solve problems using comparison of quantities, ratios, proportions, and percents.
- Solve number sentences and word problems using percents.
- Demonstrate and explain the meaning of percents, including greater than 100 and less than 1. **
- Create and explain a pattern that shows a constant ratio.
- Analyze situations to determine whether ratios are appropriate to solve problems.
- Determine equivalent ratios.
Students who meet the standard can measure and compare quantities using appropriate units, instruments, and methods. (Performance and conversion of measurements)
- Investigate the history of the U.S. customary and metric systems of measurement.
- Measure, with a greater degree of accuracy, any angle using a protractor or angle ruler.
Students who meet the standard can estimate measurements and determine acceptable levels of accuracy. (Estimation)
- Estimate distance, weight, temperature, and elapsed time using reasonable units and with acceptable levels of accuracy.
Students who meet the standard can select and use appropriate technology, instruments, and formulas to solve problems, interpret results, and communicate findings. (Progression from selection of appropriate tools and methods to application of measurements to solve problems)
- Select and justify an appropriate formula to find the area of triangles, parallelograms, and trapezoids. **
- Select an appropriate formula or strategy to find the surface area and volume of rectangular and triangular prisms. **
- Develop and use formulas for determining the area of triangles, parallelograms, and trapezoids.
- Develop and use the formula for determining the volume of a rectangular and triangular prism.
- Calculate the surface area of a cube, rectangular prism, and triangular prism.
- Develop and use formulas for determining the circumference and arc of circles.
Students who meet the standard can describe numerical relationships using variables and patterns. (Representations and algebraic manipulations)
- Investigate, extend, and describe arithmetic and geometric sequences of numbers whether presented in numeric or pictorial form. **
- Evaluate algebraic expressions for given values.
- Express properties of numbers and operations using variables (e.g., the commutative property is m + n = n + m).
- Simplify algebraic expressions involving like terms.
Students who meet the standard can interpret and describe numerical relationships using tables, graphs, and symbols. (Connections of representations including the rate of change)
- Graph simple inequalities on a number line.
- Create a table of values that satisfy a simple linear equation and plot the points on the Cartesian plane.
- Describe, verbally, symbolically, and graphically, a simple relationship presented by a set of ordered pairs of numbers.
Students who meet the standard can solve problems using systems of numbers and their properties. (Problem solving; number systems, systems of equations, inequalities, algebraic functions)
- Identify and explain incorrect uses of the commutative, associative, and distributive properties.
- Identify and provide examples of the identity property of addition and multiplication.
- Identify and provide examples of inverse operations.
- Explain why division by zero is undefined.
Students who meet the standard can use algebraic concepts and procedures to represent and solve problems. (Connection of 8A, 8B, and 8C to solve problems)
- Create, model, and solve algebraic equations using concrete materials.
- Solve linear equations, including direct variation, with whole number coefficients and solutions using algebraic or graphical representations.
Students who meet the standard can demonstrate and apply geometric concepts involving points, lines, planes, and space. (Properties of single figures, coordinate geometry and constructions)
- Plot and read ordered pairs of numbers in all four quadrants.
- Describe sizes, positions, and orientations of shapes under transformations, including dilations.
- Perform simple constructions (e.g., equal segments, angle and segment bisectors, or perpendicular lines, inscribing a hexagon in a circle) with a compass and straightedge or a mira.
- Determine and describe the relationship between pi, the diameter, the radius, and the circumference of a circle.
- Determine unknown angle measures using angle relationships and properties of a triangle or a quadrilateral.
Students who meet the standard can identify, describe, classify and compare relationships using points, lines, planes, and solids. (Connections between and among multiple geometric figures)
- Determine the relationships between the number of vertices or sides in a polygon, the number of diagonals, and the sum of its angles.
- Solve problems that involve vertical, complementary, and supplementary angles.
- Analyze quadrilaterals for defining characteristics.
- Create a three-dimensional object from any two-dimensional representation of the object, including multiple views, nets, or technological representations.
Students who meet the standard can construct convincing arguments and proofs to solve problems. (Justifications of conjectures and conclusions)
- Make, test, and justify conjectures about various quadrilateral and triangle relationships, including the triangle inequality.
- Justify the relationship between vertical angles.
- Justify that the sum of the angles of a triangle is 180 degrees.
9D is Not Applicable for Stages A - F.
Students who meet the standard can organize, describe and make predictions from existing data. (Data analysis)
- Construct, read, interpret, infer, predict, draw conclusions, and evaluate data from various displays, including circle graphs. **
- Recognize and explain misleading displays of data due to inappropriate intervals on a scale.
Students who meet the standard can formulate questions, design data collection methods, gather and analyze data and communicate findings. (Data Collection)
- Gather data by conducting simple simulations.
- Collect data over time with or without technology.
Students who meet the standard can determine, describe and apply the probabilities of events. (Probability including counting techniques)
- Record probabilities as fractions, decimals, or percents.
- Demonstrate that the sum of all probabilities equals one.
- Determine empirical probabilities from a set of data provided.
- Set up a simulation to model the probability of a single event.
- Discuss the effect of sample size on the empirical probability compared to the theoretical probability.
- List outcomes by a variety of methods (e.g., tree diagram).
- Determine theoretical probabilities of simple events.
* National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000.
** Adapted from: National Council of Teachers of Mathematics. Principles and Standards for School Mathematics. Reston, Va: National Council of Teachers of Mathematics, 2000. | http://isbe.net/ils/math/stage_F/descriptor.htm | 13 |
87 | 7.1. How to Solve Linear Equations
• Essentially Linear Equations
7.2. Solving Second Degree Equations
• Factoring Methods • Completing the Square • The Quadratic Formula
7.3. Solving Inequalities
• Tools for Solving Inequalities • Simple Inequalities • Double Inequalities • The Method of Sign Charts
7.4. Solving Absolute Inequalities
• Solving the Inequality |a| < b • Solving the Inequality |a| > b
7. Solving Equations & Inequalities
There are two basic tools for solving equations: (1) adding the same expression to both sides of an equation and (2) multiplying both sides of the equation by the same expression. In symbols,
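The displayed equations are missing from this copy; presumably they are the standard equivalences, along these lines:

$$a = b \iff a + c = b + c \qquad (1)$$
$$a = b \iff ca = cb, \quad c \neq 0 \qquad (2)$$
$$a = b \iff \frac{a}{c} = \frac{b}{c}, \quad c \neq 0 \qquad (3)$$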
Here the symbol <=> means "if and only if," which is a fancy way of saying "is equivalent to."
Some care needs to be taken when applying (2) and (3) in the case where c is an algebraic expression containing unknowns. Usually, students have no problems when c is a numerical value.
7.1. How to Solve Linear Equations
A linear equation is an equation of the form
ax + b = 0.
The solution set would represent the zeros or roots of the linear polynomial ax + b.
Even though the equation is very simple, students still have trouble solving it. Here are some representative examples. All linear equations are solved in the same way.
Example 7.1. Solve each linear equation.
(a) 4x+5 = 0
(b) (1/2)x − 4 = 6
(c) 7 − 3x = 2
Strategy for Solving. Isolate the unknown, x say, on one side of the equation with the other terms of the equation on the other side. Divide through both sides by the coefficient of the unknown to obtain the solution.
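For instance, applying this strategy to Example 7.1(a): 4x + 5 = 0 <=> 4x = −5 <=> x = −5/4. Parts (b) and (c) are handled the same way; they give x = 20 and x = 5/3, respectively.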
Exercise 7.1. Solve for x in each of the following using good techniques (as exhibited in Example 7.1). Passing is 100% correct.
Sometimes we have equations that have several symbolic quantities. Examine each of the following problems.
Exercise 7.3. In each of the equations listed below, solve for the indicated variable. Solve for . . .
(a) x in 5x − 3y = 4 (b) y in 5x − 3y = 4
(c) z in x²z − 12x + y = 1
It is apparent from the above examples and exercises that the operations of adding (1), multiplying (2), and dividing (3) both sides of an equation by the same expression are standard and useful tools in your toolbox of techniques for solving equations. Do not make up your own methods; use the standard ones and . . . these are they!
When working out mathematical problems it is important to organize your thoughts on paper properly and clearly. After solving a problem,
recopy it neatly. Copy the style of this tutorial or some other textbook.
Try to improve your handwriting. Use proper notation.
• Essentially Linear Equations
Some equations are ‘disguised’ linear equations. They do not require
any special techniques other than what is needed to solve linear equations.
Example 7.2. Solve for x in each of the following.
Now, consider the following set of exercises.
Exercise 7.4. Solve for x in each of the following.
7.2. Solving Second Degree Equations
We now turn to the problem of solving equations of the form:
ax² + bx + c = 0. (4)
There are two standard methods of solving this kind of equation: (1) by factoring the left-hand side and (2) by applying the Quadratic Formula.
• Factoring Methods
We have already studied techniques of factoring polynomials of degree
two; therefore, it is not necessary to look at a large number of
examples. If necessary, review factoring.
The method of factoring can certainly be applied to any polynomial equation and is not restricted to quadratic equations. In addition to factoring, a major tool used in solving equations is the Zero-Product Principle:
ab = 0 => a = 0 or b = 0 (5)
This principle states an obvious property of the real number system: the only way the product of two numbers can be zero is if one of
them is zero.
The following example illustrates standard reasoning and
methods. Read carefully.
Example 7.3. Solve each of the following.
(a) x² − 5x + 6 = 0 (b) x² + 4x + 4 = 0 (c) 6x² − x − 2 = 0
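For instance, in part (a) the left-hand side factors as x² − 5x + 6 = (x − 2)(x − 3), so by the Zero-Product Principle (5) the solutions are x = 2 and x = 3.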
Now, using the same methods as exhibited in the previous example, solve each of the following. Passing is 100%.
Exercise 7.5. (Skill Level 1) Solve for x in each
of the following.
(a) x² − 7x + 12 = 0 (b) x² + 3x = 10
Here are a few more quadratic equations.
Exercise 7.6. Solve for x in each of the following.
(a) 12x² − 17x + 6 = 0 (b) 20x² + 3x = 2
Factorization techniques are not limited to second degree equations.
Here are a few higher degree equations that can be factored fairly
easily. Some of the fourth degree equations below can be solved using
the factorization techniques for quadratic polynomials.
Exercise 7.7. Solve for x in each of the following.
(a) x³ − 2x² − 3x = 0 (b) x⁴ − 16 = 0
(c) x⁴ − 2x² − 3 = 0 (d) x⁴ − 5x² + 6 = 0
• Completing the Square
In addition to factorization methods, the technique of completing the square can also be used to solve a quadratic equation. Even though this technique will be used here to solve equations, completion of the square has certain uses in other kinds of mathematical problems. If you go on to Calculus, for example, you will see it within the context of integration problems.
Completion of the Square Algorithm. Below are the steps for completing the square, with an abstract and a particular equation that illustrate the steps.
1. Problem: Solve the quadratic equation for x:
ax² + bx + c = 0
2x² + 12x − 3 = 0.
2. Associate the x² term and the x term:
(ax² + bx) + c = 0
(2x² + 12x) − 3 = 0
3. Factor out the coefficient of the x² term:
a(x² + (b/a)x) + c = 0
2(x² + 6x) − 3 = 0
4. Take one-half of the coefficient of x and square it:
take the coefficient of x: b/a (resp. 6) . . .
and compute one-half of this: b/(2a) (resp. 3) . . .
and square it: b²/(4a²) (resp. 9).
5. Take this number and add it inside the parentheses; this addition must be compensated for by subtracting an equal amount outside the parentheses:
a(x² + (b/a)x + b²/(4a²)) + c − b²/(4a) = 0
2(x² + 6x + 9) − 3 − 18 = 0
Care must be taken here because we are adding the term inside the parentheses and subtracting an equal quantity outside the parentheses. Study the abstract version and the particular example closely to understand what is meant.
6. The trinomial inside the parentheses is a perfect square:
a(x + b/(2a))² + c − b²/(4a) = 0 (6)
2(x + 3)² − 21 = 0
End Complete Square
What does this accomplish? Observe that the equations in (6) can be put in the form
AX² = C (7)
where X is a linear polynomial. This kind of equation can be solved as follows:
AX² = C <=> X² = C/A <=> X = ±√(C/A),
provided C/A ≥ 0.
This sequence of steps can be carried out in every case. Let's illustrate by continuing to solve the equation carried along in the completion of the square algorithm.
Example 7.4. Solve the equation 2x² + 12x − 3 = 0.
Example 7.5. Solve by completing the square: 3x² + 2x − 5 = 0.
Exercise 7.8. Solve each of the following by completing the square.
(a) 8x² − 2x − 1 = 0 (b) 3x² + 5x + 2 = 0 (c) x² + x − 1 = 0
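Before trying these, it may help to see Example 7.4 carried to completion: from step 6 above, 2(x + 3)² − 21 = 0, so (x + 3)² = 21/2, x + 3 = ±√(21/2), and x = −3 ± √42/2.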
All the above examples and exercises were done exactly the same way.
The method of completion of the square is a useful tool when solving
quadratic equations, but it is not the most efficient method. The next
section on the Quadratic Formula is a more standard tool than is
completing the square.
Despite its inefficiencies, completion of squares is still a useful technique to know, as it is used elsewhere in mathematics.
• The Quadratic Formula
The solutions of equation (4) can be found in a more direct way than
the method of factorization or completing the square by using the
so-called Quadratic Formula. Let’s state/prove this formula.
Theorem. Consider the quadratic equation
ax² + bx + c = 0, a ≠ 0 (8)
(1) If b² − 4ac < 0, (8) has no solutions;
(2) if b² − 4ac = 0, (8) has only one solution;
(3) if b² − 4ac > 0, (8) has two distinct solutions.
In the latter two cases, the solutions are given by the Quadratic Formula:
x = (−b ± √(b² − 4ac)) / (2a)
Proof.
Theorem Notes: The expression b² − 4ac is called the discriminant of the quadratic equation. It can be used, at a casual glance, to determine whether a given equation has one, two, or no solutions.
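• The formula itself can be obtained by completing the square on (8): ax² + bx + c = 0 <=> x² + (b/a)x = −c/a <=> (x + b/(2a))² = (b² − 4ac)/(4a²); taking square roots (possible over the real numbers only when b² − 4ac ≥ 0) gives x = (−b ± √(b² − 4ac))/(2a).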
• The discriminant is a handy way of classifying a polynomial P(x) = ax² + bx + c as irreducible or not. The polynomial P(x) is irreducible if and only if its discriminant is negative: b² − 4ac < 0.
Quiz. Using the discriminant, b² − 4ac, respond to each of the following questions.
1. Is the quadratic polynomial x² − 4x + 3 irreducible?
(a) Yes (b) No
2. Is the quadratic polynomial 2x² − 4x + 3 irreducible?
(a) Yes (b) No
3. How many solutions does the equation 2x² − 3x − 2 = 0 have?
(a) none (b) one (c) two
End Quiz. Let's go to the examples.
Example 7.6. Solve each of the following using the Quadratic Formula.
(a) x² − 5x + 6 = 0 (b) x² + 4x + 4 = 0
(c) 6x² − x − 2 = 0 (d) 3x² − 3x + 1 = 0
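In part (a), for example, a = 1, b = −5, c = 6, so b² − 4ac = 25 − 24 = 1 > 0 and x = (5 ± 1)/2; that is, x = 3 or x = 2, in agreement with the factoring method used earlier.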
Exercise 7.9. Using the quadratic formula, solve each of the following.
(a) 2x² + 5x − 12 = 0 (b) 3x² − 7x + 1 = 0
(c) x² + 1 = 0 (d) x² + x = 3
Recognition. One problem students have is recognizing a quadratic equation. This is especially true when the equation has several symbolic expressions in it. Basically, a quadratic equation in x is an equation in which x² appears as a term and x appears as a term. The coefficient of the x² term is the value of a; the coefficient of the x term is the value of b; and all other terms comprise the value of c. The symbol x may be some other letter like y or z, but it can also be a compound symbol like x², y³, or even something like . . .
Classes and objects
12.1 User-defined compound types
Having used some of Python's built-in types, we are ready to create a user-defined type: the Point.
Consider the concept of a mathematical point. In two dimensions, a point is two numbers (coordinates) that are treated collectively as a single object. In mathematical notation, points are often written in parentheses with a comma separating the coordinates. For example, (0, 0) represents the origin, and (x, y) represents the point x units to the right and y units up from the origin.
A natural way to represent a point in Python is with two floating-point values. The question, then, is how to group these two values into a compound object. The quick and dirty solution is to use a list or tuple, and for some applications that might be the best choice.
An alternative is to define a new user-defined compound type, also called a class. This approach involves a bit more effort, but it has advantages that will be apparent soon.
A class definition looks like this:
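class Point:
  pass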
Class definitions can appear anywhere in a program, but they are usually near the beginning (after the import statements). The syntax rules for a class definition are the same as for other compound statements (see Section 4.4).
This definition creates a new class called Point. The pass statement has no effect; it is only necessary because a compound statement must have something in its body.
By creating the Point class, we created a new type, also called Point. The members of this type are called instances of the type or objects. Creating a new instance is called instantiation. To instantiate a Point object, we call a function named (you guessed it) Point:
blank = Point()
We can add new data to an instance using dot notation:
>>> blank.x = 3.0
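>>> blank.y = 4.0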
This syntax is similar to the syntax for selecting a variable from a module, such as math.pi or string.uppercase. In this case, though, we are selecting a data item from an instance. These named items are called attributes.
The following state diagram shows the result of these assignments:
The variable blank refers to a Point object, which contains two attributes. Each attribute refers to a floating-point number.
We can read the value of an attribute using the same syntax:
>>> print blank.y
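4.0
>>> x = blank.x
>>> print x
3.0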
The expression blank.x means, "Go to the object blank refers to and get the value of x." In this case, we assign that value to a variable named x. There is no conflict between the variable x and the attribute x. The purpose of dot notation is to identify which variable you are referring to unambiguously.
You can use dot notation as part of any expression, so the following statements are legal:
print '(' + str(blank.x) + ', ' + str(blank.y) + ')'
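The second statement, written here with an illustrative variable name, computes the squared distance of the point from the origin:
distanceSquared = blank.x**2 + blank.y**2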
The first line outputs (3.0, 4.0); the second line calculates the value 25.0.
You might be tempted to print the value of blank itself:
>>> print blank
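<__main__.Point instance at 80f8e70>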
The result indicates that blank is an instance of the Point class and it was defined in __main__. 80f8e70 is the unique identifier for this object, written in hexadecimal (base 16). This is probably not the most informative way to display a Point object. You will see how to change it shortly.
As an exercise, create and print a Point object, and then use id to print the object's unique identifier. Translate the hexadecimal form into decimal and confirm that they match.
12.3 Instances as arguments
You can pass an instance as an argument in the usual way. For example:
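def printPoint(p):
  # display a Point in the standard format; the parameter name p is arbitrary
  print '(' + str(p.x) + ', ' + str(p.y) + ')'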
printPoint takes a point as an argument and displays it in the standard format. If you call printPoint(blank), the output is (3.0, 4.0).
As an exercise, rewrite the distance function from Section 5.2 so that it takes two Points as arguments instead of four numbers.
The meaning of the word "same" seems perfectly clear until you give it some thought, and then you realize there is more to it than you expected.
For example, if you say, "Chris and I have the same car," you mean that his car and yours are the same make and model, but that they are two different cars. If you say, "Chris and I have the same mother," you mean that his mother and yours are the same person. So the idea of "sameness" is different depending on the context.
When you talk about objects, there is a similar ambiguity. For example, if two Points are the same, does that mean they contain the same data (coordinates) or that they are actually the same object?
To find out if two references refer to the same object, use the is operator. For example:
>>> p1 = Point()
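>>> p1.x = 3          # illustrative coordinates
>>> p1.y = 4
>>> p2 = Point()
>>> p2.x = 3
>>> p2.y = 4
>>> p1 is p2
False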
Even though p1 and p2 contain the same coordinates, they are not the same object. If we assign p1 to p2, then the two variables are aliases of the same object:
>>> p2 = p1
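>>> p1 is p2
True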
This type of equality is called shallow equality because it compares only the references, not the contents of the objects.
To compare the contents of the objects, we can write a function:
def samePoint(p1, p2) :
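  # true if the two Points contain the same coordinates
  return (p1.x == p2.x) and (p1.y == p2.y)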
Now if we create two different objects that contain the same data, we can use samePoint to find out if they represent the same point.
>>> p1 = Point()
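>>> p1.x = 3          # again, illustrative values
>>> p1.y = 4
>>> p2 = Point()
>>> p2.x = 3
>>> p2.y = 4
>>> samePoint(p1, p2)
True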
Let's say that we want a class to represent a rectangle. The question is, what information do we have to provide in order to specify a rectangle? To keep things simple, assume that the rectangle is oriented either vertically or horizontally, never at an angle.
There are a few possibilities: we could specify the center of the rectangle (two coordinates) and its size (width and height); or we could specify one of the corners and the size; or we could specify two opposing corners. A conventional choice is to specify the upper-left corner of the rectangle and the size.
Again, we'll define a new class:
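class Rectangle:
  pass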
And instantiate it:
box = Rectangle()
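box.width = 100.0     # the particular values are only illustrative
box.height = 200.0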
This code creates a new Rectangle object with two floating-point attributes. To specify the upper-left corner, we can embed an object within an object!
box.corner = Point()
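box.corner.x = 0.0    # illustrative corner coordinates
box.corner.y = 0.0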
The dot operator composes. The expression box.corner.x means, "Go to the object box refers to and select the attribute named corner; then go to that object and select the attribute named x."
The figure shows the state of this object:
12.6 Instances as return values
Functions can return instances. For example, findCenter takes a Rectangle as an argument and returns a Point that contains the coordinates of the center of the Rectangle:
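def findCenter(box):
  p = Point()
  p.x = box.corner.x + box.width/2.0
  # the sign here assumes corner is the upper-left corner and y increases upward
  p.y = box.corner.y - box.height/2.0
  return p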
To call this function, pass box as an argument and assign the result to a variable:
>>> center = findCenter(box)
12.7 Objects are mutable
We can change the state of an object by making an assignment to one of its attributes. For example, to change the size of a rectangle without changing its position, we could modify the values of width and height:
box.width = box.width + 50
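box.height = box.height + 100   # the increments shown are arbitrary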
We could encapsulate this code in a method and generalize it to grow the rectangle by any amount:
def growRect(box, dwidth, dheight) :
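  box.width = box.width + dwidth
  box.height = box.height + dheight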
The variables dwidth and dheight indicate how much the rectangle should grow in each direction. Invoking this method has the effect of modifying the Rectangle that is passed as an argument.
For example, we could create a new Rectangle named bob and pass it to growRect:
>>> bob = Rectangle()
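>>> bob.width = 100.0           # illustrative setup
>>> bob.height = 200.0
>>> bob.corner = Point()
>>> bob.corner.x = 0.0
>>> bob.corner.y = 0.0
>>> growRect(bob, 50, 100)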
While growRect is running, the parameter box is an alias for bob. Any changes made to box also affect bob.
As an exercise, write a function named moveRect that takes a Rectangle and two parameters named dx and dy. It should change the location of the rectangle by adding dx to the x coordinate of corner and adding dy to the y coordinate of corner.
Aliasing can make a program difficult to read because changes made in one place might have unexpected effects in another place. It is hard to keep track of all the variables that might refer to a given object.
Copying an object is often an alternative to aliasing. The copy module contains a function called copy that can duplicate any object:
>>> import copy
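>>> p2 = copy.copy(p1)
>>> p1 is p2
False
>>> samePoint(p1, p2)
True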
Once we import the copy module, we can use the copy method to make a new Point. p1 and p2 are not the same point, but they contain the same data.
To copy a simple object like a Point, which doesn't contain any embedded objects, copy is sufficient. This is called shallow copying.
For something like a Rectangle, which contains a reference to a Point, copy doesn't do quite the right thing. It copies the reference to the Point object, so both the old Rectangle and the new one refer to a single Point.
If we create a box, b1, in the usual way and then make a copy, b2, using copy, the resulting state diagram looks like this:
This is almost certainly not what we want. In this case, invoking growRect on one of the Rectangles would not affect the other, but invoking moveRect on either would affect both! This behavior is confusing and error-prone.
Fortunately, the copy module contains a method named deepcopy that copies not only the object but also any embedded objects. You will not be surprised to learn that this operation is called a deep copy.
>>> b2 = copy.deepcopy(b1)
Now b1 and b2 are completely separate objects.
We can use deepcopy to rewrite growRect so that instead of modifying an existing Rectangle, it creates a new Rectangle that has the same location as the old one but new dimensions:
def growRect(box, dwidth, dheight) :
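  newBox = copy.deepcopy(box)   # the name newBox is arbitrary
  newBox.width = newBox.width + dwidth
  newBox.height = newBox.height + dheight
  return newBox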
As an exercise, rewrite moveRect so that it creates and returns a new Rectangle instead of modifying the old one.
The largest astronomical telescope designed to operate beyond the interfering effects of the earth's atmosphere is scheduled to be transported into orbit by the U.S. space shuttle in 1985
The earth's atmosphere is an imperfect window on the universe. Electromagnetic waves in the optical part of the spectrum (that is, waves longer than X rays and shorter than radio waves) penetrate to the surface of the earth only in a few narrow spectral bands. The widest of the transmitted bands corresponds roughly to the colors of visible light; waves in the flanking ultraviolet and infrared regions of the optical spectrum are almost totally absorbed by the atmosphere. In addition, atmospheric turbulence blurs the images of celestial objects, even when they are viewed through the most powerful ground-based telescopes.
Accordingly the advantages of making astronomical observations from outside the atmosphere have long been recognized. In the past few decades considerable experience has been gained in the remote operation of telescopes that have been carried above most or all of the atmosphere by suborbital rockets, high-altitude balloons and artificial earth satellites. Significant findings have come from these efforts, altering theories of the structure and evolution of the universe.
The next stage in this program of exploration is the Space Telescope, which is scheduled to be put into orbit around the earth by the U.S. space shuttle in 1985. The Space Telescope will be a conventional reflecting telescope with unconventional capabilities. It will be the largest astronomical telescope ever orbited. It will also be the first long-term international scientific facility in space.
The Space Telescope, which is now under construction, is designed as a multi-purpose astronomical observatory. It will have a 2.4-meter (94-inch) primary mirror capable of concentrating electromagnetic radiation in the entire optical part of the spectrum. It will be equipped initially with an assortment of scientific instruments for recording extraordinarily high-resolution astronomical images, for detecting extremely faint objects, for collecting various kinds of spectrographic data and for making very precise measurements of the position of radiant sources in the sky. The observations will be made from an altitude of some 500 kilometers (300 miles), well above the obscuring layers of the atmosphere.
The plans for the Space Telescope have been developed by a large number of scientists and engineers, working for almost a decade under the supervision of the National Aeronautics and Space Administration (NASA). The prime contractors charged with the actual construction are the Perkin-Elmer Corporation (responsible for the telescope itself) and the Lockheed Missiles and Space Company, Inc. (responsible for the supporting spacecraft system and for the integration of the components into a working whole). The total cost of the project is currently estimated at $750 million.
The projected lifetime of the Space Telescope is 15 years, although in principle there is no reason it could not be operated for many decades. An essential element in ensuring such a long lifetime (and in keeping costs within reasonable limits) is the availability of the space shuttle, which not only will deploy the telescope but also will service it on a regular basis. Astronauts from the shuttle will visit the Space Telescope whenever the instruments on board the observatory need maintenance, repair or replacement. At longer intervals (perhaps every five years) the entire Space Telescope will be returned to the earth by the shuttle for refurbishment of the mirror and other major components. The telescope will then be returned to orbit.
With suitable instrumentation the Space Telescope should be able to respond to electromagnetic waves ranging in length from about 115 nanometers (billionths of a meter) in the far-ultraviolet region of the spectrum to about a million nanometers (or one millimeter) in the far-infrared. Thus the spectral band accessible to the telescope could extend over a range of wavelengths that differ by a factor of 10,000. In contrast, ground-based telescopes have a clear view of colors that range in wavelength from about 300 to 1,000 nanometers, a span of less than a factor of 10.
Because the Space Telescope will be immune to the blurring effects of atmospheric turbulence it will be able to obtain much sharper images of celestial objects than ground-based telescopes can, even at the same wavelengths that are observable from the ground. The maximum spatial resolution attainable with the Space Telescope will be on the order of a tenth of an arc-second, most astronomical images made with ground-based instruments have a resolution not much better than an arc-second. The tenfold improvement in resolution will make possible more detailed observations of extended objects. It is also expected to enable astronomers to see stars some seven times farther from the solar system than is now possible.
The observing program for the Space Telescope will be administered for NASA by the Association of Universities for Research in Astronomy (AURA), a consortium of 17 universities organized originally to operate several facilities for the National Science Foundation, including the national astronomical observatories at Kitt Peak in Arizona and at Cerro Tololo in Chile. The center for the initial processing and analysis of data from the telescope will be the Space Telescope Science Institute, a new facility that is being established by AURA on the campus of Johns Hopkins University. The first director of the institute is Riccardo Giacconi, who led the scientific teams for the highly successful Uhuru and Einstein X-ray satellites. The operation of the Space Telescope will be the joint responsibility of the institute and of NASA's Goddard Space Flight Center in Greenbelt, Md. The Goddard center will have direct control of the satellite and will serve as the collection point for the data transmitted back to the earth.
The European Space Agency (ESA) is covering approximately 15 percent of the cost of the Space Telescope and will have an independent data-analysis center at the headquarters of the European Southern Observatory in Munich. The ESA is supplying the solar panels for powering the observatory, a high-resolution faint-object camera for the instrument section and a number of scientists and technicians for the staff of the Space Telescope Science Institute. In return European astronomers will be entitled to 15 percent of the observing time. Astronomers from other parts of the world will also work with the telescope, making it a truly international observatory.
The first astronomical observations from space were made in the late 1940's with captured German V-2 rockets. Some of these early liquid-fuel rockets were brought to the U.S. after World War II and were used to send various scientific instruments far above the atmosphere for several minutes of observation. Smaller solid-fuel rockets were later developed specifically for scientific research; they typically lifted a payload of about 100 pounds to a maximum altitude of 100 miles, giving an observation time of a few minutes above the obscuring layers of the atmosphere. The subsequent development of lightweight, solid-state electronic devices has made it possible to build increasingly complex and capable scientific instruments for such missions without prohibitive increases in the power needed to lift them.
The first application of the high-altitude technology was to the study of the sun. In 1946 a rocket-launched spectrometer developed by workers at the U.S. Naval Research Laboratory obtained the first ultraviolet solar spectrogram, revealing absorption features not previously detected in the radiation from any celestial object. It was not until 1957 that ultraviolet radiation from a star was recorded. The spectrographic resolution of this early measurement was quite coarse, with a measuring bandwidth of several tens of nanometers. The early rockets could not be aimed accurately; they rotated freely in space and so could not give the long exposure needed for a precise measurement of the faint radiation from a distant star. In the 1960's techniques were developed for pointing rocket-borne instruments at a star, utilizing small gyroscopes to provide an inertial reference system. As a result stellar spectrograms were recorded with a measuring bandwidth of about a tenth of a nanometer. This achievement marked the beginning of active research on many aspects of stellar atmospheres and interstellar matter.
Meanwhile another group of astronomers employed balloons to lift optical telescopes to altitudes of about 20 miles, above the densest part of the atmosphere. In the late 1950's a 12-inch telescope of this type, named Stratoscope I, obtained extraordinarily sharp pictures of the sun. In the following decade its successor, the 36-inch Stratoscope II, made several photographs of planets and star systems with a resolution close to a tenth of an arc-second.
An artificial earth satellite, which can operate in orbit for years, offers a much better platform for mounting an optical telescope than either a suborbital rocket or a balloon. As aerospace technology has progressed satellites have become the primary vehicles for extraterrestrial astronomy.
With satellites as with the earlier rockets and balloons the first observations made were of the sun. The process of finding an object in the sky and pointing a telescope at it is much easier with the sun than with a more distant star. Beginning in the 1960's NASA built and operated a series of Orbiting Solar Observatories, equipped with various instruments for studying the solar atmosphere.
The first NASA satellites designed for stellar observations were named the Orbiting Astronomical Observatories. Two satellites of this type were operated successfully, one from 1968 to 1973 and the other from 1972 to 1981. Both of them were used mainly for analyzing ultraviolet radiation from stars. The first one had a fairly low spectrographic resolution: its measuring bandwidth was 1.2 nanometers. The second, named Copernicus, was far superior in this respect: its measuring bandwidth was .005 nanometer. The development of precise guidance systems for such satellites was a major technological achievement. The Copernicus telescope, which had a mirror 32 inches in diameter, could stay pointed toward a star for several minutes with a maximum deviation of about .02 arc-second.
The two Orbiting Astronomical Observatories yielded a wealth of data. For example, observations made with the Copernicus satellite showed that much of the hydrogen in interstellar clouds is in the form of molecules rather than individual atoms.
COLOR-CODED TOPOGRAPHIC MAPS of the surface of the primary mirror were made on the screen of a computer-graphics terminal as an aid in determining the corrective action needed for each of the 24 cycles in the final, eight-month polishing process. The maps were based on precise interferometric measurements of the shape of the surface. The two maps shown were made at the start and the finish of the computer-controlled polishing process. The white areas represent the average surface of the mirror; the dark blue and dark red areas correspond respectively to highs and lows. At the start deviations from the prescribed shape were as great as 100 millionths of an inch; at the finish the maximum deviation was less than a millionth of an inch over most of the surface. The finished primary is the finest large astronomical mirror ever made. According to Perkin-Elmer, ``so nearly perfect is the surface that if the mirror were scaled up to the width of the continental United States, no hill or valley would depart from the mean surface by more than about 2 1/2 inches.''
Moreover, many oxygen atoms in the regions between the clouds were found to be highly ionized, indicating that the gas between the clouds is very hot: on the order of a million degrees Kelvin. The satellite data also showed that the cosmic ratio of atoms of deuterium, or heavy hydrogen, to atoms of ordinary hydrogen is about one to 100,000. According to certain cosmological theories, this measurement supports the view that the universe will continue to expand forever.
The most recent optical telescope in space is the International Ultraviolet Explorer, a satellite developed jointly by NASA, the ESA and the British Science Research Council; it has been measuring the ultraviolet spectrum of comparatively faint objects since 1978. Although the performance of this instrument is limited by the size of its mirror (which is 18 inches in diameter), it has been particularly effective in obtaining ultraviolet spectrograms of galactic nuclei and in analyzing the interstellar gas in remote parts of our galaxy.
The concept of a much larger space telescope has evolved slowly over the past two decades. The first official notice of such a project appeared in 1962 in the report of a group of scientists organized for NASA by the National Academy of Sciences to study the future of space science. The group recommended the development of a large space telescope as a logical long-range goal of the U.S. space-science program. The recommendation was repeated by a similar study group in 1965. Soon afterward the National Academy established a committee chaired by one of us (Spitzer) to define the scientific objectives of a proposed space telescope with an aperture of approximately three meters. The report of this group was issued in 1969. In spite of the many advantages cited for such a large space telescope, most astronomers were simply too busy at the time to take an active part in promoting its development. Ground-based astronomy had entered an exciting ``golden era'' with the discovery of phenomena such as quasars, the cosmic microwave background radiation and pulsating neutron stars, and few people were prepared to devote the many years of effort needed to develop a facility as complex and costly as a large space telescope.
In 1972 another committee of the National Academy of Sciences, chaired by Jesse L. Greenstein of the California Institute of Technology, reviewed the needs and priorities of astronomy in the 1970's and again drew attention to the capabilities of a large space telescope. Although the nature and cost of such a device were then only partially defined, it was viewed as a realistic and desirable long-range goal.
Meanwhile NASA had assembled a small group of astronomers under the direction of Nancy G. Roman to provide scientific guidance for the space-telescope feasibility studies then being done at Goddard and at the George C. Marshall Space Flight Center in Huntsville, Ala. Representatives of academic institutions, NASA research centers and industrial contractors assisted in the initial effort.
In 1973 NASA selected a group of scientists from several academic institutions to help establish the basic design of the telescope and its instruments. The group worked with NASA scientists and engineers to determine what objectives for the telescope were feasible and which of them should be given priority. The main scientific guidance was provided by a 12-member working group (on which both of us served) chaired by C. R. O'Dell of the University of Chicago. In order to head the scientific effort for the still unfunded Space Telescope project O'Dell left his positions as professor and chairman of the astronomy department at Chicago and as director of the Yerkes Observatory.
In 1977 NASA selected a new group of 60 scientists from 38 institutions to participate in the design and development of the proposed observatory. The scientific direction of this effort is again guided by a science working group headed by O'Dell; the current membership of the working group includes key NASA employees, the principal investigators responsible for the initial scientific instruments, several interdisciplinary scientists (including Bahcall) and specialists in data handling, spacecraft operations and telescope optics.
ATMOSPHERIC ABSORPTION of electromagnetic radiation limits ground-based optical astronomy primarily to the narrow spectral band corresponding to visible light. Radiation in the flanking ultraviolet and infrared regions is almost totally blocked. The upper edge of the gray areas indicates the boundary where the intensity of the radiation at each wavelength is reduced to half its original value. A nanometer is a billionth of a meter, or 10 angstrom units.
The Space Telescope program almost didn't happen. Between 1974 and 1978 the project was repeatedly in danger of being canceled or postponed indefinitely as a result of congressional and executive budgetary reviews. After an intensive lobbying effort, joined not only by hundreds of astronomers but also by many interested scientists in other fields, construction was finally authorized in 1977. The program survived its first appropriations test in Congress in 1978, and since then it has consistently met with a sympathetic and informed response on Capitol Hill.
SPACE SHUTTLE will carry the Space Telescope to an altitude of approximately 500 kilometers (300 miles) and then release it into orbit with the aid of a mechanical arm. The solar-power panels, communications antennas and aperture door, which will be stowed while the satellite is being carried in the shuttle's cargo bay, will be deployed by the satellite after its release. The telescope will be visited by the shuttle for maintenance, repair and replacement of parts. Every five years or so the entire satellite will be returned to the earth for refurbishment.
By the time the Space Telescope was formally approved detailed NASA studies had led to a comprehensive design, which is being followed for the most part in the actual construction of the observatory. The telescope itself consists of two hyperboloidal reflecting surfaces: the 94-inch concave primary mirror and a much smaller convex secondary mirror mounted about 16 feet in front of the primary. Light striking the primary mirror is reflected to the secondary, where it is directed through a hole in the center of the primary; the image comes to a focus several feet behind the primary. The telescope is described as a Ritchey-Chrétien type of Cassegrain optical system.
The scientific instruments that detect and measure the radiation concentrated in the focal plane are installed in an array of boxes mounted behind the primary mirror. Four of the boxes are aligned parallel to the optical axis of the telescope and four are arranged radially around the axis. Of the four radial boxes three house the telescope's fine-guidance system. The tube of the telescope extends more than 10 feet in front of the secondary in order to shield the optical system from stray light, most of which is direct light from the sun and scattered sunlight from the earth and the moon. A system of internal baffles provides additional shielding. Electronic equipment and other devices are housed in a toroidal section surrounding the telescope tube at its base. Two panels of solar cells for powering the equipment and two dish-shaped radio antennas for communicating with the earth extend from the midsection. The cylindrical body of the satellite is about 42 feet long and 14 feet in diameter.
The most remarkable feature of the Space Telescope will be the unprecedented quality of the images formed at its focal plane. The optical surfaces will be as nearly perfect as modern technology can make them: the average deviation of the two reflecting surfaces from their ideal contour will not exceed 10 nanometers. To avoid thermal distortions the mirrors are made of fused silica glass with an extremely low coefficient of thermal expansion. In addition they will be maintained thermostatically at a nearly constant temperature while they are in space. The position of the two mirrors with respect to each other and to the focal surface will be adjustable by remote control to yield the sharpest images possible. The fine-guidance system, which will take a fix on stellar images in the outer part of the telescope's field of view, is expected to be able to hold the optical axis steady to within .01 arc-second for as long as 10 hours. (Internal reaction wheels will serve to aim the telescope and hold it steady; commanding such a wheel to rotate faster in one direction will cause the entire telescope to turn in the opposite direction.)
Six major scientific instruments are scheduled to be included in the Space Telescope's instrument section from the time it is launched through its first few years of operation. The first five are called the wide-field/planetary camera, the faint-object camera, the faint-object spectrograph, the high-resolution spectrograph and the high-speed photometer. In addition the fine-guidance system will give the telescope an astrometric capability, that is, an ability to measure the exact position of stars. Although the two mirrors will have a high reflection efficiency for radiation at all wavelengths in the optical region of the spectrum, no infrared-sensitive instrument will be included in the initial stage. Nevertheless, all aspects of the observatory are planned to be consistent with the possible future inclusion of an instrument sensitive to radiation with wavelengths as long as a millimeter.
The entrance apertures of the four axially mounted instruments are at the focal plane of the telescope. There the total field of view, which measures 28 arc-minutes in angular units, is almost half a meter in linear diameter; the resulting scale of the image at the focal plane is 3.58 arc-seconds per millimeter. With suitable pointing commands the image of any object in the field of view can be directed toward any one of the four axial instruments or toward the fifth, radially mounted one. Each instrument is designed so that it can be removed in orbit and a new instrument installed in its place by a space-suited astronaut operating from the space shuttle.
An on-board computer, external to the scientific instruments, will control the operation of the observatory and handle the flow of data. The computer will be reprogrammable, making it possible to modify the procedures as experience is gained with the instruments. Astronomers and spacecraft controllers will communicate with the Space Telescope by means of the NASA Tracking and Data Relay Satellite System. All data will be relayed back to the earth through this system also, for delivery ultimately to the Space Telescope Science Institute.
The principal investigators responsible for developing the initial set of instruments were chosen after intense competition. By the time the satellite is launched each of these investigators and his colleagues will have spent more than eight years building a general-purpose instrument for the potential use of all astronomers. In recognition of this effort each principal investigator and his team will be awarded more than a month of observing time.
INTERNAL COMPONENTS are drawn in black and external components in color in this overall perspective view of the Space Telescope in its deployed configuration. The cylindrical body of the satellite is approximately 42 feet long and 14 feet in diameter. The scientific instruments are designed so that they can be replaced in orbit by a space-suited astronaut operating from the space shuttle.
The principal investigator for the wide-field/planetary camera is James A. Westphal of Cal Tech. This instrument, as its name suggests, can be operated in either of two modes: as a wide-field camera or as a higher-resolution camera suitable for, among other things, planetary observations. In each mode the detection system consists of four charge-coupled devices (CCD's): microelectronic silicon ``chips'' that convert a pattern of incident light into a sequence of electrical signals. Each chip is a square measuring almost half an inch on a side and is subdivided into an array of pixels, or individual picture elements, with 800 pixels on a side. A single chip therefore has a total of 640,000 pixels, and the four part mosaic image formed by a set of four CCD's has more than 2.5 million pixels. Each pixel yields an electrical signal proportional to the number of photons, or quanta of electromagnetic radiation, reaching it during an exposure.
The wide field/planetary camera is mounted on the side of the telescope that will generally be kept away from the sun. Incoming light passing along the optical axis of the telescope is directed outward at a right angle by means of a flat ``pick-off'' mirror held by a rigid arm at a 45-degree angle to the optical axis. The diagonal mirror diverts only the central part of the incoming beam; the rest of the light passes around the mirror to the other instruments.
In the wide field mode the camera has a square field of view 2.67 arc-minutes on a side, the largest field of any of the instruments. Each pixel in this mode subtends an angle of .1 arc-second. In a sense the wide field camera compromises the angular resolution of the telescope in order to provide a field of view large enough for the study of extended sources such as planetary nebulas, galaxies and clusters of galaxies. Even so, the field of view is much smaller than the field that can be recorded on a photographic plate by a ground-based telescope. In the Space Telescope the field is limited by the size of the microelectronic detectors available for remotely acquiring, storing and digitizing pictures. The CCD's for the wide field/planetary camera, which are being supplied by Texas Instruments, Inc., have more pixels than any other CCD's used for astronomical purposes.
In the planetary mode the square field of view of the camera covers an area of the sky about a fifth as large as it does in the wide field mode; the field in the planetary mode measures 68.7 arc-seconds on a side, and an individual pixel subtends an angle of .043 arc-second. The planetary camera takes advantage of almost the full resolution of the optical system while providing a field of view that is more than adequate for full disk images of the planets. The high sensitivity of the CCD detection system makes possible the short exposure time required for certain planetary observations. The planetary mode will also be employed by many observers for high resolution studies of extended galactic and extragalactic objects.
The wide field/planetary camera is unique among the Space Telescope's instruments in several respects. It will gather by far the greatest number of bits of information: more than 30 million bits per picture. The spectral response of the detector will also be the widest available with any of the telescope's instruments: the camera will be sensitive to wavelengths ranging from 115 nanometers in the far-ultraviolet region to 1,100 nanometers in the near infrared. The wide spectral coverage is made possible by coating the CCD's with an organic phosphor, called Coronene, that converts photons of ultraviolet radiation into photons of visible light, which the silicon sensors can detect. The excellent response at the red end of the visible band is attributable to the natural sensitivity of the CCD's.
The CCD's used in both the wide field mode and the planetary mode have a low level of background electrical ``noise'' and hence are well suited for making pictures of faint sources. Part of the noise in such a device is thermal, and it will be reduced by cooling the detector elements thermoelectrically to about -95 degrees Celsius. The heat generated by the cooling system will be dissipated by a radiator that will form part of the outside surface of the satellite.
The incoming light to the instrument can be directed onto either the four CCD's of the wide field camera or the four CCD's of the planetary camera by means of a pyramidal mirror that can be rotated by 45 degrees about its axis, thereby allowing two essentially independent optical systems to be housed in one instrument compartment. Any of 48 filters can be inserted into the optical path. Thus the wide field/planetary camera is an extremely versatile instrument that will serve a broad range of astronomical purposes. We shall mention here just two of the many investigations that will be undertaken with this instrument.
The camera will be employed in both modes to make a series of images of certain nearby stars to see if they have planetary companions. The 10 or so stars selected for the study have been chosen because they all have a large proper motion (that is, motion across the sky). If any of the stars does have a planetary system, it may be possible, given the extraordinary resolution and accurate guidance of the Space Telescope, to detect periodic ``wobbles'' in the path of the star caused by the gravitational attraction of an unseen companion. The measurements are difficult ones, but the Space Telescope may finally resolve the long standing question of whether there are planetary systems similar to the solar system among the nearby stars.
OPTICAL PATH in the Space Telescope is said to be folded: light from the concave primary mirror is reflected from the convex secondary mirror and passes through a hole in the center of the primary before coming to a focus at the image plane in the instrument section several feet behind the primary. Technically the telescope is described as a Ritchey-Chrétien type of Cassegrain optical system.
Quasars are the most distant and the most energetic objects known in the universe. Each of these compact sources emits on the order of 100 times as much energy as a bright galaxy made up of 10 billion stars. Several competing theories have been put forward to explain how a quasar produces such an enormous amount of energy in such a small space, but some crucial observational tests required to settle the matter are not feasible with ground-based instruments. Some of the theories are based on the idea that quasars are ``sick'' galaxies; in other words, the quasars are supposed to represent a transient, disease-like stage in the evolution of an otherwise normal galaxy. To test these theories high-resolution images of quasars will be obtained with the wide-field/planetary camera to determine whether the bright objects that appear as point sources from the earth are surrounded by the fainter, more diffuse light of a galaxy. It should even be possible to tell whether the quasar stage is a disease of young galaxies or of old ones. This fundamental question is currently unanswerable because of the fuzziness of the images obtained with ground-based instruments.
The faint-object camera that will be supplied by the ESA is one of the four axially mounted instruments. The primary purpose of this second camera is to exploit the full optical power of the Space Telescope. It will detect the faintest objects visible with the telescope and will record images having the highest angular resolution attainable with the optical system. The project scientist for the faint-object camera is F. Macchetto of the ESA.
The faint-object camera is complementary in several ways to the wide- field/planetary camera. The faint-object camera will have a higher spatial resolution, whereas the wide-field/planetary camera will have a larger field of view. In the spectral region between 120 and 400 nanometers the faint-object camera will acquire an image more rapidly than the wide-field/planetary camera will. In the longer-wavelength, redward regions of the spectrum, however, the wide-field/planetary camera will be faster. In addition to forming images the faint-object camera will be able to determine the polarization of the detected radiation and to make spectroscopic measurements of both point objects and extended objects. The two cameras are not redundant, but they are designed to be sufficiently similar in function to ensure that an operable camera of some kind will be among the initial instruments even if a camera were to fail in orbit.
In the faint-object camera two similar but independent optical systems are provided to form an image of a point source. One system has a very small, square field of view, measuring 11 arc-seconds on a side; it has a pixel size of only .022 arc-second. The other system has a square field of view 22 arc-seconds on a side and a pixel size of .044 arc-second. In each system the detector consists of an image-intensifying device similar to the light-sensitive cathode-ray tube in a television camera. Unlike the CCD's in the wide-field/planetary camera, a detector of this kind counts individual photons.
INCOMING LIGHT is routed in different directions by an array of small ``pick-off'' mirrors positioned near the center of the Space Telescope's scientific-instrument section behind the primary mirror. The diamond-shaped flat mirror mounted diagonally on the optical axis directs light outward to the radially mounted wide-field/planetary camera. The three arc-shaped flat mirrors arranged around the outside of the incoming beam send light to the three fine-guidance sensors, which are also radially mounted. The light that bypasses these four mirrors comes to a focus at an image plane at the entrance apertures near at the front of the four axially mounted instrument boxes. The projections of the pick-off mirrors on this focal plane are shown in dark gray in the plan view at the bottom. Because the incoming beam is interrupted by the pick-off mirrors well in advance of the focal plane the areas blocked by the mirrors are slightly enlarged; the additional vignetted zones are represented by the light gray bands outlining the projected mirror zones. At the focal plane the field of view is 28 arc-minutes in angular diameter. The wide-field/planetary camera views a square region about three arc-minutes on a side in the center of the field. The remainder of the field out to a radius of about nine arc-minutes is divided into quadrants, each of which is viewed by one of the four axially mounted instruments. The outermost part of the field, roughly between nine and 14 arcminutes from the optical axis, is sampled by the fine-guidance system, which is designed not only to point the telescope but also to make precise measurements of the position of stars.
The faint-object camera is designed so that each point-source image produced by the telescope is sampled by several pixels. Hence it will be the instrument of choice when the highest possible resolution and the maximum contrast against the background sky are required. The camera will also be able to carry out spectroscopic and polarimetric studies of comparatively faint objects. In addition the camera will be able to view extremely narrow fields with an even smaller pixel size (approximately .007 arc-second).
The scientific tasks of the wide-field/ planetary camera and the faint-object camera are expected to overlap. Depending on the specific resolution, field of view and spectral region required, an observer may choose to work with one camera or the other. We shall mention here only one type of observation for which the faint-object camera should be particularly suited.
Globular clusters are spherical collections of millions of stars that can be seen from the ground on a clear night with a small telescope or even with binoculars. Because all the stars in a cluster are at approximately the same distance from the solar system one can test theoretical models of stellar evolution simply by counting the stars of various types in a cluster. The standard theory predicts that each globular cluster should include between about 10,000 and 100,000 of the stars called white dwarfs. These compact objects represent the last stage in the evolution of stars that have exhausted their nuclear fuels, cooled and collapsed. Because white dwarfs are very faint they cannot be seen at the great distances of the globular clusters with ground-based instruments. The Space Telescope's faint-object camera, however, should be able to detect many white dwarfs in globular clusters. By studying their properties it will be possible to learn much more about the evolution of stars.
The Space Telescope will have two spectrographs: optical devices that divide the incoming light from an astronomical source into separate beams according to wavelength. In spectroscopy resolution is usually defined as the ratio of the wavelength of the incoming light to the smallest separation that can be measured between two wavelengths. One of the two spectrographs on board the observatory, the faint-object spectrograph, will be able to observe faint stellar objects with a spectrographic resolution of 1,000 (equivalent to a measuring bandwidth of 1/1,000th of the wavelength). The principal investigator for this instrument is Richard J. Harms of the University of California at San Diego.
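At a wavelength of 500 nanometers, for example, a resolution of 1,000 corresponds to a measuring bandwidth of half a nanometer; the high-resolution spectrograph described below, with a resolution of 20,000, narrows that bandwidth to .025 nanometer.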
The faint-object spectrograph will be equipped with two systems of detectors. Both detectors are devices called Digicons; one is sensitive to red light and the other to blue light and ultraviolet radiation. A Digicon sensor operates on the basis of the photoelectric effect. The incoming light is spread out according to wavelength by a diffraction grating and strikes the surface of a thin photocathode layer deposited on a transparent plate. Light of a particular wavelength reaches a particular position along the photocathode, producing a spray of free electrons known as photoelectrons. A magnetic field focuses the photoelectrons at a point whose position depends on where they emerge from the photocathode and hence on the wavelength of the incident light. The photoelectrons are collected by a linear array of 512 diodes, each of which records the intensity of the incident light at a particular wavelength.
The faint-object spectrograph will be sensitive to radiation ranging in wavelength from about 115 to 800 nanometers. In addition the instrument will have two special features: it will be able to measure the polarization of the incoming light and to detect extremely fast variations (perhaps as brief as a few milliseconds) in the spectrum of radiation emitted by bright sources. Because the investigation of many astronomical problems depends on the spectral analysis of the radiation from extremely faint objects, this instrument is expected to be one of the busiest on the Space Telescope. By measuring the spectra of very distant quasars, for example, it should be possible to study the properties of the universe more than 10 billion years ago, perhaps 85 percent of the way back to the beginning of time (if, as the standard big-bang model of cosmology assumes, time actually had a beginning). Spectrograms of the most distant quasars are expected to indicate the chemical constitution of matter at that early stage in the evolution of the universe.
WIDE-FIELD/PLANETARY CAMERA is one of the instruments scheduled to be included in the Space Telescope during its first few years of operation. The camera is designed to operate in either of two modes. In each mode the detection system consists of a rectangular array of four light-sensitive silicon ``chips'' called charge-coupled devices (CCD's). The incoming light reflected into the radially mounted instrument compartment by the diagonal pick-off mirror can be directed onto either the four CCD's of the wide-field camera or the four CCD's of the higher-resolution planetary camera by means of a pyramidal mirror that can be rotated by 45 degrees about its axis. Any of 48 filters can be inserted into the optical path. The external radiator serves to dissipate the heat generated by the cooling system associated with the detectors.
The investigation of some astronomical questions requires a higher spectrographic resolution than can be attained with the faint-object spectrograph, because the width of many emission and absorption features is narrower than the measuring bandwidth of the instrument. The high-resolution spectrograph will meet this need. Under normal operating conditions it will have a spectrographic resolution of 20,000. Narrow spectral features that might not even be detected with the lower-resolution faint-object spectrograph will be accurately measured, yielding information about the physical conditions under which the radiation was emitted. The high-resolution spectrograph will also have an ultrahigh-resolution mode of operation in which the spectrographic resolution will be improved by an additional factor of five to about 100,000. The principal investigator for the high-resolution spectrograph is John C. Brandt of Goddard.
Of course, there is a price to be paid for the higher resolution of this second spectrograph. Dividing the spectrum into a much larger number of bands in order to measure the flux of photons separately in each band has the effect of decreasing the number of photons detected per band. Thus higher resolution results in lower sensitivity, and the larger quantity of information provided by the high-resolution spectrograph can be obtained only for stars that are some 60 times brighter than those that can be studied with the faint-object spectrograph. This difference amounts to about 4.5 stellar magnitudes. For the ultrahigh-resolution mode the difference in brightness is a factor of more than 300, or the equivalent of about six stellar magnitudes.
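(For readers who want to check these conversions: the magnitude scale is logarithmic, and a brightness ratio r corresponds to a magnitude difference of 2.5 log10(r). A factor of 60 thus works out to roughly 4.4 magnitudes and a factor of 300 to roughly 6.2, consistent with the rounded figures quoted here.)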
The high-resolution spectrograph has six interchangeable diffraction gratings, each of which disperses light of different wavelengths in different directions. A camera mirror or grating then forms an image of the spectrum on the photoelectron-emitting surface of a Digicon sensor. By rotating the carousel on which the gratings are mounted, any one of them can be brought into the optical path of the instrument, making it possible to obtain a spectrographic reading at any wavelength between 110 and 320 nanometers.
This spectrograph with its normal resolution should be able to observe stars as faint as the 13th magnitude, or about six stellar magnitudes fainter than those observed by the Copernicus telescope. The gain in sensitivity over the spectrograph on the International Ultraviolet Explorer is not as great (about four magnitudes), but the spectrographic resolution and the photometric accuracy will be significantly better for the instrument on the Space Telescope.
The power of this instrument should open up a number of interesting new lines of inquiry. For example, the high-resolution spectrograph will make possible the study of interstellar gas at places in our galaxy and other galaxies where it cannot now be observed. Preliminary measurements by the International Ultraviolet Explorer have shown that the gas in the galactic "halo" between the earth and the nearest neighboring galaxy (one of the two Magellanic clouds) includes carbon atoms that have been stripped of three electrons, indicating that the temperature in this region is about 100,000 degrees Kelvin. With the high-resolution spectrograph much more accurate data will be obtainable, perhaps revealing the relation between this gas and the even hotter material detected by Copernicus. Measurements of the way in which the properties of our galaxy vary from place to place will provide much-needed clues to the evolution of the system as a whole.
The high-resolution spectrograph will also be applied to the study of interstellar clouds. Ground-based observations of such clouds are able to detect only a few dark lines in the spectrum, created when the gas of the cloud absorbs radiation from background stars. In many instances each absorption line is split into multiple subfeatures, which can be attributed to separate clouds along the same line of sight. The clouds are moving with somewhat different speeds toward the solar system or away from it, altering the characteristic wavelengths at which they absorb radiation. The splitting of the absorption lines makes it possible to study each cloud separately, provided the spectrographic resolution is high enough. With the high-resolution spectrograph it will be possible to analyze a wide range of ultraviolet absorption features from various atoms and molecules and to determine the physical conditions in each cloud. Our understanding of how such interstellar clouds come together and contract to form stars may depend critically on the results of such studies.
The high-speed photometer, which is being developed by Robert C. Bless and his colleagues at the University of Wisconsin at Madison, is designed to make highly accurate measurements, with an extraordinary temporal resolution, of the intensity of the light from astronomical sources over a wide range of wavelengths. The photometer will be capable of distinguishing events separated in time by only 10 microseconds. Observations of sources that vary over time scales this short are difficult or impossible with ground-based instruments because of fluctuations in the atmosphere.
RANGE OF WAVELENGTHS potentially accessible to the Space Telescope extends from the far-ultraviolet part of the spectrum (left) to the far-infrared (right). For comparison the spectral bands that can be observed with the unaided human eye and with a large ground-based telescope (in this case the 200-inch Hale telescope on Palomar Mountain) under normal observing conditions are also indicated. The vertical scale gives the relative brightness (in terms of stellar magnitude) of the faintest celestial object that can be imaged; an increase of one unit in stellar magnitude corresponds to a decrease in apparent brightness by a factor of about 2.5.
The high-speed photometer is the simplest of the instruments in the initial group on board the Space Telescope. It has no moving parts and relies entirely on the fine pointing of the spacecraft to direct the light from an astronomical target onto one of its 100 or so combinations of spectral filters and entrance apertures. The photometer has four independent, magnetically focused detectors, called image dissectors; they resemble photomultiplier tubes in operation, except that they can be made to respond only to photoelectrons coming from the small region of the photocathode on which the light is focused. Each image dissector is mounted behind a plate that holds an assortment of filters and entrance apertures.
The overall spectral response of the image dissectors extends from about 115 to 650 nanometers. The instrument is also equipped with a red-sensitive photomultiplier tube and a system for measuring the polarization of ultraviolet radiation with the aid of one of the image dissectors.
The high-speed photometer will be capable in principle of detecting the smallest objects observable with any of the instruments on the Space Telescope. The ability to distinguish events that are separated in time by only 10 microseconds implies (according to the special theory of relativity) that variations in the light output of a star as small as three kilometers across could be detected. This is an extraordinarily small linear dimension for a star; indeed, it is very close to the diameter the sun would have if it were compressed to such a high density that it formed a black hole. Accordingly, one program scheduled for the high-speed photometer is to search for extremely fast variations in astronomical systems that are suspected of harboring a black hole, in the hope of finding further evidence of these elusive entities. The high-speed photometer will also be used for less exotic observations, including an attempt to identify optically faint objects observed mainly at radio or X-ray wavelengths.
Under the best observing conditions ground-based measurements of the position of any star are limited by the size of the star's blurry "seeing disk," which is generally at least one arc-second in diameter. In determining the angular distance between two stars an uncertainty of about .1 arc-second, or a tenth of the diameter of the stellar image, is typical for a single observation. By averaging many exposures the uncertainty can be reduced to about .01 arc-second. Random errors result in corresponding uncertainties in the determination of a star's parallax. (Parallax is the average angular change in the apparent position of a star resulting from the revolution of the earth about the sun.) The determination of distance beyond the solar system is based largely on measurements of the parallax of comparatively nearby stars. Since the measurement of a stellar image with the Space Telescope will be accurate to within about .002 arc-second, the determination of stellar position, and hence of stellar parallax, should be about five times better than it is with ground-based telescopes. The fivefold improvement in the accuracy of stellar-parallax measurements is of fundamental importance to all of stellar astronomy. For example, knowing the precise distance of certain comparatively young star clusters in our galaxy will enable astronomers to determine the absolute brightness of the stars in the clusters. This knowledge in turn will make it possible to extend the calibrated distance scale, which is based on the comparison of apparent brightness and absolute brightness, to stars that are much farther away.
The Space Telescope has not been equipped with a separate instrument for astrometry because the fine-guidance system will be accurate enough to make the necessary measurements of the angular distance between stars. The leader of the team for astrometry is William H. Jefferys of the University of Texas at Austin.
Observing time on the Space Telescope will be allocated to astronomers from all parts of the world by the Space Telescope Science Institute, which will be responsible for facilitating the most effective scientific use of the powerful new observatory. To provide visiting astronomers with the most efficient operating systems, to assist and advise observers on the optimum use of the various instruments and to help create a stimulating atmosphere for research with the Space Telescope, outstanding astronomers from the U.S. and abroad are being recruited to serve on the institute's staff. It is expected that half of their time will be devoted to the diverse tasks of the institute, with the other half available for their own research programs. The new institute will also make recommendations to NASA on broad policy matters pertaining to the Space Telescope. The involvement of outside astronomers in determining the policies of the institute is being ensured through a number of external committees.
The institute will solicit outside proposals for specific observing programs for the Space Telescope. With the aid of peer-review groups the institute will evaluate the proposals and select the most promising programs for inclusion in the telescope's schedule. In many cases the programs selected will be combined with those submitted by the original scientific-instrument teams, by other members of the Space Telescope working group and by the European groups. The final scheduling and the preparation of a complete list of commands for the operating computer will be done by NASA, which will retain responsibility for the day-to-day operation of the observatory.
Astronomers on the staff of the institute will advise outside astronomers on the formulation of observing plans. Other staff astronomers will be responsible for maintaining the calibration of the instruments and for the initial processing of data. Computer specialists will help to develop suitable programs for use by the astronomers in analyzing the data. Finally, the Space Telescope Science Institute will assist astronomers in communicating the results of their studies to other scientists, to NASA, to Congress and to the public.
The Space Telescope will help to solve many outstanding astronomical puzzles. The greatest excitement, however, will come when the pictures returned from the satellite reveal things no one in this generation of astronomers has dreamed of, phenomena that only the next generation will be privileged to understand.
TENFOLD IMPROVEMENT in spatial resolution expected with the Space Telescope will enable astronomers to make more detailed observations of extended objects. In this simulation the picture at the top represents the image of a distant spiral galaxy obtained with the 200-inch Hale telescope and the picture at the bottom represents the corresponding image obtained with the Space Telescope. Actually the picture at the bottom is a digitized version of a photograph of a nearby galaxy made with the 200-inch telescope and the picture at the top is a blurred version of the same image made by defocusing the original by an amount proportional to the difference in the effective resolution obtainable with the two instruments. The simulation was prepared by John L. Tonry of the Institute for Advanced Study.
Extras: Plotting in MATLAB
One of the most important functions in MATLAB is the plot function. The plot command also happens to be one of the easiest functions to learn how to use. The basic syntax of the function call is shown below. This code can be entered in the MATLAB command window or run from an m-file.

plot(x,y)
This command will plot the elements of vector y (on the vertical axis of a figure) versus the elements of the vector x (on the horizontal axis of the figure). The default is that each time the plot command is issued, the current figure will be erased; we will discuss how to override this below. Consider the following simple, linear function.
If we wished to plot this function, we could create an m-file with the following code to generate the basic plot shown below.
x = 0:0.1:100; y = 3*x; plot(x,y)
One thing to keep in mind when using the plot command is that the vectors x and y must be the same length. The other dimension can vary. MATLAB can plot a 1 x n vector versus an n x 1 vector, or a 1 x n vector versus a 2 x n matrix (you will generate two lines), as long as n is the same for both vectors.
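For example, a minimal sketch (with arbitrary data) of plotting a 1 x n vector against a 2 x n matrix, which produces one line for each row of the matrix:

x = 0:0.1:10; % 1 x 101 row vector
Y = [sin(x); cos(x)]; % 2 x 101 matrix
plot(x,Y) % two lines are drawn, one for each row of Y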
The plot command can also be used with just one input vector. In that case the vector columns are plotted versus their indices (the vector [1:1:n] will be used for the horizontal axis). If the input vector contains complex numbers, MATLAB plots the real part of each element (on the horizontal axis) versus the imaginary part (on the vertical axis).
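For instance (a small sketch with made-up values):

y = [1 4 9 16 25];
plot(y) % plotted against the indices 1 through 5
z = [1+2i, 3-1i, -2+0.5i];
plot(z) % real parts on the horizontal axis, imaginary parts on the vertical axis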
The color, point marker, and line style can be changed on a plot by adding a third parameter (in single quotes) to the plot command. For example, to plot the above function as a red, dotted line, change the m-file as follows to generate the figure shown below.
x = 0:0.1:100; y = 3*x; plot(x,y,'r:')
The third input consists of one to three characters which specify a color, point marker type, and/or line style. The list of colors, point markers, and line styles are summarized below.
Colors          Point markers            Line styles
y  yellow       .  point                 -   solid
m  magenta      o  circle                :   dotted
c  cyan         x  x-mark                -.  dashdot
r  red          +  plus                  --  dashed
g  green        *  star
b  blue         s  square
w  white        d  diamond
k  black        v  triangle (down)
                ^  triangle (up)
                <  triangle (left)
                >  triangle (right)
                p  pentagram
                h  hexagram
You can also plot more than one function on the same figure. Let's say you want to plot a sine wave and cosine wave on the same set of axes, using a different color and style for each. The following m-file will plot a sine wave and cosine wave, with the sine wave as a solid red line and the cosine wave as a series of green x's.
x = linspace(0,2*pi,50); y = sin(x); z = cos(x); plot(x,y,'r', x,z,'gx')
By adding more sets of parameters to plot, you can plot as many different data sets on the same figure as you want. When plotting many things on the same graph it is useful to differentiate the different data sets based on color and point marker. This same effect can also be achieved using the hold on and hold off commands. The same plot shown above could be generated using the following code.
x = linspace(0,2*pi,50);
y = sin(x);
plot(x,y,'r')
z = cos(x);
hold on
plot(x,z,'gx')
hold off
Always remember that if you use the hold on command all plots from then on will be generated on one set of axes without erasing the previous plot until the hold off command is issued.
More than one plot can be put in the same figure, each on its own set of axes, using the subplot command. The subplot command allows you to separate the figure into as many plots as desired, and put them all in one figure. To use this command, the following line of code is entered into the MATLAB command window or run from an m-file.

subplot(m,n,p)
This command splits the figure into a matrix of m rows and n columns, thereby creating m*n plots on one figure. The p'th plot is selected as the currently active plot. For instance, suppose you want to see a sine wave, cosine wave, and tangent wave plotted on the same figure, but not on the same axes. The following code will accomplish this.
x = linspace(0,2*pi,50);
y = sin(x);
z = cos(x);
w = tan(x);
subplot(2,2,1)
plot(x,y)
subplot(2,2,2)
plot(x,z)
subplot(2,2,3)
plot(x,w)
As you can see, there are only three plots, even though we created a 2 x 2 matrix of 4 subplots. We did this to show that you do not have to fill all of the subplots you have created, but MATLAB will leave a spot for every position in the matrix. We could have easily made another plot using the subplot(2,2,4) command. The subplots are arranged in the same manner as you would read a book: the first subplot is in the top left corner and the next is to its right. When all the columns in that row are filled, the left-most column on the next row down is filled (all of this assumes that you fill your subplots in order, i.e. 1, 2, 3, ...).
One thing to note about the subplot command is that every plot command issued later will place the plot in whichever subplot position was last used, erasing the plot that was previously in it. For example, in the m-file above, if a plot command was issued later, it would be plotted in the third position in the subplot, erasing the tangent plot. To avoid this problem, the figure should be cleared (using clf), or a new figure should be specified (using figure).
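For example, to add a new full-size plot without disturbing the subplots above, a new figure can be opened first (a short sketch reusing the variables defined above):

figure % open a new figure window
plot(x,y) % this plot appears in the new window instead of replacing the tangent subplot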
Now that you have found different ways to plot functions, you can customize your plots. One of the most important ways to do this is with the axis command. The axis command changes the axis of the plot shown, so only the part of the axis that is desirable is displayed. The axis command is used by entering the following code right after the plot command (or any command that has a plot as an output).
axis([xmin, xmax, ymin, ymax])
For instance, suppose we want to look at a plot of the function y=exp(5t)-1. The following lines of code will accomplish this.
clf %if needed, this clears the previous subplot
t=0:0.01:5;
y=exp(5*t)-1;
plot(t,y)
As you can see, the plot goes to infinity. Looking at the vertical axis (scale: 8e10), it is apparent that the function gets large very quickly. To get a better idea of the initial behavior of the function, let's resize the axes. Enter the following command into the MATLAB command window to get a plot focused on the first second of the function.
axis([0, 1, 0, 50])
This plot is more useful: you can now clearly see what is going on as the function moves toward infinity. When using the subplot command, the axes can be changed for each subplot by issuing an axis command before the next subplot command. There are more uses of the axis command which you can see if you type help axis or doc axis in the MATLAB command window.
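As a brief sketch of setting axis limits per subplot (reusing the t and y defined above):

subplot(2,1,1)
plot(t,y)
axis([0, 1, 0, 50]) % these limits apply only to the first subplot
subplot(2,1,2)
plot(t,y)
axis([0, 5, 0, 8e10]) % the second subplot keeps its own limits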
Another thing that may be important for your plots is labeling. You can give your plot a title (with the title command), x-axis label (with the xlabel command), y-axis label (with the ylabel command), and put text on the actual plot. All of the above commands are issued after the actual plot command has been issued.
A title will be placed, centered, above the plot with the command: title('title string'). The x-axis label is issued with the command xlabel('x-axis string'), while the y-axis label is issued with the command ylabel('y-axis string').
Furthermore, text can be put on the plot itself in one of two ways: the text command and the gtext command. The first command involves knowing the coordinates of where you want the text string. The syntax of the command is text(xcor,ycor,'textstring'). To use the other command, you do not need to know the exact coordinates. The syntax is gtext('textstring') which provides a set of cross-hairs that you can move to the desired location with your mouse and click on the position where you want the text to be placed.
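For example (the coordinates and strings here are arbitrary):

text(0.5,10,'a label at known coordinates')
gtext('a label placed with the mouse') % cross-hairs appear; click on the figure to position it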
Finally, a legend can be added to the plot to identify various curves. The command is legend('string1','string2','string3'). You can specify as many text strings as there are curves in the plot. The default location of the legend is in the top-right corner, but this can be modified with a parameter called Location.
To demonstrate labeling, let's modify the exponential plot from above. Assuming that you have already changed the axes, copying the following lines of text after the axis command will put all the labels and a legend on the plot.
title('exponential function') xlabel('time (sec)') ylabel('output') legend('exponential curve','Location','SouthEast') text(0.2,35,'unnecessary labeling')
Other commands that can be used with the plot command are listed below (a short example using two of them follows the list):
- figure (opens a new figure to plot on, so the previous figure is preserved)
- close (closes the current figure window)
- loglog (same as plot, except both axes are log base 10 scale)
- semilogx (same as plot, except the x-axis is log base 10 scale)
- semilogy (same as plot, except the y-axis is log base 10 scale)
- plotyy (plots two curves with two axes, one on the right and one on the left)
- grid (adds grid lines to your plot)
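For instance, a logarithmic frequency axis with grid lines can be obtained as follows (a small sketch with made-up data):

f = logspace(-1,2,200); % 200 points from 0.1 to 100
m = 1./sqrt(1+(f/10).^2); % a simple magnitude curve
semilogx(f,m) % log base 10 scale on the horizontal axis
grid % add grid lines to the plot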
In addition to entering commands in the command window to edit your plots, you can also edit plots interactively using the toolbar or drop-down menus of any figure, or by double-clicking on portions of a figure. You can specifically open the Plot Editor using the command plottools or by clicking on the icon shown below located at the right of a figure's toolbar.
The Plot Editor consists of three windows that automatically open and surround the original plot the first time the tool is used. To the left is the Figures Palette, to the right is the Plot Browser, and at the bottom is the Property Editor. A picture of the Plot Editor is shown below. MATLAB will remember the last configuration used; therefore, if any of these windows were closed in the previous session, they will not appear when the Plot Editor is opened. Typing the commands figurepalette, plotbrowser, or propertyeditor in the command window can also open each of these windows independently.
To adjust the axes of the plot for example, select the axes, then change the axis limits which appear in the Property Editor at the bottom of the plot.
The Property Editor can also be used to set the Title, X label, and Y label of the plot, as well as the Colors and line styles of the data.
To add new data to the plot, select the x and y data from the Figures Palette in the left window (use the Ctrl key to select more than one item) and drag them into the plot. Using y2 = 2*exp(5*t)-1;, the following plot results.
Use the Plot Browser on the right side of the plot to select which data should be displayed. If you have sub-plots, you can also select or deselect which ones should be displayed.
Of course this is not a complete account of plotting with MATLAB, but it should give you a nice start.
A chemical reaction is a process in which the identity of at least one substance changes. A chemical equation represents the total chemical change that occurs in a chemical reaction using symbols and chemical formulas for the substances involved. Reactants are the substances that are changed and products are the substances that are produced in a chemical reaction.
The general format for writing a chemical equation is
reactant1 + reactant2 + … → product1 + product2 + …
With the exception of nuclear reactions, the Law of Conservation of Mass (matter is neither created nor destroyed during a chemical reaction) is obeyed in “ordinary” chemical reactions. For this reason a chemical equation must be balanced: the number of atoms of each element must be the same on the reactants side of the reaction arrow as on the products side. Details on balancing chemical equations are found in the units on Stoichiometry and Redox Reactions.
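For example, the combustion of methane is written as CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(g); the equation is balanced because each side contains one carbon atom, four hydrogen atoms, and four oxygen atoms.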
The general format for writing a chemical equation can be written in a short-hand version as
a A + b B + … → c C + d D + …
where the lower case letters are the stoichiometric coefficients needed to balance a specific equation.
The units on Stoichiometry, Redox Reactions, and Acid-Base Chemistry contain additional background reading, example problems, and information on the topics covered in this unit.
Chemists classify chemical reactions in various ways. Often a major classification is based on whether or not the reaction involves oxidation-reduction. A reaction may be classified as redox, in which oxidation and reduction occur, or as nonredox, in which no oxidation or reduction occurs.
A redox reaction can be recognized by observing whether or not the oxidation numbers of any of the elements change during the reaction.
Example Problem: Classify the reactions as either redox or nonredox.
(1) 4 Fe(s) + 3 O2(g) → 2 Fe2O3(s)
(2) NaOH(aq) + HCl(aq) → NaCl(aq) + H2O(l)
(3) Cl2(g) + H2O(l) → HCl(aq) + HClO(aq)
Answer: In equation (1), the iron changes oxidation numbers from 0 to +3 and oxygen changes from 0 to -2. Equation (1) represents a redox reaction. In equation (2), there is no change in oxidation numbers for the elements involved: sodium is +1, oxygen is -2, hydrogen is +1, and chlorine is -1 on both the reactants and products sides. Equation (2) represents a nonredox reaction. In equation (3), the chlorine changes from 0 to -1 in HCl and to +1 in HClO. There is no change in the oxidation numbers of hydrogen (+1 in H2O, HCl, and HClO) and oxygen (-2 in H2O and HClO). Because chlorine is oxidized and reduced, equation (3) represents a redox reaction.
The reaction described in equation (3) is interesting in that an element in one oxidation state undergoes both oxidation and reduction. Such a redox process is known as a disproportionation reaction. The element undergoing disproportionation must have at least three different oxidation states–the initial one in the reactant and one higher plus one lower in the products.
Most simple redox reactions may be classified as combination, decomposition, or single displacement reactions. In a combination reaction two reactants react to give a single product. The general format of the chemical equation is
a A + b B + … → c C
A special case of a combination reaction in which the reactants are only elements in their naturally occurring forms and physical states at the temperature and pressure of the reaction is known as a formation reaction.
Example Problem: Identify which reactions are redox combination reactions.
(4) 6 Li2O(s) + P4O10(g) → 4 Li3PO4(s)
(5) CaO(s) + H2O(l) → Ca(OH)2(s)
(6) S(s) + 3 F2(g) → SF6(g)
(7) ZnS(s) + 2 O2(g) → ZnSO4(s)
(8) SO2(g) + Cl2(g) → SO2Cl2(g)
Answer: All of the reaction are classified as combination reactions because they involved two or more reactants producing a single product. However, redox is occurring only in equations (6), (7), and (8). In equation (6), S is oxidized from 0 to +6 and F is reduced from 0 to -1; in equation (7), S is oxidized from -2 to +6 and O is reduced from 0 to -2; and in equation (8), S is oxidized from +4 to +6 and Cl is reduced from 0 to -1.
In a decomposition reaction a single reactant breaks down to give two or more substances. The general format of the chemical equation is
a A → b B + c C + …
If the decomposition reaction involves oxidation-reduction, the reaction is often called an internal redox reaction because the oxidized and reduced elements originate in the same compound.
Example Problem: Identify which reactions are redox decomposition reactions.
(9) CuSO4⋅5H2O(s) → CuSO4(s) + 5 H2O(g)
(10) SnCl4⋅6H2O(s) → SnO2(s) + 4 HCl(g) + 4 H2O(g)
(11) NH4NO2(s) → N2(g) + 2 H2O(g)
(12) (NH4)2Cr2O7(s) → N2(g) + Cr2O3(s) + 4 H2O(g)
Answer: All of the reactions are classified as decomposition reactions because they involve a single reactant producing two or more substances. However, redox is occurring only in equations (11) and (12). In equation (11), the N in NH4+ is oxidized from -3 to 0 and the N in NO2- is reduced from +3 to 0. In equation (12), the N is oxidized from -3 to 0 and the Cr is reduced from +6 to +3.
In a single displacement reaction the atoms or ions of one reactant replace the atoms or ions in another reactant. Single displacement reactions are also known as displacement, single replacement, and replacement reactions. The general format of the chemical equation is
a A + b BC → c AC + d B
Whether or not a redox single displacement reaction occurs will depend on the relative reducing strengths of A and B.
Example Problem: Identify which reactions are redox single displacement reactions.
(13) 2 Al(s) + Fe2O3(s) → 2 Fe(s) + Al2O3(s)
(14) 2 NaI(aq) + Br2(aq) → 2 NaBr(aq) + I2(aq)
(15) Zn(s) + Cu2+(aq) → Zn2+(aq) + Cu(s)
Answer: All three reactions are redox. Both equations (13) and (14) fit the general format of the single displacement reaction by assigning A as Al, B as Fe, and C as O in equation (13) and A as Br, B as I, and C as Na in equation (14). To classify equation (15) is a little more difficult. The reaction has been represented by a net ionic equation in which the anion has been omitted. If an anion X is added to generate the overall equation, Zn(s) + CuX(aq) → ZnX(aq) + Cu(s), then assigning A as Zn, B as Cu, and C as X shows that this is also a redox single displacement reaction.
In addition to the single redox reactions described above, a redox reaction may be classified as a simple redox electron transfer reaction in which the oxidation numbers of ionic reactants are changed by the direct transfer of electrons from one ion to the other–typically in aqueous solutions. For example
2 Fe3+(aq) + Sn2+(aq) → 2 Fe2+(aq) + Sn4+(aq)
Many redox reactions do not fit into the classifications described above. For example, redox reactions involving oxygen-containing reactants in aqueous acidic or basic solutions such as
3 Cu(s) + 8 HNO3(aq, dil) → 3 Cu(NO3)2(aq) + 2 NO(g) + 4 H2O(l)
Cu(s) + 4 HNO3(aq, conc) → Cu(NO3)2(aq) + 2 NO2(g) + 2 H2O(l)
or combustion reactions in which oxygen combines with more than one element of a single reactant
2 CH3OH(g) + 3 O2(g) → 2 CO2(g) + 4 H2O(l)
These types of reactions are classified as complex redox reactions.
There are several classifications of nonredox reactions–including combination, decomposition, single displacement, and double displacement reactions.
The general format of the chemical equation for a nonredox combination reaction is the same as for a redox combination reaction
a A + b B + … → c C
However, all reactants and the product must be compounds and no changes in oxidation numbers of the elements occur. Usually these reactions involve reactants that are acidic and basic anhydrides.
Example Problem: Identify which reactions are nonredox combination reactions.
(16) 2 Na(s) + Cl2(g) → 2 NaCl(s)
(17) SO3(g) + CaO(s) → CaSO4(s)
(18) SO2(g) + H2O(l) → H2SO3(aq)
Answer: All three equations are combination reactions, but only equations (17) and (18) are nonredox.
The general format of the chemical equation for a nonredox decomposition reaction is the same as for a redox decomposition reaction
a A → b B + c C + …
However, the reactant and all products must be compounds and no changes in oxidation numbers occur. Quite often one of the products formed will be a gas.
Example Problem: Identify which reactions are nonredox decomposition reactions.
(19) NH4HCO3(s) → NH3(g) + CO2(g) + H2O(g)
(20) NH4NO2(s) → N2(g) + 2 H2O(g)
(21) CaCO3(s) → CaO(s) + CO2(g)
Answer: All three equations are decomposition reactions, but only equations (19) and (21) are nonredox.
The general format of the chemical equation for a nonredox single displacement reaction is the same as for a redox single displacement reaction
a A + b BC → c AC + d B
However, there are no changes in the oxidation numbers of the elements during the reaction. Common nonredox single displacement reactions include ligand substitution in complexes and formation of more stable oxygen-containing compounds from less stable oxygen-containing compounds.
Example Problem: Identify which reactions are nonredox single displacement reactions.
(22) [PtCl4]2-(aq) + 2 NH3(aq) → [Pt(NH3)Cl2](s) + 2 Cl-(aq)
(23) Na2CO3(s) + SiO2(s) → Na2SiO3(l) + CO2(g)
(24) 2 AgNO3(aq) + Cu(s) → Cu(NO3)2(aq) + 2 Ag(s)
Answer: All three equations are single displacement reactions, but only equations (22) and (23) are nonredox.
Finally, the last classification of nonredox reactions is that of nonredox double displacement reactions. The general format of the chemical equation is
a AC + b BD → c AD + d BC
with no oxidation or reduction of A, B, C, or D occurring. These reactions are also known as double replacement, “partner” exchange, and metathesis reactions. Usually one or more of the products will be a gas, a precipitate, a weak electrolyte, or water. An important example of a nonredox double displacement reaction is the reaction of an acid with a base under aqueous conditions.
Example Problem: Identify the nonredox double displacement reactions.
(25) CaCO3(s) + 2 HCl(aq) → CaCl2(aq) + H2O(l) + CO2(g)
(26) HCl(aq) + KOH(aq) → KCl(aq) + H2O(l)
(27) AgNO3(aq) + KCl(aq) → AgCl(s) + KNO3(aq)
Answer: All three reactions are nonredox double displacement reactions.
Most experienced chemists can classify a given chemical reaction rather easily and quickly by “inspection” of the formulas of the reactants and products in the chemical equation. The first decision most chemists make is to determine whether or not the reaction involves redox. Based on this decision, the answers to a few more specific questions will readily lead to the reaction classification. These specific questions are based on the general formats of the chemical equations for the different classifications of the reactions described above.
To begin learning what these questions are, you might consider using the online analysis program that is available at http://www.xxx.yyy to classify the reactions given in the Example Problem. The basis of this analysis program is outlined by the flow charts given in Figures (1) and (2). Please do not memorize these flow charts; they are simply a tool to help you learn what to look for and what questions should be asked as you classify a given reaction.
Example Problem: Classify each reaction.
(28) XeF6(s) → XeF4(s) + F2(g)
(29) Zn(s) + 2 AgNO3(aq) → Zn(NO3)2(aq) + Ag(s)
(30) H2SO4(aq) + Ba(OH)2 → BaSO4(s) + 2 H2O(l)
(31) XeF6(s) + RbF(s) → RbXeF7(s)
(32) 2 Cs(s) + I2(g) → 2 CsI(s)
Answer: In equation (28), Xe is reduced from +6 to +4 and some of the F is oxidized from -1 to 0. This reaction would be classified as a redox decomposition reaction. In equation (29), Zn is oxidized from 0 to +2 and Ag is reduced from +1 to 0. One of the reactants is an element and one of the products is an element. This reaction would be classified as a redox single displacement reaction. In equation (30), there is no redox occurring. Both reactants are in the form of aqueous ions, but are not complex ions. This acid-base reaction is classified as nonredox double displacement. In equation (31), there is no redox occurring. Because there is one product formed, this reaction is classified as a nonredox combination reaction. In equation (32), Cs is oxidized from 0 to +1 and I is reduced from 0 to -1. Both reactants are elements and there is only one product formed. The reaction is classified as a (redox) formation reaction.
Try It Out
Classify each reaction.
(33) Na2CO3(s) + SiO2(s) → Na2SiO3(l) + CO2(g)
(34) 2 Mg(NO3)2(s) → 2 Mg(NO2)2(s) + O2(g)
(35) 3 HNO2(aq) → 2 NO(g) + NO3-(aq) + H3O+(aq)
(36) BaCO3(s) → BaO(s) + CO2(g)
(37) 2 Eu2+(aq) + 2 H+(aq) → 2 Eu3+(aq) + H2(g)
(38) [Ag(NH3)2]+(aq) + 2 CN-(aq) → [Ag(CN)2]-(aq) + 2 NH3(aq)
(39) MgO(s) + 2 HCl(aq) → MgCl2(aq) + H2O(l)
(40) CaO(s) + SO2(g) → CaSO3(s)
(41) 2 NO(g) + O2(g) → 2 NO2(g)
(42) N2(g) + 2 O2(g) → 2 NO2(g)